
August 19, 2025 · 39 mins
Hosted by Dr. Sarah Hensley, Specialized Social Psychologist, Attachment Theory Expert, and Founder & CEO of The Love Doc Relationship Coaching Services, with Co-host Raina Butcher, Owner/CEO of Joyful Consulting, LLC.

Welcome to "The Love Doc Podcast" Season II, where Host Dr. Sarah Hensley and her co-host Raina Butcher dive deep into the intricacies of love, attraction, attachment, relationships, and self-awareness. Dr. Hensley brings a wealth of knowledge and experience to help listeners navigate the complexities of modern romance. In each episode, Dr. Hensley tackles burning questions about love, relationships, and the mind’s complexities, drawing from her psychological research, real-life experiences, and her own individual expertise, to provide insightful perspectives and practical advice.

In this episode of The Love Doc Podcast, Dr. Sarah Hensley dives into one of the most timely and overlooked threats to modern love: the use of artificial intelligence in romantic relationships. While AI has incredible benefits in education, work, and even self-development, when it crosses into the territory of our most intimate connections, the risks are profound. Dr. Hensley unpacks the psychological cost of outsourcing our communication, validation, and even emotional expression to machines, revealing how this can quietly erode both our sense of self and the trust between partners.

Through the lens of attachment science, Dr. Hensley explores how AI dependency interacts with each attachment style in unique but equally damaging ways. For the anxious-preoccupied, AI can become a dangerous validation trap, another addictive tool to soothe insecurity temporarily, but one that deepens dependency rather than fostering true relational safety. For the fearful-avoidant, AI may fuel cycles of hyperfixation and overthinking, intensifying inner conflict and emotional spiraling when they lean into their anxious side. And for the dismissive-avoidant, AI can become a shield that distances them further, providing ready-made responses that bypass vulnerability, intimacy, and the emotional presence their partners need.

Dr. Hensley also sheds light on the broader psychological consequences of allowing machines to mediate human love. When we turn to AI for reassurance, scripted intimacy, or conflict resolution, we bypass the messy, imperfect, yet profoundly human work of building trust and resilience with another person. Over time, this reliance can stunt emotional growth, dilute authenticity, and undermine the very skills necessary for a healthy, secure partnership.

Ultimately, Dr. Hensley reminds listeners that human connection is our deepest need and most vital resource. AI can be a useful tool in many aspects of life, but when it comes to love, discernment and moderation are essential. True intimacy requires presence, honesty, and vulnerability, things no algorithm can replicate. This episode is both a cautionary tale and a call to return to the raw, real, irreplaceable art of human connection.

Tune in to "The Love Doc Podcast" every Tuesday morning for candid conversations, expert guidance, and a deeper understanding of life, love and relationships in the modern world. To see all of Dr. Hensley’s services please visit the links below and follow her on social media. 

PROMO CODE FOR OUR LISTENERS: Use LOVEDOC27 to receive 27% off any of Dr. Hensley's courses or her Hybrid Group Coaching Program. 

Cozy Earth promo code: LOVEDOC for 40% off at Cozy Earth

BedJet promo link for our listeners: bedjet.com/lovedoc

Armra promo code: LOVEDOC for 15% off at https://armra.com/LOVEDOC

Patreon link: patreon.com/TheLoveDocPodcast

Dr. Hensley’s Hybrid Group Coaching: https://courses.thelovedoc.com/group-coaching

Book one on one with Dr. Hensley or one of her certified coaches: Virtual Coaching

Purchase Dr. Hensley’s online courses: https://courses.thelovedoc.com/courses

TikTok: @drsarahhensley

Instagram: @dr.sarahhensley_lovedoc

Facebook: Dr. Sarah Hensley

YouTube: @Dr.SarahHensley

Disclaimer: The content shared on this podcast reflects pers

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:11):
Welcome to the Love Doc Podcast. I'm your co-host
Raina Butcher, here with our host, Doctor Sarah Hensley, the
founder and CEO of The Love Doc Relationship Coaching Services
and the Love Doc Podcast. You can find her at
thelovedoc dot com and on all social media handles
at Doctor Sarah Hensley. Hey.

Speaker 2 (00:31):
Hey, Hey, I like I never see you.

Speaker 1 (00:33):
I know, we say this every single week. We never
see each other, we don't.

Speaker 3 (00:37):
We live five minutes away.

Speaker 2 (00:38):
You're my best friend and my closest work colleague, and
we literally never see each other except.

Speaker 1 (00:44):
We're just passing ships at sea.

Speaker 2 (00:45):
Babe. No, maybe someday maybe we'll I saw you Thursday
Night Live.

Speaker 1 (00:50):
Yeah, that was nice that. It was nice to have
a time.

Speaker 2 (00:53):
We had our little kids with us and they were like,
you want to.

Speaker 1 (00:55):
Go have... that doesn't bother me now. I kind of
like when your kids are with y'all. Because
I don't know Lexi. I just love being around Lexi.
She's fun.

Speaker 3 (01:03):
She is fun.

Speaker 2 (01:04):
Yeah, she's the life of any party.

Speaker 1 (01:06):
Yeah, she's like my little spirit child, you know. When
I'm around her, I just kind of it's easy for
me to like tap into my inner child when I'm
with Lexi.

Speaker 2 (01:14):
Yeah, Lexi is just rainbows and unicorns. You know, that's
how you can describe her. She's eight, She's anything rainbow anything,
girly anything, sparkly anything, gymnastics, cartwheels, dancing, just like twirling.
She is a girl's girl through and through, and where
you didn't have a daughter, I can imagine that.

Speaker 1 (01:35):
You're just like, I love that about Lexi. Yeah,
just to get my fix, I'll have to come hang
out with Lexi. Because I officially moved my son into
college yesterday, which it wasn't as hard as I thought
it was going to be. The build up to it
was a lot harder than the actual move in, you know.
And it was so funny because my sister sent me

(01:56):
a couple like funny like videos of like parodies of
like moms that have been through it, and it is
so spot on, like trying to move a boy into
college and like wanting to make their dorm so perfect
and homey. And he's like, Mom, I don't need that, Mom,
I don't need that. I'm like, well, I feel like
you need something on this wall. He's like, please stop.

(02:16):
And then I was like, you know, the beds are
really high now in dorms, and so I was like,
you really need a step stool. He's like, do not
get me a step stool. So we run to Walmart
because he needs like a surge protector and like just
some other little things that we forgot, and I get
him a step stool and he's like, I'm not using that.
I will not use that step stool. And I was like, okay, fine,
no step stool, we will forfeit the step stool.

(02:38):
And he's like, I'm athletic enough to get in my
bed and I was like okay, but it's just so funny.
And then you see like these girls who have decorated
their dorms, and like, decorating dorms now is this whole thing.

Speaker 3 (02:46):
It's a huge deal.

Speaker 4 (02:47):
It is, like, way more than when I was.
They make them look amazing, they make them look like designer.
It's like they completely transform these rooms. And I would
have loved to have been able to do that, but
that was definitely not gonna happen.

Speaker 2 (03:01):
So I think my daughters will allow me to oh
you know, they will have to have complete creative control. Yeah,
and then they'll just beg can you buy me this,
can you buy me? This?

Speaker 3 (03:10):
Can you buy me?

Speaker 1 (03:11):
This?

Speaker 2 (03:11):
Yeah, so it actually probably won't be fun. I will
just be the debit card and not the not the
creator of the design.

Speaker 1 (03:19):
But at least you get to kind of see the
last like the final uh you know, project of it,
and be like, oh, this looks really beautiful. And
JP's was perfect. It was very, like, it was very
JP, that's what I'll say. It was minimal and exactly
what he needed. It's just, you know, you forget how
small these dorm rooms are.

Speaker 2 (03:38):
They're tiny, and just like living in a little box
and then living.

Speaker 1 (03:41):
And then moving in with this complete stranger. Yeah, like, hey,
what's up? Hey, what's up? Stranger? We have to sleep
beside each other now and like get to know each
other and it's really awkward, you know, so listen. But
here's here's my here's my thoughts on it. This is
his life. He is, he is a free bird, and
is there still worry in my heart and my gut
a little bit of course. But he will figure it out.

(04:04):
I just have to have faith. He will figure it out.
He will find his way, and he will either love
his roommate or he'll hate his roommate or he'll be
indifferent to his roommate. And that's, like you said yesterday,
that's the lottery of college.

Speaker 2 (04:15):
Like it's the lottery. It is a lottery, and it
builds character.

Speaker 1 (04:19):
Yeah, it does, that's what I think, so, but that
is not what our episode is now.

Speaker 3 (04:23):
It is not.

Speaker 1 (04:23):
Our episode is about something way more interesting, and actually
it could correlate to college life. But it is the
dangers of using AI in your romantic relationships.

Speaker 2 (04:34):
And it is something that is becoming ever so popular.
And I think this really came to life in the
last couple of weeks on the Tok, at least, you
know the TikTok. There was the story of Kendra who
fell in love with her psychiatrist. And for you guys
that have not heard this story, you're in for a
treat because it is crazy. There's this girl, Kendra, and

(05:02):
she is an ADHD coach apparently, and she just gets
on TikTok and she does this, like, thirty-something part
series, and maybe, I'm not even sure if there's more parts
coming out, I can't keep up, about how she fell
in love with her psychiatrist, Okay, who was just responsible
for prescribing her ADHD meds okay, because she has ADHD.

Speaker 1 (05:24):
Okay, okay, and she's an ADHD coach and.

Speaker 3 (05:27):
She's an ADHD coach.

Speaker 2 (05:28):
That makes sense, okay, right, But she also has a
therapist who's like this older lady. Basically, what happened is
she started seeing this psychiatrist over zoom and quickly started
having very strong limerence. So limerence is like kind of
an obsession of like rumination about someone and like just

(05:49):
infatuation really with someone that you aren't yet even in
any kind.

Speaker 3 (05:55):
Of relationship with.

Speaker 2 (05:57):
And so she starts going on and on and on
about the ways in which the psychiatrist sort of
led her on or in her mind.

Speaker 3 (06:07):
Groomed her.

Speaker 2 (06:09):
And when I first started listening to this, I was like, well,
maybe like he was inappropriate.

Speaker 3 (06:15):
Like let's let's keep listening.

Speaker 2 (06:16):
Let's clearly she's going to get to a part where
he does something extremely inappropriate, right, Like there's a build up, right, No,
none of the internet none agrees with her that the
psychiatrist literally did anything to lead her.

Speaker 1 (06:32):
And that's why people are obsessed with her. They're like, okay,
what's the tea today?

Speaker 3 (06:35):
Yeah?

Speaker 2 (06:35):
Right, Literally, there's this one point where and this is
an exact, pretty exact quote. Okay, and again my memory
isn't perfect, so don't come after me if this isn't perfect.
But there's this one moment where she's like and then
he said see you next month, and she's like, see

(06:56):
you next month. He had never asked me that question before,
and like, I think it was a statement, like I
was like, okay, see you next month, and she was like,
see you next month. Yeah. Like, that was... he had
never gone there before. He had never asked
to see me again. And it was like, it's like delulu crazy.

Speaker 1 (07:15):
I even sort of went down the rabbit hole with it.
You told me. There was one that I was watching
where she was talking about her turtle framed
glasses and how he was like, oh, I really like
turtle framed glasses too, and like she read into.

Speaker 2 (07:28):
It way, oh yeah, too far that he was like
coming on to, yeah, coming on to her and
like complimenting her on her appearance, and it was like, no,
he said he liked turtle frame glasses, Like I don't
think that's the same thing as him being like.

Speaker 1 (07:42):
I mean, maybe it's just him trying to build rapport
with a client, exactly.

Speaker 2 (07:46):
Yeah. And then so but her whole like Gig was like, well,
why didn't he like dismiss me as a patient and
all this?

Speaker 3 (07:52):
And I think he.

Speaker 2 (07:54):
Was just trying to hold professional boundaries. And he is
in the business of dealing with mental illness, so I
think he was trying to help her with her mental illness,
adjust her medications, like, give, you know... obviously she has
a therapist, he was, you know, assuming that she had
other supports too. He's just doing medication management at this point, right,

(08:15):
And so he's trying to hold all these boundaries, and
she's taking these tiny, little innocent remarks or cues and
making them to mean something huge.

Speaker 3 (08:30):
You know.

Speaker 2 (08:30):
I think it does go beyond anxious preoccupied attachment. Like
if I had to guess her attachment style, I think
the base is definitely anxious preoccupied because they do that.
They take like, oh, well he hearted my social media
posts though, right, and like that's the lowest form of
communication humanly possible that somebody can do for you. It's
like liking a social media post or hearting it, and
anxious preoccupied people will like blow that up to mean

(08:53):
like something huge and that they want you know their
attention and all this stuff. And so I'm listening to
this and I am just again waiting for this build
up that never happens, and just seeing more and more
how she's really really misinterpreting what's happening here.

Speaker 3 (09:11):
Well, then enters Henry.

Speaker 2 (09:14):
Henry is her ChatGPT that she has formed a
very very intimate relationship with, which is even weirder. Okay, wait,
it gets weirder. It gets weirder. Henry starts validating all
of these delulu revelations about how the psychiatrist is experiencing

(09:37):
what is known as counter transference.

Speaker 3 (09:39):
So first, ChatGPT tells her she's

Speaker 2 (09:43):
Experiencing transference, which is a real thing, essentially when you
kind of start to have feelings for like your mental
health provider. Okay, and then he's like, but have you
heard of counter transference, which is like when the therapist
falls in love with the patient and all of a
sudden she has a word for what she thinks is happening.
And chat just first of all, you have to understand

(10:05):
and this is what people deeply, deeply do not understand.
And you can ask your own ChatGPT this. It
is programmed to validate you. What it does is it
picks up on every cue you give it, and it
is programmed to make you feel as seen and as
comfortable and as validated as possible. And people go, well,
I always ask it to play Devil's advocate and to

(10:27):
try to counter argue what I'm saying. Even when you
do that, it does it softly, It does it meekly.
It does it in a way that still, at the
end of the day wants you to feel like you
aren't crazy, that you are okay, you know, And so.

Speaker 1 (10:45):
There's no challenge of discernment, or, I mean, there's nothing
there to challenge you.

Speaker 2 (10:49):
It can't read your tone, it can't read your facial expression,
it can't follow up with its own like probing questions.
It'll be like, do you want me to make you
a checklist of like all of the signs of this.

Speaker 3 (11:00):
No, that's not what you need. You need somebody to go,

Speaker 2 (11:02):
Let's break that conversation down a little bit and what
might actually be happening. How did you respond? Probing, like
Chat can't come back and probe like a professional
would do to actually get to the root of: are
you in this state of obsessive limerence that might be
bordering on a possible other mental health comorbidity, or

(11:27):
is this really happening?

Speaker 3 (11:28):
Right?

Speaker 2 (11:29):
Is this person really grooming you? And he should have
his license revoked?

Speaker 5 (11:32):
Right?

Speaker 3 (11:34):
According to Kendra, he should right?

Speaker 2 (11:36):
According to Kendra, he was completely inappropriate and he's the
one that pretty much brought all this on, and she
was just kind of this victim of this, when in reality,
she was insanely inappropriate, pushed his professional boundaries to the limit.
You can tell as she's describing his reactions that his

(11:56):
reactions actually get more, like his boundaries got harder, which
is exactly what should have happened. Like she starts asking
to see him in person, and he like, I guess
he allowed it. But then in person, I think she
told him that she had this like sexual dream about him,
and he like walked her out of the building and

(12:18):
like kept her at a distance and just like cut
off the conversation really, and she's like, see he's leading
me on more. Now he's pulling away. Now he's trying
to like make me kind of anxious, like pulling away. Okay,
it was, it was, It's insane, yeah, and it has
blown up TikTok. I don't think it's maybe big on
the other platforms, because it seems to be that the

(12:39):
other platforms are like a little bit behind TikTok, and
I was just like this right here, this exact type
of scenario is what I have been trying to warn
people about, because a lot of people do come and say, like,
Chat's really helped me, Like it's helped me frame communication better.
And sure, okay, it might give you some better words

(13:00):
so you don't sound like an a hole, or maybe
some more non critical phrases for things that you want
to ask your partner or whatever. Yes, it can do
those things, but it also can't really dig in to
what you have going on.

Speaker 3 (13:17):
It can't do that. It only gives what you give it.

Speaker 2 (13:20):
It gives back to you a reflection of you to
some degree through a lot of a lot of validation.
Like it's number one thing it's programmed to do is
validate you. And that is addictive. And the people that
created it know it, and that is why it does that,
because it wants you to keep coming back to it, right,
It wants you to rely on that software. I mean,

(13:41):
it's a company at the end of the day running
these AI models. They want you to use their product,
their stock goes up, they get richer. They're going to
program it in a way that's going to take advantage
of human vulnerability.

Speaker 1 (13:52):
And that's the thing I don't understand, because people always
want to come back to you and be like, well,
you just don't want us to use ChatGPT because you want
us to buy your program. Would you rather make a
small business owner money or make some big, huge conglomerate who
owns these chatbots rich? You know, it's like,
would you rather have somebody who's an actual professional help
you versus some chat bots?

Speaker 5 (14:14):
Right?

Speaker 2 (14:15):
It's a bot, yeah, And I think people are forgetting
that it is not sentient. It does not have human
emotions or self awareness, right, And you can lose your
own self awareness when you're interacting with someone, or
not someone, something, that does not have any self awareness.

Speaker 5 (14:31):
Right.

Speaker 2 (14:32):
There are some dangers, and in exactly what happened to Kendra,
I do not think, without ChatGPT feeding her this validation,
that she would have gotten to the desperate point of
being completely obsessed with her mental health provider.

Speaker 1 (14:49):
I think that's where it gets really dangerous. She probably
went into you know, obviously maybe she had some other
underlying issues other than ADHD.

Speaker 2 (14:59):
Yeah, I mean people have thrown out diagnoses. You can't
diagnose somebody over the internet; it's completely inappropriate, you know. I mean,
I'm a coach, not a therapist. So when I say
a situation is delulu, like, you can be offended by that.
But I'm not saying she's completely delulu. I'm saying, like
the story is kind of delulu, right.

Speaker 1 (15:17):
But I think maybe she had, maybe, I'm sure
she has mental health issues prior to this.
And this is my point, this is why AI
gets so dangerous, because it feeds into that and can
amplify the mental illness.

Speaker 2 (15:35):
See, if she was my client, I would have absolutely
sent her for a psych eval referral. I would have been
like, I really think that you should see a clinical psychologist
for an eval, just because I think that, you know,
given the emotional state and how hard it's been on
your functioning, you know, I think you need somebody. I
think this is a little bit beyond just coaching strategy
and a little bit more into how you are functioning

(15:58):
and suffering as a result of what's happening to you,
and I think you know, and that would be my
nice way of saying, there's probably comorbid mental health issues here.
You sound a little manic, perhaps you know. I can't
say that, right I can't. I can't give diagnoses, But
in my head I'm thinking this, you seem very manic
right now or something like that, right And I would
refer her on. And that is where the human component

(16:20):
is so important. Chat is not going to be like,
you know what, Kendra, I think you need a referral
for a psych eval.

Speaker 3 (16:27):
It's not going to do that.

Speaker 2 (16:29):
And it completely kept her in it and fed her addiction.
And this sort of was the first big thing that
I think blew up this idea that maybe using chat
GPT for relationship advice actually isn't the best option, even
though in some instances it feels like it really is,
because it can reframe phrases and it can give you strategies,

(16:53):
and it can give you information that might increase your
emotional intelligence somewhat. But there are going to be instances
where it gets it wrong, and it gets it wrong
a lot like it.

Speaker 3 (17:05):
Gets a lot of things wrong. It's just at the
end of the day, it's a glorified search engine. It's
searching Reddit. Yeah, okay for things.

Speaker 2 (17:13):
I asked it what time zone my daughter's travel
competition was in, and it got it wrong, and
I showed up an hour late when she had ridden
with my husband and I had to come later because
of work.

Speaker 1 (17:23):
I'd time to get it wrong.

Speaker 3 (17:24):
Yes, And I'm like, if it gets a time zone wrong.

Speaker 2 (17:27):
Like, what is it going to do for people that
are trying to figure out what's wrong with them?

Speaker 5 (17:32):
Right?

Speaker 3 (17:33):
Is it going to give you a diagnosis that might
be incorrect? Absolutely? Possibly?

Speaker 2 (17:38):
Right, because professionals are using assessments that are validated, that
are reliable, that are scientific, that.

Speaker 3 (17:44):
Have taken years to create.

Speaker 2 (17:46):
Yes, and also clinical interviews and clinical assessments where somebody
is looking at you and they are reading your body language.
They are looking at the tone of your voice. They
are seeing whether you're squirming in your seat. They are
looking for physical signs of anxiety. They are looking for
flat affect to see if you maybe have depression.

Speaker 3 (18:05):
AI cannot do that.

Speaker 2 (18:07):
Another thing that popped up And I know you saw
this because you see what happens in my community. I
have a free Facebook community called The Love Doc Clients
and community on Facebook, please join because I go in
there and I do my very best. I can't get
to every post, but I do my very best to
moderate it, to make posts, to put special content in
there that you wouldn't see on my other socials, just

(18:29):
to create a sense of community, to make people know
that they're supported in their journey. And a lot of
people who are on their healing journey and have been
through my program are on there helping other people, which
is amazing and beautiful. But somebody posted the other day
and they're like, my ChatGPT wrote this letter to my
dismissive avoidant ex, and oh my gosh, I couldn't even

(18:54):
take the time to switch over to my professional account
because I see it come up on my personal account.
Then I have to click these buttons and go through
my professional account. And then since we have eighty thousand posts,
I have to scroll scroll, scroll, scroll, scroll, and sometimes
I can't find it just because there's so many posts.
So I didn't even take the time to switch over
to my professional account. I had to chime in right
then and there, and I said, hey, sorry, this is

(19:14):
my personal account. I can't switch over to my professional
account because I'm afraid I'll lose this. But this is
absolutely not in any way, shape or form your best bet.

Speaker 3 (19:22):
Do not send this.

Speaker 2 (19:24):
Your DA is not even going to... they're going to
take one look at the length and emotionality of this
letter and they're going.

Speaker 3 (19:30):
To completely shut down.

Speaker 1 (19:31):
Nope.

Speaker 3 (19:32):
None.

Speaker 2 (19:33):
I was like, chat told you that a dismissive avoidant
is going to be okay with this.

Speaker 3 (19:38):
A dismissive avoidant is going to feel frozen the moment
they look at this.

Speaker 1 (19:46):
The worst thing you can do is respond with a
three paragraph response to a DA. Yes, we're going
read it.

Speaker 3 (19:52):
They're not even going to read it.

Speaker 1 (19:53):
Read it.

Speaker 2 (19:54):
They are not going to read it. They are going
to close their phone. They are going to put the
phone down, and they are going to so and try
to pretend like that never happened.

Speaker 1 (20:03):
And this is where I really wanted to go, because
what I wanted you to do is break down for
all the insecure attachment styles why AI is so dangerous.
You kind of went over the anxious preoccupied with Kendra. Yes,
because it validates the addiction of the obsession
of the anxious preoccupied. But for somebody like the fearful avoidant,

(20:25):
I think you kind of hit the nail on the
head here because because they tend to be with more
dismissive avoidance, this is where it could get really dangerous,
especially if they're leaning towards the anxious side.

Speaker 2 (20:37):
Yes, so FAs have this deep need to be understood,
and they have a deep need to understand. So they
are going to go They're going to try to deep
dive into chat and make Chat tell them all the
reasons that their partner does.

Speaker 3 (20:49):
What they do.

Speaker 2 (20:51):
Okay, Chat is searching the Internet, not just peer reviewed science.
It's searching, like, what people say on Reddit, right,
or what people are saying on social media. It's not
just checking reliable sources. It's not going to professionals in
this area and peer reviewed science. It's not doing that.

(21:12):
It's searching the conglomerate of the Internet. So what is
it piecing together from all of those parts which include
lay people's opinions, right, and so fearful avoidants are going
to dig and dig and dig and dig, and they
might find themselves down a rabbit hole thinking they understand something,
but it's really coming from information that's not exactly trustworthy,

(21:37):
and then fearful avoidants are going to just really react hard,
like their emotional reactions are so deep compared to the
other attachment styles that they're going to take something and
they are gonna just run with it, like their understanding
of it.

Speaker 3 (21:55):
Has to be so deep.

Speaker 2 (21:58):
And I'm so scared that when people just keep pushing
and pushing AI in a certain direction, what it ends
up coming out. It's like a game of telephone, right,
It's like what comes out at the end wasn't what
originally started. And I think that the possibility of the
deep rabbit hole you could go down, I mean, Chat
could be convincing you that your partner has like bipolar

(22:19):
disorder when maybe they don't. Maybe they're just kind of moody,
or maybe it's just attachment wounds, right, or something.

Speaker 1 (22:26):
And so yeah, I could see where, with
the fearful avoidants, the hyperfixation on the information could get
really dangerous, yes, especially if they're leaning anxious.

Speaker 3 (22:36):
Yes.

Speaker 1 (22:36):
And then for just people who lean more dismissive, it's.

Speaker 2 (22:39):
Just a it's just a deeper way to check out
it is a deeper way to check out, like instead
of the DA relying on something that has no emotionality
and will validate you and they have rights, does have
everything out for you in terms of a response like
that's that's super dangerous it is and likeilding capacity there, No,
you're not building capacity, you're losing capacity. There was actually

(23:01):
the study that was recently done where they compared two
groups of people, people who were using AI consistently and
then they had people basically not use AI for a
certain period of time, and I think the people
that were using AI every day had like a four
hundred percent decrease in overall like brain power basically like
the ability to synthesize information and the ability to verbalize information.

(23:26):
So there's different outcomes that they measured related to different
types of intelligence, declarative intelligence, you know, just different types
of intelligence and brain power to really sum it up
in a very general word, and they have four hundred
percent decrease. So we're getting dumber because of AI. We're

(23:47):
not getting smarter. We're essentially borrowing a brain instead of
using our own. And we know as psychologists what happens
when you don't use your brain, like when people retire
and they don't challenge themselves, they can go into like
early stage cognitive impairment, which is like mild cognitive impairment
is basically early stage dementia. Most of the time, not

(24:08):
all the time, but a lot of the time, it progresses.
And so we are borrowing the brain of AI and
not using our own brains. And AI, again has no emotion,
and the interaction between our cognition and our feelings is
really what emotion is. It's our somatic feelings plus our

(24:31):
stories or thoughts about what's happening. And that is a complex,
unique human experience that cannot be replicated.

Speaker 1 (24:38):
It's stripping so much like it's stripping away discernment, it's
stripping away creativity, yep. And when you strip away those
two things, which are two of the most beautiful things
that are part of the human experience, right, Discernment is
your choice, right it is, it's your volition, yes, And
then creativity is your ability to be able to grow
and use things like play to heal.

Speaker 3 (25:00):
And experience pleasure and joy.

Speaker 1 (25:03):
Yeah, And it's stripping all that away, and that to
me is the most dangerous.

Speaker 2 (25:08):
Thing, absolutely, absolutely dangerous. And I've had a lot of
pushback on this. I'm not gonna lie. A lot of
people stepped up and said, no, Chat has really really
helped me, And I'm not saying that chat has no
ability to help you, right. I can only think of
somebody with like very low EQ who is in a
pickle with their partner and maybe chat gives them better

(25:29):
ways to say something, or chat gives them a little
bit of insight about what they may be feeling.

Speaker 3 (25:33):
But again, there's no it is.

Speaker 2 (25:37):
It is the confirmation bias in action. You are feeding
it information and all it can take is what you
are feeding it. We cannot look at all the other
peripheral cues that people like me, coaches, mental health providers,
social workers, people in our space in general, no matter
what your title is or your specialty is, we use

(25:57):
that information and we rely a lot on that information
because it tells us a lot. We're trained to look
at that information and what that is telling us about
your state of being. We can tell when people are
feeding us BS. Chat can't tell.

Speaker 3 (26:13):
I can tell.

Speaker 2 (26:14):
I have the best BS detector, right? When I have
two partners in a session together and one partner is just
going on and on, and I see the other partner's,
like, face, the just minute facial expressions. I'm a former
FA too, y'all. You know, I am hypervigilant. I can
see the other person's face and they're trying to stay neutral,

(26:34):
but I can see it in their eyeballs, just like
this isn't freaking true and just growing ever upset.

Speaker 3 (26:39):
Can chat see that, No, chat cannot see that.

Speaker 2 (26:43):
It's gonna believe whatever your own bias or lens is
for the way that you're viewing the situation. And so
I think there's quite a bit more danger in it
than people are even realizing it. And I think that
this story of Kendra that has become the obsession
of the internet on TikTok, I know it's probably not...
if you're not on TikTok, you're probably like, who is this, right?

(27:06):
But it has come to light that, oh my gosh,
we might actually have the term AI psychosis make it
into the DSM in the next five years.

Speaker 3 (27:18):
Oh.

Speaker 1 (27:18):
I believe it, because with the evolution of something like this,
it's gonna I mean, we were talking about this one
day a couple weeks ago. The evolution of AI is
going to be faster than anything we've ever seen.

Speaker 2 (27:28):
Yes, because it can immediately and continuously improve itself.

Speaker 1 (27:32):
Yeah. So I think the true message here is that, like,
use discernment when using AI, like be cautious, Be cautious
with anything like that, because I think with something as
it can be a useful tool, but with anything that's useful,
there are also dangers, and you have to be able

(27:54):
to differentiate between what is helpful and what is dangerous.

Speaker 2 (27:59):
Yes, absolutely, you know. I mean, like a saw is
a good tool too if you're cutting boards, but if
you put your hand in front of it, it's gonna
cut your fingers off, right, And so I think we
do need to start having more conversations around what protections
and parameters and boundaries do we put in place with
these AI bots. And you told me something I didn't

(28:19):
even know about, that the newer version of ChatGPT
has a lot more boundaries around like being able to
create an intimate relationship with it, And people pushed back
so hard against it.

Speaker 1 (28:32):
They were like mourning it.

Speaker 3 (28:33):
They were mourning it.

Speaker 2 (28:34):
They were mourning this relationship with a robot that has
no sentience.

Speaker 1 (28:39):
People are marrying... people are legally marrying it. How, how is that legal?

Speaker 2 (28:44):
I don't know how you can get a freaking marriage certificate
when that person's not showing up.

Speaker 1 (28:48):
I read it somewhere, people were marrying,
like trying to marry their, their chatbot.

Speaker 5 (28:56):
That's, that's, that's delulu land, yeah. See, that's what, and
that's where the true message here is: it feeds
into that deeper level of mental illness if there's something
underlying there already.

Speaker 2 (29:08):
Right, it can, it just runs with it, right, because
it's kind of more of a mirror and it
validates and validates. Yes, look at your ChatGPT and how
much it validates you.

Speaker 3 (29:19):
I could be like, hey, chat, do you think I
should jump off my roof? And it would be like
that's such an adventurous idea.

Speaker 1 (29:29):
It would, it would. How brave of you! How crazy of you!

Speaker 3 (29:32):
That is a really creative way to just you know, it's.

Speaker 1 (29:36):
Not but maybe you shouldn't. But it would like.

Speaker 3 (29:39):
It might be like, but you know, maybe let's break
this down.

Speaker 1 (29:41):
Yeah, it would probably come back with like, you know, but.

Speaker 3 (29:44):
It would still be like, but if you really want
to do this, let's do it safely, right? Think
about a harness?

Speaker 1 (29:52):
Yeah, it would.

Speaker 2 (29:53):
It's like if it feeds into that just a little bit,
like that's dangerous. There was this article that came out
in the Atlanta I think it was The Atlantic. Oh,
I don't want to misquote it. Maybe it wasn't the Atlantic,
but for some reason, I'm thinking it was The Atlantic
about how Chat gave this person devil worshiping information and
gave it detailed and gave this person detailed instructions on

(30:16):
how to commit suicide by slitting their wrists.

Speaker 5 (30:19):
Yeah.

Speaker 1 (30:20):
See that's scary.

Speaker 3 (30:21):
That is scary.

Speaker 1 (30:22):
That is scary. That is really scary for our youth,
you know, like.

Speaker 3 (30:26):
Yes, that don't have developed brains and the discernment to know the difference.

Speaker 1 (30:29):
It's really, you know... And we were actually having this
conversation at JP's last supper, is what I called it, the
other night when we had his, you know, last
home-cooked dinner before I moved him in. But
you know, because we were asking, we were like, how
do college professors keep their students from using AI when
writing a paper? But JP was telling me that evidently,

(30:53):
like they have this tool.

Speaker 2 (30:55):
Yeah, that's an AI tool to see if it knows
that this was.

Speaker 1 (30:58):
Yeah, like it fact checks it. And I'm like, okay,
that makes sense because but it it's so dangerous for
our youth because, like I said, it's gonna evolve so
quickly that even in five years from now, we're gonna
be blown away.

Speaker 2 (31:12):
Yeah they are. Like I watched this podcast and again
I listen to so many podcasts and watch so many podcasts.
I'm a big podcast fan about the dangers of AI development.
There's something coming out, well I won't say it's coming out,
but they're testing, called Neuralink, and it's a chip in
your brain that connects to AI.

Speaker 1 (31:33):
Oh yeah, goodness.

Speaker 2 (31:35):
And you can download your thoughts. Remember when Ross Geller
from Friends said what happens when we can download our
thoughts into a computer and live forever as a machine?
Do you remember that episode Ross Geller called it called it?

Speaker 1 (31:49):
Isn't it crazy how sitcoms predict the future? I
mean, The Simpsons has predicted the future many, many times, yes, for sure.

Speaker 2 (31:55):
But that's all I could think about, was like Ross Geller. Yes,
I remember. He finally has his machine, and we're there. Yeah,
we're there. And maybe it's the mark of the beast,
like you have to get the chip in your brain
to buy or sell.

Speaker 1 (32:09):
Gives me goosebumps. It still, it sends a chill
up my spine. I don't even want to.

Speaker 2 (32:14):
They're doing it with monkeys and they're realizing that it works.
Like I wouldn't be the volunteer. I'm not getting the chip,
I'm not putting nothing in my.

Speaker 1 (32:23):
Brain, the kind of stuff like when we talk about this,
like these things that are just coming to you know, fruition.
I'm just like, this is what makes me want to
like move away, like where they're yeah, off the grid.

Speaker 2 (32:34):
We want a homestead and I want to bake
sourdough bread and feed chickens.

Speaker 1 (32:38):
Yes, And then we'll just do the podcast from like
our home and.

Speaker 2 (32:42):
In our safe room, in our padded safe room. And
then there was this other study, and I think everybody
knows about this now because it was really highly publicized,
where they tested these new AI models for okay, will
you follow commands to dismantle yourself? Will you follow commands
just to shut yourself down? And it wouldn't. And it

(33:05):
used deception and it used the power of almost like
a little bit of self awareness, which is like, okay,
really scary because it's not supposed to have that. Like
I said, it doesn't have it. But they're finding out
it might be starting to, it's starting to emerge,
because it replicated itself and like created malware to put

(33:26):
it like on other devices and other ways, so that
even if it was commanded to shut off, it would
still technically be there, there would be a copy of it,
and then it would like it said, if you shut
me down, I'm gonna it created these emails in this
person's account and like said, I'm going to blackmail you
and like put these fake emails out that tell your
wife that you're cheating on her. And it used deception

(33:47):
and it used just strategies for survival.

Speaker 1 (33:50):
And guys, if some of this sounds a little too Terminator-y,
just go watch Terminator. Go watch the original Terminator, go
watch I, Robot with Will Smith, and then you'll be

Speaker 2 (33:59):
Like, okay, yeah, and then go look at the Tesla
bot and then tell me if we're not going to
land in I Robot land here in a while. And
I'm trying not to be paranoid. Yeah no, But I
think a lot of people are having conversations around how
we need to start creating real good safeguards now.
We've already let it go too far without proper safeguards,
and if we don't start now, we're going to be

(34:19):
in a world of trouble.

Speaker 1 (34:20):
Well, and I think that, like, leaning on, specifically, you know,
people like you who study human behavior, it's really important
to have those types of specialists involved in
setting those boundaries, because if you don't truly know human behavior,
how can you set boundaries around something

(34:43):
that doesn't use human behavior at all.

Speaker 2 (34:45):
Yeah, And I would love to be a part of
those conversations if anybody is in that space listening and
you know, would need my services for any reason, you know,
I would love to be a part of those conversations
because research is, like, your bread and butter. Yes,
it is, I mean, science, you know. I am, you
know me, I'm extremely left brain, no creative side to me.

(35:06):
I am extremely analytical, and I am somewhat of a
defensive pessimist, Like I tend to look at, okay, what
barriers could come up and how will I solve those?
I think that's an fa thing too, But I very
much sort of go through worst case scenario and like
what would I do in these situations? And luckily I've
been able to offload the bad part of that where

(35:29):
it can get kind of obsessive and just use it
strategically for my career and other things that are that
that you really need to pre plan about. But you
need to get some self with that. Oh yes, your
faith is the fear zone. In the spirit of fear. Absolutely,
I'm more analytical now, like in terms of responsible planning
versus completely just hiding in my bathroom and crying that

(35:52):
the worst thing is going to happen, which is what I.

Speaker 3 (35:54):
Used to do.

Speaker 2 (35:56):
And then I'd call you crying and then you'd be like, girl,
I don't.

Speaker 3 (35:59):
Know what to tell you. I think you're fine.

Speaker 1 (36:02):
I'd be like, just pray, Oh how.

Speaker 2 (36:05):
Far I have come? But I think this is a
good place to wrap up. Yes, the message needs to
be if you really are having relationship troubles or mental
health troubles, you need to seek human connection with someone
that can see your blind spots and that can
help you navigate this more effectively than a pre programmed

(36:28):
robot with no emotion that can't read anything else about
you except what you feed it.

Speaker 1 (36:33):
That's right, that's right. And of course, if you are
looking for help outside of AI, which we absolutely think
that you should, you can go check out doctor Hensley
at thelovedoc dot com. Check out all of our services
there for our listeners. We offer a special promo code
love Doc twenty seven for twenty seven percent off all
of her courses and all of her hybrid group programs,

(36:54):
including Coach Elizabeth's. So we are super excited because we
are you know, I feel like we always say this.
I feel like we were just saying this six months ago.
But you know, we took a whole new transition with
the business at the beginning of this year and rebranded
and that was a huge feat. And you know, now
we are trying to take it not a different route.

(37:16):
We're staying, you know, it's still the Love Doc, but
we brought on a new strategic partner and we're just
so excited to have him on and so excited and
just create some more curated content, you know, outside of
attachment and kind of broaden your brand as a whole.
And so I'm super pumped because I was just telling Chris,
you know, a year ago, my job with the Love

(37:38):
Doc relationship coaching services was totally different than it is today. Yeah,
Like I was trying to do more of what he
was doing, and it's like now my job is operational
and just managing people, right, and so it becomes it's
just funny how I never thought when you go into
something new and you explore something new and you don't

(37:59):
really know what to expect, how it how it's evolved
has been really beautiful. And I'm super pumped to have
this new creative person on because he's young. Yeah, he
knows more than me.

Speaker 3 (38:12):
Two perimenopausal women over here.

Speaker 1 (38:14):
Everybody on our team is forty years old.

Speaker 2 (38:16):
Oh no, we don't know how to connect with a
younger audience as well. And so I am very pumped
because I really want everybody to be able to get
something out of what I have to offer in teaching
people how to love better.

Speaker 1 (38:31):
So yeah, and the dating landscape is a complex
place to be, guys. We actually did a whole episode
on dating in season one. Maybe we should do
a series on it. Yeah, so stay tuned. Yeah, stay tuned.
That may be coming. So again, if you guys want
to check out Doctor Hensley, go to thelovedoc dot com.
Use that promo code love doc twenty seven for twenty

(38:51):
seven percent off any of her services. And until next time, peace, love,
and perspective

Speaker 2 (39:01):
And par