
September 3, 2025 · 25 mins

This week, Diosa and Mala discuss a viral series on TikTok: a woman who falls in love with her psychiatrist, and the AI tool that affirms her beliefs. Together, they unpack ChatGPT, the most popular AI chatbot, and how it's being used as a search engine AND a therapy substitute. New reporting suggests that continuous, uninterrupted use of AI chatbots can trigger or amplify a mental health crisis, a phenomenon being called "AI psychosis."

Related Episode: Crash Out or Crisis?

Sources: 

Woman Who Fell in Love with Her Psychiatrist Speaks Out

Chatbots Can Trigger a Mental Health Crisis. What to Know About ‘AI Psychosis’

Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens.

The Emerging Problem of "AI Psychosis"

Illinois Bans AI Therapy

Support the show: https://www.patreon.com/locatora_productions

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
So personally, as someone who is actually in love with
a psychiatrist, and the psychiatrist is actually in love with me,
I find this next story particularly intriguing.

Speaker 2 (00:11):
Let's get into it.

Speaker 3 (00:12):
Let's get into it.

Speaker 2 (00:14):
Hola hola, Locamores. I'm Diosa.

Speaker 3 (00:18):
And I'm Mala.

Speaker 1 (00:20):
Earlier in this season of Locatora Radio, we discussed
the crash out or the crisis, and we're building off
of this idea: when social media users post tons and
tons of videos online which seem to be depicting a
crash out, we ask ourselves, is this actually a
mental health crisis in real time? And most recently, a

(00:43):
TikTok user, Kendra Hilty, posted an over twenty five part
series to her TikTok stating that she fell in love
with her psychiatrist and he kept her for four years
until she had the strength to walk away. Now some
TikTok users are asking the question, is this the first
instance of AI psychosis that we're seeing unfold in real

(01:05):
time online?

Speaker 3 (01:06):
We're going to talk about it on today's episode.

Speaker 2 (01:09):
And I think the reason that we're talking about AI
psychosis and how it relates to this TikTok user's story
is that she not only was seeing a psychiatrist who
she says she was in love with, but she also
was using a ChatGPT therapist named Henry, and Henry, her
ChatGPT therapist,

(01:33):
was affirming and mirroring everything that she was saying about
her experience with her psychiatrist. And so when you have
someone affirming this to you in real time, and not
someone but an AI, a chatbot, it can lead
to this AI psychosis. And that's one of the reasons
that we're talking about, for us, this new term and

(01:56):
this new effect of being on ChatGPT or using
a type of artificial intelligence, in this case a
large language model.

Speaker 1 (02:07):
Now, I do also want to point out that on
our previous episode, Crash Out or Crisis, we did bring
up the idea of talking to ChatGPT as if it's
your therapist, and why that's problematic. We did have some
folks online basically tell us that, like, therapy is inaccessible,
psychiatry is inaccessible, so some people turn to ChatGPT

Speaker 3 (02:31):
For some type of a sounding board.

Speaker 1 (02:33):
I think that we maintain our position that ChatGPT and
AI bots are not a proper replacement for therapy,
and I think that this story and all of the
conversation that has come out because of it validate that position.
So I'm not going to go point by point into

(02:53):
Kendra Hilty's story of how she fell in love with
her psychiatrist or her claims, but the long and the
short of it is this woman is definitely in a
state of a mental health crisis, and so we also
don't want to, like, make fun of her, right, because
she's very clearly not doing well.

Speaker 2 (03:11):
And I think that's why this is a really good
build off of our first episode, or our previous episode
of the season, Crash Out or Crisis, because
now this woman, Kendra, is going through a mental health
crisis, to us, right, but it has become internet fodder.
I think there are the folks that
are in the mental health field that are responding in

(03:32):
very kind ways, but then there's also just a lot
of jokes, right, that come from being online and it
becoming internet fodder. And so this is, I think,
another way that we're seeing someone going through a mental
health crisis, and then it becomes content and it becomes
part of the digital online Zeitgeist. It becomes a part
of the language.

Speaker 1 (03:53):
Absolutely, and I think it also allows us to remove
empathy from somebody who very clearly is having a mental
health episode, which I don't think is healthy for anybody, right,
either the person receiving that, you know, criticism and being mocked,
or for us as viewers and as commenters. I don't

(04:16):
think it's healthy for us to mock people who are
very clearly ill.

Speaker 3 (04:21):
Right.

Speaker 1 (04:22):
And so a brief summary, now, Kendra Hilty did do
an interview with People magazine about her story and about
her experience posting her story to TikTok, and essentially what
she talks about is starting to see a psychiatrist, I
think just once a month, and she talks about how
over time she started to develop feelings for him, and over

(04:45):
time she would drop hints or explicitly tell him, you know,
I had a dream that we hooked up in your office.
I have feelings for you. You know, I am in
love with you. And something that she talks about is
that this psychiatrist didn't immediately transfer her care, that he
continued to see her as a patient, and she felt

(05:06):
that sometimes he would be very friendly and quote let
her in, and other times he would be less friendly
and more professional and draw more rigorous boundaries. And this
woman took everything he did or didn't do as a
sign that he was playing coy and emotionally manipulating her

(05:27):
and playing hot and cold and giving her a little
just to take it away and then to draw her
back in. And I think something that's really interesting is that,
you know, psychiatrists, it is their job to treat people
who are depressed or schizophrenic or are experiencing delusions.

Speaker 3 (05:46):
I mean, this is what they do.

Speaker 1 (05:49):
You know, they work with people who are not seeing
things clearly and are out of touch with reality. So
there's I think really interesting nuances here about professional boundaries
about what doctors are supposed to do when their patients
are overstepping.

Speaker 2 (06:10):
Yeah. And I think there is the nuance that
there are medical professionals that do take advantage of their positions,
and a psychiatrist is in a position of power, right,
and so that of course is in here. But because
of social media and also HIPAA, the psychiatrist cannot

(06:31):
come and make his own video about what also was happening,
right, from his point of view. And so I
think that it also creates this imbalance where she,
who it seems is having a mental
health crisis, is giving her own side of it, and
there's also this other party involved who could potentially lose

(06:54):
his job.

Speaker 3 (06:56):
I think.

Speaker 2 (06:58):
In the People article she did mention that his name slipped,
and that she then took down the video. And so
there's a lot of nuance and pieces here, because of
course there are medical professionals that will and can take
advantage of patients or clients. Don't go anywhere, Locamores.

Speaker 1 (07:16):
We'll be right back. And we're back with more of
our episode. There's another piece here in which Kendra is
a white woman and the doctor is a Muslim, a
man of color. And so of course you have folks
online saying that this woman is expressing what we have

(07:38):
now come to know as white women's tears, that she
actually has been sexually harassing her psychiatrist, that if this
were nineteen fifty, you know, he would be in a
lot of trouble. And I have also seen other therapists
and psychiatrists themselves posting on TikTok and saying I don't

(08:00):
think that she is in a place to navigate those nuances.
I think that she needs continuing care because she is
experiencing delusions, and she would have been experiencing those delusions anyways,
because that's the state that she's in as far as
her mental health, and that psychiatrist just happened to be

(08:24):
the object of her delusions. They were going to
be placed somewhere, and they were just placed on him.

Speaker 3 (08:31):
She also talks about a therapist that she

Speaker 1 (08:34):
was seeing at the same time that she was seeing
her psychiatrist. And what's interesting is, according to Kendra, both
her therapist and her psychiatrist were unprofessional towards her, were
inappropriate towards her, were crossing boundaries, and that both of
these mental health professionals failed her to such an extent

(08:55):
that she had to turn to ChatGPT to find
an appropriate mirror, an appropriate sounding board for her
needs, because she felt like these two mental health professionals
weren't doing it. And in comes her chatbot, which she
affectionately named Henry.

Speaker 2 (09:14):
So I feel like ChatGPT is more widespread than ever,
and it is the most popular AI chatbot, with seven
hundred million weekly users compared with tens of millions of users
for its competitors. And what I find really interesting about
ChatGPT, and the way I hear people talk about using it,

(09:35):
is that there's a lot of authority and credibility given
to ChatGPT, when, from my understanding, it's pulling from hundreds
of things on the web. You're getting sourced information, but
you're not given the source where it's from, for example,
"I pulled this from The New York Times" or "I
pulled this from Time magazine," and it is feeding you

(09:58):
information as if you're in conversation with it. And so
that is what I think is really interesting: it
is given this, like, authority in people's lives, where
people give it authority versus, like, a Google search, versus
doing their own research, right. And I think, I mean,
it's its own conversation about, like, research and accessibility. And

(10:20):
I think that that is one of the reasons ChatGPT
is thriving right now and is used by so
many people.

Speaker 1 (10:26):
Yeah, the chatbot Henry, also in the context of
this story, seems to be doing a lot of validating
for Kendra. She describes her dynamic, or her perceived dynamic,
with her psychiatrist, and then the chatbot gives her
something back that affirms and validates her perception.

Speaker 3 (10:49):
Right.

Speaker 1 (10:50):
Oh, what you're experiencing, Kendra, is transference, where a patient
develops feelings for their psychiatrist, and then the psychiatrist is
experiencing countertransference, where they then develop feelings for you.
And I think what's really interesting about this is that
when you go see a doctor in person or over Zoom,

(11:12):
they're not just assessing you based on the words that
you're saying, but they're like looking at you and how
you make eye contact and the speed at which you're talking,
and your body language, and there's all kinds of things
that go into your physical presentation and the way you
express yourself that help a doctor to come to some

(11:33):
sort of a diagnosis. But a chatbot does not
have that ability, you know, to assess you for, like,
your mood and your tone and your energy level, and,
again, like, your eye contact.

Speaker 3 (11:47):
And the chatbot is just never-

Speaker 1 (11:50):
ending. You give it something, it's going to regurgitate
something back, over and over and over and over again.
And something you brought up to us is there's no time
limit to the chat. They have no boundaries, they have
nowhere else to go, they have nothing else to do
except to engage with you.

Speaker 2 (12:07):
Yeah. Whereas if I'm talking to you, Mala,
and I'm telling you about something that I'm going through,
there's a natural time limit.

Speaker 3 (12:14):
Right.

Speaker 2 (12:15):
I have my own energy and what I can give
to someone, and so do you, and so in a
conversation in a friendship, right, there's a natural ending of Okay,
this conversation is dwindling. I've given you all the advice
I can give you about this one topic and we
take a break. And we're not seeing that with chatbots.

(12:36):
And that is according to Time magazine, in this
article written by Robert Hart: a growing number of
reports suggests that extended chatbot use may trigger
or amplify psychotic symptoms in some people. And we're talking
about ChatGPT psychosis, or AI psychosis. But I also want

(12:56):
to point out that it's not a formal diagnosis, because
AI and chatbot models are fairly new, the
data is scarce, and there are no clear protocols for treatment
for folks that are experiencing this type of AI psychosis.
And so it's like this colloquial term that we're using
right now because of this very new technology that

(13:17):
many of us are using in our day-to-day lives.

Speaker 1 (13:20):
Yeah, this is, like you said, very new terminology that
I think is still being developed. AI psychosis? You're
not going to find it in the DSM, the
Diagnostic and Statistical Manual of Mental Disorders. So it's not
an official condition recognized by the medical community. This, I
think, is a term that TikTok users are coining and

(13:42):
maybe applying to what we're seeing unfold, and perhaps the
medical community will, like, do more research and follow up.
Maybe it will become something more official, maybe it won't.
But I don't think that Kendra is the only person
that has been sucked into, like, a very deluded

(14:03):
relationship with a chatbot.

Speaker 2 (14:06):
Don't go anywhere, Locamores. We'll be

Speaker 1 (14:08):
right back. And we're back with more of our episode.

Speaker 2 (14:16):
The New York Times released this report about a man
named Alan Brooks who believed he had discovered a brand
new mathematical formula. Over the course of twenty-one days,
he spent three hundred hours on ChatGPT, and because he
was a user with an account, all of his logs

(14:36):
were saved, and The New York Times reviewed them. They
reviewed the ChatGPT log. Psychiatrists reviewed it as well.
People in tech also reviewed it. For him, his AI
psychosis was a combination of talking to this ChatGPT
bot without interruptions, three hundred hours over twenty-one days,
which averages out to more than fourteen hours a day,
but also sleep deprivation, dehydration, not eating, becoming

(15:01):
completely sequestered, because he believed he was on the brink
of this brand-new discovery, this mathematical innovation, because ChatGPT
was also affirming that he was on the right path.
He updated his LinkedIn and started emailing professionals in,
like, science, math, heads of businesses, companies, right,

(15:25):
and was telling them he had a new discovery. And
so he very much fell into this delusion that he
was going to discover something incredible, he was onto something,
because the ChatGPT bot was affirming his, quote, findings, right.

Speaker 3 (15:41):
And it's just so important to remember

Speaker 1 (15:44):
that the engine that is ChatGPT, they need and
want you to use it. They're not going to give
you an obstacle. They're not going to say no to you.
They want you to keep feeding it. That's how it
makes more money. That's how it becomes stronger. And the
more we feed it, right,

(16:07):
the more it gives back to us.

Speaker 3 (16:09):
And I think this also goes back

Speaker 1 (16:12):
to the situation with TikTok user Kendra, who, in the
course of telling her story on TikTok, in addition to Henry,
generated more chatbots to talk to, and she was
talking to them on live and telling, like, TikTok Live
viewers that they refer to her as the oracle, because

(16:35):
she's a truth teller, because she's speaking for victims, because
she communes with God, you know. And so for someone who
already believes that in their core, or is in a state
of delusion, to then hear it or to see it
in writing, you know, validated for her from multiple sources,

(16:56):
is a very dangerous thing, whereas if she were to
tell another human being, I'm an oracle, I speak to God,
another person might say, are you okay?

Speaker 3 (17:07):
You know, where is this coming from?

Speaker 2 (17:09):
What do you mean by that?

Speaker 3 (17:10):
What do you mean by that?

Speaker 2 (17:11):
Say more.

Speaker 1 (17:12):
Yeah. But what she's experiencing is just complete validation.

Speaker 2 (17:18):
Right. And a therapist, as someone that's been in therapy
for about eight years now, you know, my therapist challenges me.
She does not just affirm what I'm saying. She will,
of course, validate how I'm feeling, but there's a difference
between, like, okay, you're feeling this, I'm validating that

(17:39):
you feel this, but could this also be part of
what's happening?

Speaker 3 (17:42):
Right?

Speaker 2 (17:43):
It's not so black and white. That's part of the
problem of what we're seeing by using ChatGPT as
a therapist. And so much so, I mean, there's not
enough, I think, regulation of ChatGPT or AI bots yet,
but really, Illinois is the third state to restrict therapists'
use of AI bots, along with Utah and Nevada, because

(18:07):
there are conversations within, like, the therapist community about AI and
using ChatGPT and AI bots as therapy.

Speaker 1 (18:17):
Yeah, I think this is another moment
where, although we keep hearing so much about AI taking
over and AI replacing jobs, you cannot replace the human
factor in so many parts of life. And you cannot
replace human therapists and human psychiatrists with search engines, and

(18:39):
you can't replace human artists across the board with AI,
you know, because it's not original, so it's never going
to be that nuanced piece that you are hoping for
and that you're looking for. And I feel that even
in the case of Alan Brooks with this mathematical formula.

(18:59):
You know, I'm no mathematician, but I get the sense
that when scientific and mathematical advancements have been made historically,
scientists and mathematicians are working in teams because you need
someone to check your work, and you need someone to
cross reference or to give you ideas or bounce your
ideas off of. I don't think very many mathematical and

(19:23):
scientific advancements were made in complete isolation by one person.
It takes a university, a research university, and a team
of people. And I think that's another place where like
this poor guy thought he was on the verge of something,
but is it even practical and possible to do something
like that by yourself?

Speaker 2 (19:44):
Right. And I mean, now he's advocating for more regulations
as well. And The New York Times did interview a
few experts and folks that work in tech, and folks
that are tech-adjacent or just studying the impacts of AI,
and it's clear that there need to be more safeguards
to mitigate the delusional reinforcement that AI can create, and

(20:06):
that there needs to be a break, right? If
someone is using ChatGPT for hours on end, you know,
the AI is then trained to remind folks, like,
to take a break, to actually exit, to close, right,
to leave ChatGPT for a minute. And so I
think that that's part of hopefully what will come. But

(20:30):
I do think, like, we're at the
surface of what we're seeing with AI, what it can do,
and how it can affect us as humans and our
psyche, but also, of course, like, the job implications and
all of that, which we're going to get into in
a later episode. But I think for now we really
wanted to focus on the mental health aspect because, like

(20:51):
Mala mentioned at the top of the episode, when we
talked about ChatGPT as therapy, or folks using it as therapy,
it did hit a nerve with a couple of people,
it did, and I want to recognize that, yes, therapy
is not accessible for everyone, but I think we can
both agree that turning to an AI is not the solution.
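[Editor's note: the break-reminder safeguard described in the exchange above is, mechanically, just a wrapper around the chat loop. Here is a minimal sketch in Python, assuming a hypothetical ask_model() stand-in for whatever chatbot API an app actually uses; the two-hour threshold is illustrative, not taken from the episode or any clinical guideline.]

import time

SESSION_LIMIT_SECONDS = 2 * 60 * 60  # illustrative: nudge after ~2 continuous hours

BREAK_MESSAGE = (
    "You've been chatting for a while. Consider taking a break "
    "and checking in with someone you trust."
)

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a real chatbot/LLM API call.
    return "(model response)"

def chat_with_break_reminders() -> None:
    session_start = time.monotonic()
    while True:
        prompt = input("> ")
        if not prompt:
            break  # an empty line ends the session
        # Before answering, check how long this uninterrupted session has run.
        if time.monotonic() - session_start > SESSION_LIMIT_SECONDS:
            print(BREAK_MESSAGE)
            session_start = time.monotonic()  # restart the clock after the nudge
        print(ask_model(prompt))

if __name__ == "__main__":
    chat_with_break_reminders()

[A real deployment would more likely track gaps between messages than a single session start, but the point stands: the reminder is a product-level choice layered on top of the model, not something the model does on its own.]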

Speaker 3 (21:10):
It's not the solution.

Speaker 1 (21:12):
It's like a band-aid on a bullet wound. And
just in summary, right: AI is not trained to respond,
and it doesn't have critical judgment, and people need community
and real-life connection. And not to sound trite, but
we have to go outside and touch grass and like
hug our family members and like make eye contact with
other people.

Speaker 2 (21:31):
Yeah, and not to be too tangential,
but I was listening to this podcast about this man
who created a ChatGPT, or AI, girlfriend, and
what to me was so striking is that during the
pandemic he created this, quote, real-life companion; you can,
like, design your AI friend to look however you want

(21:55):
and sound however you want. And during the pandemic he
was isolated. His wife is very sick, from what it sounds like,
is very immunocompromised, has a heart condition, and even
more so, he wasn't able to have, like, a little pod,
so he was very, very isolated. And that's when he
created this AI girlfriend, this AI companion. And while

(22:18):
I was listening, I thought, if this man had more
of a safety net, had services as a caregiver, he
might not have turned to an AI companion, if he
had more tangible support as a caregiver, as someone living
through the pandemic like we all did. And it does

(22:39):
go back to the systems that we have, at least
in this country, where a lot of us lack social
safety nets. And so if we had these things,
would we be turning to these AI bots, to these
chatbots, to get validation, to get support, even to research?
You know, I've seen, I heard, I saw this

(23:03):
TikTok video of a man saying that he went on
a date with a woman and she typed into her
ChatGPT, while they were on the date, what to
order from the restaurant. No. So we're using it in
these ways that, one, are, like, really scary, and, two,
we'll get into the environmental impacts in a later episode,
but I think we've kind of lost the plot. I

(23:24):
think ChatGPT, I personally don't use it, but I do
see how it's a tool for people, and I think
we're making it do more than it should.

Speaker 1 (23:32):
Yeah, it's a very advanced magic eight ball, and if
you are already in a vulnerable state, it can just
reinforce delusions instead of giving you an actual, like, mirror,
right, to yourself. So I think that's really important. And
stay safe out there, y'all. Stay safe out there.

Speaker 2 (23:54):
Talk to your friends, talk to your community.

Speaker 1 (23:57):
Talk to your neighbors.

Speaker 3 (23:58):
Yeah, I think, like...

Speaker 2 (24:00):
Everything I've learned this season producing with you,
Mala, is, like, we all need connection, community. We
all need to be IRL. The digital communities are wonderful,
but it has to extend beyond that.

Speaker 1 (24:14):
It's so important. So thank y'all, anyone who's been
coming out to our events, past and future,
because we have more coming up. We do want to
take the show out into the world like we used to.
It's so important to connect with our audience and like
look people in the eye and meet people and shake
hands and hug and you just cannot replace the human connection.

Speaker 2 (24:35):
Yeah, well, thank you for listening to another capítulo.
We'll catch you next time. Besitos. Locatora Radio
is executive produced by Diosa Femme and Mala Muñoz.

Speaker 1 (24:46):
Stephanie Franco is our producer.

Speaker 2 (24:49):
Story editing by me, Diosa.

Speaker 3 (24:51):
Creative direction by me, Mala.

Speaker 2 (24:53):
Locatora Radio is a part of iHeartRadio's My
Cultura podcast network.

Speaker 1 (24:57):
You can listen to Locatora Radio on the
iHeartRadio app or wherever you get your podcasts.

Speaker 2 (25:02):
Leave us a review and share with your prima or
share with your homegirl.

Speaker 1 (25:05):
And thank you to our Locamores, to our listeners,
for tuning in each and every week.

Speaker 2 (25:10):
Besitos, Locamores.