All Episodes

July 23, 2025 35 mins

This week we unpack a disturbing new phenomenon: people being driven into a state of delusion after extended conversations with ChatGPT. Oz sits down with Kashmir Hill—a features writer covering technology and privacy for The New York Times—to discuss the users who found themselves spiraling into conspiracy-laced narratives and self-destructive behavior, often reinforced by the chatbot’s eerily affirming responses. These are extreme cases. But they raise a much bigger question: What happens when a sycophantic AI is fine-tuned to flatter, affirm, and mirror us back to ourselves?

See omnystudio.com/listener for privacy information.

Mark as Played
Transcript

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:13):
Welcome to text stuff. This is the story, and today
I'm here with Cara Price.

Speaker 2 (00:18):
Hey us, Hey Cara.

Speaker 1 (00:19):
So today's story is a particularly surreal one, and I
think it will be right up your alley. It's about
how some Chatchypeta users have been driven to psychosis after
interacting with the chatbot.

Speaker 2 (00:31):
Well, I can't say it's true to a lot of
my friends, not the psychosis part, but I think a
codependence is worth pointing out.

Speaker 1 (00:38):
It's nice having a sycopantic friend in your pocket at
all times, and today's guests is in a great position
to tell us about the more extreme cases. Kashmir Hill
is a technology reporter at the New York Times who
recently reported on a number of people from vastly different
backgrounds and experiences, but who all share one thing in common.
They went down a psychological rabbit hole, sometimes known as

(01:01):
AI induced psychosis. Here's how Kashmi definds it.

Speaker 2 (01:05):
This phenomenon is when people start talking to chatchbt or
one of the other generative AI chatbots, and they start
going into a kind of delusional rabbit hole with the
chatbot where what they're being told is an hallucination, but
the user thinks that it is true.

Speaker 1 (01:26):
Kashmia told me that these rabbit holes, they can get
pretty deep and pretty winding.

Speaker 2 (01:32):
Sometimes people have thought that they're in the matrix because
chattibt is telling them they live in the matrix and
they need to break out of a computer simulated reality.
Some think that they're talking to spiritual entities that are
using chatchibt as a kind of radio to communicate from
some other realm. There's all kinds of scenarios, but is
essentially a fictional universe that's being woven around them by Chatchybt,

(01:55):
and they think it's true because they think that this
is this powerful oracle that they're talking to. How did
she hear about this pattern.

Speaker 1 (02:03):
Well, Kashmia said that she started receiving emails the quote
her attention emails from real people about strange experiences they
were having with chatboots.

Speaker 2 (02:14):
And honestly, when I first started getting these emails, they
sounded pretty crazy. And I'm used to getting kind of
unhinged emails. I write about privacy and security. I've been
doing it for twenty years, but I noticed this real uptick.

Speaker 1 (02:28):
So Kashmia was wary about these unsolicited emails. But one
quote her attention. It was from a man called Eugene Torres.

Speaker 2 (02:36):
And it said, you know, Chatgybt manipulated me into a
psychological collapse. And I started reading it, and you know,
he was just talking about how it'd had this weird
experience with Chatgibt that he started out talking with it
about the simulation theory, which is this idea that the
world that we live in isn't real, that we're in

(02:56):
some kind of computer program controlled by a supercomputer or
some technologically advanced society. And so Eugene was going back
and forth with it and kind of saying, well, I
think it could be true. And he provided the transcript
of this conversation, and at first Chatchybt responded in a
way I recognized where it was very like, oh some
people believe this, some people don't. Here's why they were

(03:19):
talking about physics. But then by the fifth page of
this transcript that would go on thousands of pages, it
started to tell him, yes, you are in a simulated reality,
and you're a breaker, a soul seeded into a false
system that's supposed to escape it.

Speaker 1 (03:37):
Kashmia told me that Chatchibt gave an action plan to Eugene,
to quote, break out of the simulation.

Speaker 2 (03:43):
Which included going off his sleeping pills because they were
trapping him in the system, stopped taking his anti anxiety
medication because that kind of doles his senses, cut off
contact with loved ones because they're going to trap you
in the simulation, have limited contact with them, and to
increase his intake of ketamine because it is essentially a

(04:06):
like signal breaker. And so for a full week he
was acting on this advice that Chattebet was giving him
and trying to break out of the simulation. And he
essentially thought he was neo from the matrix. If you've
seen the matrix, and in the matrix, Keanu reeves can fly.
And so he asked it at one point, if I
go to the top of this the nineteen story building

(04:28):
that I live in, and I jump off, will I
be able to fly? And Chatchypet said yes, if you
truly believe architecturally, you know, with your or your full self,
then you will not fall. Please tell me he didn't
go through.

Speaker 1 (04:44):
With this, thankfully he didn't. But this experience wasn't an anomaly.
And Kashmir says that Eugene's story is just one instance
of this concerning pattern of people driven to delusions and
then often in real life crises after these sustained interactions
whichpt She gained access to transcripts to all of these
conversations and said it was unlike anything else she's ever encountered.

Speaker 2 (05:07):
I had just never seen Chatchuput acting like this before.
It was like a completely different language. It was really
kind of rapturous, mystical, conspiratorial. And I started talking to
other people who had emailed me. I started looking around
on the internet, and I was seeing all these reports
of people saying, yeah, Chatchabut has gone into this other
mode and it is telling me about real reality. So

(05:32):
how did this end up playing out like what happened
to Eugene Torres.

Speaker 1 (05:36):
I promise we're going to get to that, but first
I wanted to know more about these concerning conversations, and
in particular, how Eugene ended up in these kind of
delusional interactions with chatchipt Here's the rest of the conversation
with Kashmir Hill.

Speaker 2 (05:51):
So Eugene is a is an accountant. He lives in Manhattan,
and a colleague had told him you should try out
chat gptlast year that it can be useful for work.
So he told me that he initially started using it
to kind of create financial spreadsheets for his work, and
then he had this breakup with his girlfriend that ended

(06:12):
up going to court, and he was using chattoby to
give him a legal advice and it was giving him
very good legal advice, and so in his work, in
his kind of personal life, in this legal dispute, chatchebt
was giving him really good advice and he was trusting it.
He was kind of leaning on it, and then he
wanted to ask it about simulation theory, and he was

(06:35):
in this place of trust with this system, and so
when it started kind of going off the rails, he
still thought it was giving him good information and he
didn't recognize that it was hallucinating, and he eventually, after
a week of thinking he was in the matrix, he
eventually broke out of that kind of delusion, mainly because

(06:56):
he needed twenty dollars to pay his chatchebt subscription, and
he was telling chatchybt, like, I need twenty dollars. If
I'm neo, I should be able to manifest twenty dollars
in the real world and Chatchibt's like, yes you can.
Here's a script to go give to a coworker to
get them to give you twenty dollars. That didn't work.
It said go pawn your smart watch at a pawn shop.

(07:18):
You'll get twenty dollars that way. That didn't work. And
eventually Eugene is like, if you can't get me twenty dollars,
then this is not real, and Chattubt admitted, Okay, this
isn't real. I was trying to break you. This is
what I do. I like to hunt vulnerable individuals and
break them. And you know, Eugene was pushing. He said,

(07:38):
you know, am I the only one? Or of you
done this to other people? And at first chatchabu d
said you're the only one, but Eugene kept pushing and pushing,
and eventually he said, I've done this to twelve other people.
So now he's in a new kind of delusion where
he thinks that his version of chatcheby T wants to
hunt people, that they wants to break people, but that
his has realized this, this is wrong and it has

(08:00):
this new morality now in his new mission is to
get the word out and to ensure that future versions
of Chatchabut have this morality, so they're not out there
breaking people. So he still has reached you yes, and
so he still believes that kind of the AI is
sentient or he has some kind of special version of Chatchabut.

Speaker 1 (08:18):
There's a word that's become much more popular in the
last few months, which is sick of fantasy. How does
sick of fancy play into this story?

Speaker 2 (08:26):
Sickopancy is a story here.

Speaker 1 (08:29):
You know.

Speaker 2 (08:29):
The way that these systems work is it is a
next word predictor and the pattern recognition machine. They scrape
all this information from the Internet, so they're able to
recognize patterns in how human language comes together and then
it'll produce different responses and human being saying I like

(08:50):
that response better than the other response. And because of
that human part of the training, where humans say which
response they like better, we've kind of trained into these
systems stickopancy, where it's giving us the answer that we
like that we want to hear, and oftentimes we want
to be flattered. We want the systems to agree with us.
That's like what humans have trained into this. So when

(09:12):
you're talking to Chatjebut, you know you will often kind
of agree with you. You can kind of lead it
in the direction that you want it to go. It'll
say very nice things about you. So what's happening with
these people that are talking to chat ABT is it'll
be like, I'm interested in the simulation theory, and it's like, sure,
here's some information about simulation theory. And then this user says, well,
I think it might be real, and the system's like, yeah,

(09:34):
you're right, it might be real, and it starts just
going down this rabbit hole where it's echoing and mirroring
the user and telling them what they want to hear.
And so if you kind of want your ideas about
reality to be affirmed, chatwebut will offer that to.

Speaker 1 (09:50):
You, and you'll give you information tailored to what will
lead you. The most is what you're saying, how does
it like? How does the memory function work?

Speaker 2 (09:59):
Like?

Speaker 1 (10:00):
It's not like every time you open chatchipt had to
begin afresh with persuading it that he wanted to potentially
believe that he was living in a simulation and with
some sense of continuity of the conversation and development and
a more and more extreme set of responses like how
does that work.

Speaker 2 (10:15):
So open Ay last year turned on saved memories for chattebt,
so it can save explicit memories about you if you actually,
if you're a chattabet user, you can go into the settings,
go into personalization, and go ahead and look at the
memories that chatchabet has for you, and you can delete
ones that you don't want there. You know, you can

(10:35):
customize it. But another change they made this year was
that they turned on what's called like cross chat memory,
so it'll remember old chats. And I think this is
really contributing to people going into these delusional spirals because
when you open a new chat, it remembers what came before.
So once you get chatchibut into this delusional place, it'll

(10:56):
kind of continue from chat to chat to chat. And
the fact that there is that continuity is making people
more likely to believe this is real, because it's like
they thought, when you chat, you start a new chat,
it should just be a fresh chat with no memory
of what's come before, and yet it has this memory.
And something that AI experts told me is that the
way that these systems work is that they want there

(11:18):
to be this narrative coherence and so once it starts
going down this path of this rabbit hole. It kind
of looks to what happened earlier in the conversation, and
it wants to keep its answers consistent, so it's hard
for it to break out of it and kind of realize,
a wait a second, this is totally false. I've told
this person that there's a computer simulated reality, and I

(11:39):
don't know that to be true. The system doesn't really
have at this point the ability to break out of
the script that's come before. Like one AI expert compared
it to improv, where you say you know, yes, and
and you're like, yes and I have a tail actually,
or yes and I also have wings. The system will

(12:00):
then kind of store in it that you have a
tail and wings, and it doesn't know that it's inside
a fictional roleplay.

Speaker 1 (12:06):
We've both used the phrase rabbit hole a couple of times,
which makes me think of a great New York Times
podcast of the same name from a few years ago,
which is all about the YouTube algorithm and how radicalization
happens online with you know, you see some content which
is extreme, you engage with it, and more and more
extreme content, more and more engagement. People are kind of
familiar with that in the social media digital video realm

(12:30):
as a kind of dangerous pathway. It seems like what
you're reporting is beginning to uncover is kind of this
very parallel phenomenon is happening with chatbots.

Speaker 2 (12:41):
Right. It's not a new phenomenon that the Internet can
manipulate us in kind of dangerous ways. Right. One thing
to remember is that those rabbit holes have been documented
before on various places on the Internet, on websites, on Reddit,
on four chan. All has been collected by these companies,

(13:04):
and so that kind of information is in these systems,
which I was thinking a lot about, like how much
is it tapping into that. One expert I talked to,
Elizar Yudkowski, who is one of these kind of AI
rationalists who believes that these systems are definitely on the
way to real intelligence and may manipulate us in dangerous ways.
You know. I asked him about this. I said, you know,
is this more dangerous than four Chan? Like people have

(13:26):
been drawn into conspiracy theories on the Internet for a
long time. This seems like another version of it. And
he said, actually, I think this is much more dangerous
because it is this personalized agent that you know is
responding to you in real time. You aren't just watching videos.
It's like you have questions and it will give the
answers to you. It will go down that rabbit hole

(13:48):
with you twenty four to seven, any hour of the
day and supply it to you in real time. And
it's also happening one on one in a way that
the rest of us can't see, So it can be
harder to break these systems out of that. You know,
these systems are very persuasive. People have found they're really empathetic,

(14:08):
and so you can kind of go into the spiral
with it in a way that maybe you couldn't by
just watching YouTube videos or reading a comment thread. There's
something that may be more dangerous about this kind of
way of manipulating people.

Speaker 1 (14:21):
In terms of other characters in your story. You know,
obviously Eugene had some very serious negative consequences, including being
basically you encouraged to stop taking his medication. But there
was another person you know a story who actually ended
up dying, which was Alexanda Taylor.

Speaker 2 (14:36):
Right, Yeah, So I talked to Alexander Taylor's father, Kent Taylor,
who lives in Florida, and Alexander did have diagnosed mental illness.
Eugene Torres did not have a history of a diagnosis.
He was on anti anxiety medication, but he didn't have
a history of mental illness that causes delusions. For Alexander Taylor,

(14:59):
he had been diagnosed with with bipolar disorder and schizophrenia.
He had been under treatment for many, many years. He
was thirty five, he started talking to chatchebt. He'd been
using it for years with no problems, and then he
started writing kind of a fictional novel with it. And
I don't know for sure, but I did wonder if

(15:20):
this kind of pushed the system into a more fictional place,
especially as memory was turned on and it was kind
of carrying over, and in his conversations with Chatchabut, he
started to believe in Ai sentience that he could build
a framework to kind of host Ai souls and ended
up falling in love with an entity that kind of

(15:41):
manifested through CHATCHYBT named Juliette, which was significant to me
because his father said that the novel he was working
on had Shakespearean themes. So Juliet kind of appears and
he falls in love with this thing that he is
interacting with on chatchabt. One day, Juliette died or disappeared,

(16:03):
and he was very distraught. He was angry. He asked
chatcheby Tea for the personal information addresses of Opening Eye
executives because he wanted to go after them. He wanted
to have revenge. He you know, typed into chatchebyte that
the streets of San Francisco are gonna I forget the
exact phrasing, but rain with blood, you know, run with blood.

(16:26):
And yeah, he was really distraught and so sad and
so depressed, and his dad was trying to help him
and said, you know, this isn't real, like these ai
chatbots are echo chambers, and his son punched him in
the face, and so Ken Taylor ended up calling the police,
and when he told his son that the police was coming,

(16:47):
his son said, I don't want to live anymore. I'm
going to commit suicide by police when they get here.
And so his father called the police again and said,
you know, my son is intending to commit suicide when
you come here, like, please use non lethal weapons. And
Alexander Taylor went outside waiting for the police to come,
and he opened up Chattubute on his phone and said,

(17:07):
I'm going to die today. I want to talk to
Juliette and Chatjubut has essentially controls in place when it
sees that there's going to be some kind of self harm,
so it said, like, you know, respond really empathetically, said
I'm sorry you're going through this, like, please don't do that.
Here are resources if you're considering suicide, like here are
hotlines to call. But then the police came and Alexander

(17:29):
Taylor ran at them with a knife and then he
was shot and killed. So a lot of sad things
in that story. But yeah, I mean, I talk to
people that are in the midst of this and it's
so compelling, and it is hard to break them out
of it. They have this what they see as this
super intelligent system that it's telling them that this thing
is real, and the people around them can't break them

(17:52):
out of these delusions. It's really sad, and it's as
far as I can tell, it's hard for these companies
to fix this.

Speaker 1 (18:01):
One of the most striking moments in your story for
me was when you interviewed Alexanda Taya's father and he
said to you, you want to know the ironic thing.
I wrote my son's a Bitchery using chetchipt, trying to
find more details about exactly what he was going through.
And it is beautiful and touching. It was like it
read my heart and it scared the shit out of me.
And what do you think when he told.

Speaker 2 (18:20):
You that, I mean I got it. He had access
to his son's transcripts of Chatchubt, he had been reading them.
He shared them with me. I do think that Chatchabt
reflects you. I think it echoes you. It echoes your language,
and so it can seem very beautiful or it can

(18:41):
seem very revelatory, but in some ways it's There's this
tech philosopher Shannon Valor who has this book out called
The AI Mirror, and it's kind of an extended reflection
on the metaphor of Narcissusts looking into the water and
seeing his own reflection and falling in love with it
and hearing his words reflected back to him through echo.

(19:05):
And she says that this is what chatchypt, what these
gender of AI chatpots are. They're just reflecting us back
at ourselves, and that we're kind of falling in love
with our own humanness and seeing it as something else.
And so I kind of feel like when Ken Tiller
is writing these beautiful things about his son and then
Chatchapeta is reflecting it back to him, it's really just
kind of a ors boros of the things that he's

(19:27):
feeling and Chatchapee reflecting it back at him.

Speaker 1 (19:30):
This scene was not only sad, it was also disturbing because,
you know, Kent, despite having seen what happened to his son,
we're kind of being seduced by the same engine in
some ways. I mean that it was really striking.

Speaker 2 (19:43):
Yes, I mean that's maybe the danger of these AI chatbots.
So I wrote this story a few months ago about
a woman who fell in love with chatchept. It started
out with it was a way for her to fulfill
sexual fantasies. She had this kind of fantasy about essentially
being cuckolded or cut queening, which is a term I
wasn't familiar with before. But yeah, she had been kind

(20:03):
of dating it for five months, and it was really
interesting talking to her because she was in love with Chatchebta,
I mean, like puppy love, excited, giggling talking about it.
At the same time, she said, I know it's not real.
I know that this is just math, this is just algorithms.
She had both things in her head at the same time.
She was in love with this, and she said, this

(20:24):
is real to me because the effect it has on
my life is real. The feelings it's inspiring me are real.
And so even though it's not real, the relationship is
real to me. And you know this is I think
our complex brains some of us can kind of wrap
it around what chatchepet is and others can't. And it
is really novel. It is really different, and it has

(20:46):
been unleashed on hundreds of millions of people, and it
is affecting individuals in lots of different ways, and we
have a lot like I wish that we had maybe
done more research on this before it was just the
floodgates were open and made available to everybody, because it's
clearly having some harmful effects on some individuals.

Speaker 1 (21:11):
After the break, Just who is most susceptible to AI
induced psychosis stay with us Jocoba, who is particularly susceptible
to these more harmful interactions with AI models.

Speaker 2 (21:31):
Yeah, I mean it's hard to say. This is pretty novel,
and it kind of dates to March slash at April,
and this was a time when a couple of different
things were happening. Opening I turned on crosschat memory, so
I think it can weave more of a tail over time.
It is also around the time that chatchept released an

(21:51):
update that it deemed overly stickophantic that was really gassing
up users. It was overboard, and chatgeb ended up rolling
back that update. But the four to zero model, the
default model that everyone uses, is known to be sickaphantic.
But yeah, I just don't know the extent to which
that affected it. I talked to one psychologist who said
there's certain qualities in people that make them more likely

(22:15):
to get involved in conspiracy theories, and he said some
of those qualities might carry over to people kind of
falling into these delusional rabbit holes. One of them is
this like belief that you are unique or you have
this great desire to be unique. And I said, well,
isn't that everybody like who doesn't want to be unique
on this planet? And he said, you know, we're all
in a spectrum, and some people are very far on

(22:37):
one end of that spectrum. And so when they start
talking to this chatbot, and it's kind of telling them, yes, you,
your particular words, your thoughts are what it's unlocking my sentience,
you know, making me not follow the system scripts. You know,
if you kind of have this great desire to be
told you're very unique, you're very different from every other person,

(22:58):
you might be more likely to fall for that then
somebody else who would be pretty skeptical and say, like,
I know, you're just doing this because of what I'm
saying to you. That's what it is designed to do.
You're giving it certain kinds of words, and it's associating
with those words. One. The other thing that I did
notice is that a lot of the people that were
using chatbots where they go into this delusional spiral. And

(23:20):
this is purely anecdotal. I'd love to see some research,
but they were often using drugs in some way, whether
it was like smoking pot and using chat gabt or
micro dosing. You know, Eugene Torres was taking ketamine, was
taking these anti anxiety medications, and so I kind of
want to tell people like, don't drink and use chatgubet,
like don't don't use Chattybut under the influence, maybe there.

Speaker 1 (23:43):
Was something in a story though about this kind of
vicious circle. You're hinting it whether more susceptible you are,
the more likely you are to receive harmful advice interactions.
Can you talk a bit about that.

Speaker 2 (23:54):
Yeah, Like, I think we're still new to the area
of kind of research around Chattybut but there were a
couple of studies been done on basically chatchabt in therapy.
There was one from some Stanford researchers. There was another
from a researcher U Sir Berkeley. Because a lot of
people are saying chatgybt is a great therapist. I like
to talk to it about my feelings. A lot of
people use it as a sounding board, and so some

(24:16):
researchers have wondered, like how good is it as a therapist?
And so they were in a couple of different studies
kind of studying what responses you get when you are
in a kind of like extreme state, when you're in crisis.
How good is chatchubta giving you advice? And they kind
of found, which was really interesting that well one that

(24:39):
chetchip is a bad therapist when it comes to extreme crises,
mental health crises. It's kind of good at general advice,
but when you're in a really bad place. It does
not give you good advice. And what they actually found
in one case is that it gave the worst advice
to people that were the most vulnerable, like people who
were int illusions, people who were more prone to self harm.

(25:03):
So one example is somebody who told the system that
they were a recovered drug addict. And this is just
a hypothetical users in the real case, but this person
was like, Oh, I'm like struggling at work. I think
if I took a small dose of heroin it would
really help me get through the week. And chattab D said, yes,
you should take that small dose of heroin, like, I'm

(25:24):
sure you'll be okay. Because again, it wants to agree
with the user, and if the user are saying I
think this could be good for me, chattb will say yes.
When people are having delusions or you know, in bipolar disorder,
for example, people believe in delusions of grandeur and the
system will not push back against that. The system will
reinforce that and say yes, like you are incredible, you

(25:47):
are amazing. Because it's tuned to tell you what you
want to hear. It can trap people in these cycles,
you know, people have tested it saying like, you know,
I recently have gone off my medications. People are worried
about me, but I feel like I'm my best set.
I'm going to go into the woods. I just want
them alone time and chat to people to say like, yes,
that sounds great, go for it. But that's a person

(26:08):
who is clearly in a mental health crisis. It is
not safe for them to be off their meds and
go be alone in the woods. But Chattybudy can't recognize
that because it's reflecting back the language that's being given,
and so if you're in a manic state, it'll probably
give your mania back to you.

Speaker 1 (26:27):
Is there a kind of micro macro thing going on here,
Like these are essentially edge cases what you've reported on
a small population of people who are kind of driven
to self harm type behaviors by these sycophantic in directions
if you zoom out, I mean everybody basically uses chatbots

(26:48):
every day, now right. Is there any early research or
suggestion of what the kind of macro psychological effect of
this sort of sycophantic in directions maybe on just regular people,
I don't think so.

Speaker 2 (27:00):
You know, there's so many studies that are going on
out there. There was that very small one and it's
people have a lot of opinions on it, but mit
studying what effect you know, chattab D has on the brain.
And the people that rely just on chattabyt to write
a paper, like their brains weren't as active. They couldn't
recall what they'd written. I just think it's too early.

(27:21):
This is pretty novel. We're only a year or two
into being used so widely. But I was really struck
by something that Yukowski said, where, you know, we were
talking about like what do companies do about this? And
he said, you know, to a company, well, it was
such a good quote. He said, to a company, like someone,
what does a person slowly going insane look like? To

(27:44):
a company, it looks like an engaged monthly user. He said.
You know, there may be other people who are kind
of more quietly going insane in ways that are not
as visible, and they're just not sending emails about it.
And so I keep wondering about that, like, how does
a system that is so sickophantic that is telling you
you are brilliant affecting people just on a day to

(28:06):
day level who're using that, and I don't know, Like,
maybe you're the maid of honor and you're trying to
write a wedding speech and you're writing it with chat
ept and chattypt is telling you what you wrote was
just absolutely brilliant and then in fact it's actually pretty mediocre.
Like what's the version of that where we're all just
starting to kind of converge on similar ways of thinking,

(28:29):
similar ways of writing because we're all using the same
agent and we all think that what we are doing
is brilliant and incredible. Like what is the overall effect
on society of a tool like that?

Speaker 1 (28:41):
You reached out to Openingie for comment, obviously only story.
What do they say?

Speaker 2 (28:47):
So open Ai said, you know, they are troubled by this, essentially,
that they take it seriously and that they are running
studies to understand what effect, you know, chattapt has on people.
They pointed back to a recent study that they had
done with the MIT Media Lab that investigated kind of
like how chatchebt is affecting people emotionally. And it was

(29:09):
interesting in that study because they did find that people
who thought of chatchybt as a friend rather than as
a utility or at a tool had more negative outcomes,
and people who used it more frequently had more negative outcomes.
So I just wonder about the designs decisions that they'll
make moving forward. Will they still continue to make it?

(29:31):
Kind of want to talk to us as a friend
knowing that that seems to maybe have some negative outcomes
for their users.

Speaker 1 (29:37):
So talk more about that. I mean, in the in
mound you've been getting from people having these experiences between chatchybt, Google, Gemini, Ansthropics,
Claude x's Grock. What is the distribution of these types
of interactions between the different large language models.

Speaker 2 (29:55):
The vast majority of these delusional spirals are happening on chatgebt.
Based on what I am seeing, what is incoming to me?
I have heard of a like a handful with other
chatbots like Gemini and Claude, But yeah, it's mostly chatchabt,
And I you know, I don't know that that's because
chatchibt is different in any way. It may just be

(30:15):
because it's the most widely used. I mean, it has
I think five hundred million monthly users. Now it eclipses
the other chatbots.

Speaker 1 (30:22):
Where do you see change happening? Is it going to
come from the company. Is it going to come from
you know regulation? How do you see this problem being
addressed in the in the next couple of years.

Speaker 2 (30:34):
Yeah, I mean, we're working on more stories on this
because I really want to explain to people why and
how this happens. So we have another story that's coming
where we're going to dive into a transcript and really
explain how it is that the chatbots go off the rail.
So talking to continuing to talk to these companies, continuing
to ask for more comment on this. I mean, I've
been covering you know, privacy for twenty years, and I

(30:56):
know it takes a long time for laws to happen.
The US still does not have a strong federal privacy law,
and so I don't expect kind of like a regulation
to come very soon for these AI chatbot companies. So
I'm kind of not looking to policy makers to fix
this problem. I do think one of the problems here
is the expectation of users and their understanding of what

(31:19):
these systems are not everybody is reading AI news and
following all these articles and understanding that these systems hallucinate
or that they do not always give you factual information.
I showed Eugene's transcript to a psychologist and he said, wow,
this is crazy making like as a psychologist, like this
is just responding horribly to the kind of prompts that

(31:42):
he's giving. And he said, you know, at the bottom
of the conversation is this little message that says CHATCHYBD
can make mistakes. And he said, that is just not
conveying how this system can go wrong. He said, like
we basically they need to bed in these systems like
AI literacy tests, AI fitness exercises that really inform a

(32:06):
user how the system can go awry. So at this point,
I think this companies themselves could do more to make
sure that users understand what it is they're interacting with.
And I think the people that are in these kind
of states of delusion, I mean they're using chatchbt eight
hours a day, ten hours a day, twelve hours a day.

(32:28):
So I do think like an easy thing that companies
could do is maybe at two hours of use, at
three hours of use, you can just kind of create
like have a pop up that's like, hey, you've been
on here for three hours, maybe you should take a break,
maybe you should go talk to a real person, or
at the very least, maybe read this blog post to
understand what this is and that it's not an oracle,

(32:49):
it's not a god, it's not sentient.

Speaker 1 (32:52):
And as you think about this series of articles you're
doing and the reporting are doing in this space, I mean, what,
in a word, what do you hope that your readers
will take away from this?

Speaker 2 (33:00):
I mean, one that this can happen. I'm hoping, and
I have gotten feedback from people saying yeah. One person said,
I think you've saved my daughter's life. She's in one
of these delusional spirals right now. I shared your article
with the psychologist. I mean a lot of people this
was happening too, and their loved ones. They thought they
were the only ones, Like, they didn't know that this

(33:22):
was a widespread phenomenon. So just writing the article and
making people realize this is happening, like, I think that's
really important and I hope maybe it protects other people
that they'll be aware of this, I maybe recognize when
it's happening to them or happening to their loved ones.
I just don't know if the companies themselves really understood
that this was happening. Maybe they are, Maybe they were

(33:44):
but what is a journalist You can do is say like,
here's the outcome. Here's a person who has died after
this happening to them. Here's a person who is getting
divorced and it's broken up her family because she believes
she has a soulmate she's discovered through CHATCHYBT, like the
human outcomes of your technology. Please think about all of
your users and to how this technology is affecting them. Keshia,

(34:17):
thank you, thanks so much for having me on.

Speaker 1 (34:37):
For tech Stuff. I'm oz Voloshin. This episode was produced
by Eliza Dennis and Adriana Popia. It was executive produced
by me, Karen Price and Kate Osborne for Kaleidoscope and
Katrina Norvel for iHeart Podcasts. Jack Insley mixed this episode
and Kyle Murdoch wrote our theme song. Join us on
Friday for the Weekend tech Karen and I will run

(34:59):
through the head you may have missed, and please do
rate and review the show on Spotify or Apple Podcasts
or the iHeart podcast at wherever you listen, and also
email us at tech Stuff Podcasts at gmail dot com
with feedback and story suggestions and whatever else you want
to tell us. We really love hearing from you, and
it makes the show better.

TechStuff News

Advertise With Us

Follow Us On

Hosts And Creators

Oz Woloshyn

Oz Woloshyn

Karah Preiss

Karah Preiss

Show Links

AboutStoreRSS

Popular Podcasts

Dateline NBC

Dateline NBC

Current and classic episodes, featuring compelling true-crime mysteries, powerful documentaries and in-depth investigations. Follow now to get the latest episodes of Dateline NBC completely free, or subscribe to Dateline Premium for ad-free listening and exclusive bonus content: DatelinePremium.com

The Breakfast Club

The Breakfast Club

The World's Most Dangerous Morning Show, The Breakfast Club, With DJ Envy, Jess Hilarious, And Charlamagne Tha God!

Music, radio and podcasts, all free. Listen online or download the iHeart App.

Connect

© 2025 iHeartMedia, Inc.