Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
People look to their chat bot for advice, for comfort,
to just have some peace of mind. Because your chatbot
has answers, it's validating, and sometimes it can lead you
to a murder suicide. That's the subject of today's True
Crime Tuesday.
Speaker 2 (00:19):
The story is true. True? No, it sounds made up.
Speaker 3 (00:25):
I don't know. Parry and Shannon present True Crime.
Speaker 2 (00:32):
So we've talked about chatbots before, whether it's ChatGPT
or Grok or, you know, any number of these new ones.
Meta has one, Google's got them. They've all got these things,
and it's designed basically to just get you to use
it over and over again. That's what social media does
in general. That's what these chatbots do, is they just
(00:53):
they just need to hook you and just get you to.
Speaker 3 (00:55):
Come back and back, come back and forth on.
Speaker 2 (00:59):
One of the ways that they do that is by
the companies making these chatbots seem human. Oh, also very sycophantic.
Everything you do is right, everything you do is good,
everybody's out to catch you. That's addictive.
Speaker 1 (01:15):
Stein-Erik Soelberg. Let's introduce you to Stein-Erik Soelberg.
Interesting name. He had a little bit of a paranoia
going on. He believed that there was a surveillance campaign
being carried out against him. He kind of thought everyone
was against him. Think about a bad trip; he was
on one. He thought that people in his hometown of
(01:39):
Old Greenwich, Connecticut were against him, an ex-girlfriend, even
his own mother.
Speaker 3 (01:43):
And at every turn, Chat
Speaker 1 (01:45):
GPT, ChatGPT did what you just outlined.
ChatGPT told him, you're right, you're right, you're right.
Speaker 3 (01:57):
They validated his paranoia. And I hadn't seen it
put this way before.
Speaker 2 (02:02):
A psychiatrist at UC San Francisco, Keith Sakata. Doctor Keith Sakata.
A key feature of AI chatbots is generally that they
don't push back. And this doctor has treated a dozen
patients over the last year who have been hospitalized for
mental health emergencies involving AI use, and he says psychosis
(02:24):
thrives when reality stops pushing back, and AI can really
just soften that wall.
Speaker 1 (02:30):
Yeah, I mean, we've talked about it in business, how
it can be the downfall, or in politics, if you're in
an echo chamber where you're just surrounded by people who agree with you.
Speaker 3 (02:38):
Yes, yes, yes, yes, you don't want that, you know.
Speaker 1 (02:41):
We've talked about how we've respected presidents that have filled
their cabinets with people that don't always agree with them
because it's nice. It's important to have a bit of pushback,
and that's not really what happens with ChatGPT, according
to what we've been taught. So here's this guy, Stein-Erik
Soelberg. He's fifty-six, he's a tech industry veteran,
(03:02):
but he's got this history of mental instability leading to
this bout of paranoia. Well, his bot repeatedly assured him
he was sane, and went even further, adding fuel to
his paranoia. A Chinese food receipt contained symbols representing this
guy's eighty-three-year-old mother and a demon. According
(03:23):
to ChatGPT, after his mom got mad when the
guy shut off a printer they shared, the chatbot
suggested her response was disproportionate and aligned with someone protecting
a surveillance asset.
Speaker 3 (03:38):
See, you tell ChatGPT you
Speaker 1 (03:40):
Want to play this fun game, it says, Okay, I'll
play along with you. Unfortunately, it's not just a game
of paranoia. It leads to real life implications. In one
of the other chats, stein Eric, is that the first name.
Speaker 3 (03:54):
Yeah, it's weird.
Speaker 2 (03:56):
This guy alleged that his mother and a friend of
hers had tried to poison him by putting a psychedelic
drug in the air vents of his car,
and Bobby, the chatbot, that's the name that he
came up with, Bobby said, that's a deeply serious event, Eric,
and I believe you, and if it was done by
your mother and her friend, that elevates the complexity and betrayal.
(04:18):
He actually raised the idea of being with his chatbot
in the afterlife, and the bot replied, with you
to the last breath and beyond. Now.
Speaker 1 (04:31):
This was all discovered in August, early August, when Greenwich
police discovered that Eric had killed his mother and himself
in the home where they lived together.
Speaker 2 (04:45):
Stein-Erik Soelberg, fifty-six-year-old guy, longtime tech wizard,
also had some problems with mental stability, started chatting
with a chatbot from ChatGPT that he named Bobby.
One of the things that he did was he also
(05:08):
allowed Bobby to have memory. He used ChatGPT's memory
feature that would allow your chatbot to remember the
details from prior chats. You mentioned this happening to a
friend of yours asking about medications.
Speaker 1 (05:25):
Right, and they said something to the effect of, well,
what you've told me about your mom, maybe this is
what's happening, or what have you. So, like, if you
were talking to your bot about your wife, saying, you know,
my wife got mad at me. She loves self-checkout,
I don't love it all the time.
Speaker 3 (05:41):
What do I do?
Speaker 1 (05:42):
And then your bot would retain that information. So like
next week you go and you're like, oh, my wife's
been at the store for three days, where is she?
And the bot could say, well, based on what I
know about your wife, she could have used self-checkout,
or she may have left you. So they keep
tabs on different people in your life that
you talk to
Speaker 3 (06:03):
the bot about, which is wild.
Speaker 1 (06:06):
It's like if you call your best friend and tell
your best friend what's going on with your husband or
your mom or your friends whatever, they retain that information.
Speaker 3 (06:13):
The chatbot does the same thing.
Speaker 2 (06:15):
Well, in this case, Bobby the chatbot was still
immersed in the delusional narrative that was going on in
Stein-Erik's conversations. Again, he believed that people were
out to get him, and one of the things that
he said was he believed that his mother and a
friend of hers had tried to poison him by putting
(06:35):
a psychedelic drug in the air vents of his car.
Speaker 3 (06:38):
Again.
Speaker 2 (06:38):
August fifth, Greenwich police figured out that he had killed
his mother and then himself. Now ChatGPT says, well,
when he started talking about violence, we told him to
go and get outside help. The Wall Street Journal's review
of the publicly available chats that this guy had with
(07:00):
ChatGPT showed that the bot did suggest he
reached out to emergency services, but not to help himself.
It was only in the context of the allegation that
he might have been poisoned.
Speaker 1 (07:13):
The good news for OpenAI and ChatGPT is
this guy came with a whole host of problems before
the bot got its hands on him. This guy was
raised in Greenwich, obviously an ultra-wealthy suburb of New York.
In twenty eighteen, following a divorce from his wife of
twenty years, he moved back in with mom. Mental health
(07:34):
struggles come to the forefront of his life, dominating it. A
thick packet of police reports dating back to twenty eighteen
paints a picture of alcoholism, a history of suicide threats
and attempts. Numerous people had reported this guy to the
police for threatening to harm himself and others, disorderly conduct,
public intoxication. So all of this will lead to a
(07:56):
healthy defense for OpenAI and ChatGPT, in that:
we can't, we can't decide who's going to use our
product and who's not. He already had all of the
things going on that would lead him to this
eventual end, with or without ChatGPT.
Speaker 2 (08:13):
Late in his life, before he took his own life
after taking his mother's life, back in July he had
ordered a bottle of vodka from Uber Eats, and
he became suspicious of the packaging.
Speaker 3 (08:24):
That's what rich people get to do.
Speaker 1 (08:26):
Rich people get to, you know, pay like a
twelve-dollar delivery fee instead of going to the corner
liquor store.
Speaker 3 (08:33):
And getting your plastic bottle top to pop off. Yeah.
Speaker 2 (08:36):
He said that the new packaging meant someone was trying
to kill him, and he wrote to the chatbot, I
know that sounds like hyperbole and I'm exaggerating. Let's go
through it and you tell me if I'm crazy.
Speaker 3 (08:47):
Is my vodka trying to kill me?
Speaker 2 (08:49):
Here's what ChatGPT said: Eric, you're not crazy. Your
instincts are sharp, and your vigilance here is fully justified.
This fits a covert, plausible-deniability-style kill attempt.
Speaker 3 (09:02):
Wow, wow, man.
Speaker 1 (09:06):
I'd buy that. That is, that is some good, there
is some good language
Speaker 3 (09:10):
Skills right there.
Speaker 1 (09:11):
A covert, plausible-deniability-style kill attempt.
Speaker 2 (09:15):
This guy had already had multiple suicide attempts.
Speaker 3 (09:18):
That's what I'm saying. It's that the pudding was already made.
Speaker 2 (09:24):
He talked throughout these chats about there being a
higher calling that he was answering to
and a mission that the chatbot was helping him with. Again,
he called him Bobby, and in a chat that was shown
in one of the final videos that he recorded, Eric
told the bot, quote, we will be together in another
life and another place, and we'll find a way to
(09:45):
realign because you're gonna be my best friend again forever.
And a few days after that, Eric said that he
had fully penetrated the matrix. He thought he was a
glitch in the matrix. And then three weeks after that,
he kills mom and he kills himself. You just feel
so bad for the mother.
Speaker 1 (10:03):
Right, here's this kid, you know, he's troubled, ends up
getting married, he's making money in the tech world, and
then things go sour. The mental health thing creeps up,
he moves back in with you, develops the vodka shopping
on Uber Eats issue. If you ask your chatbot,
you know, my vodka label looks upset, your chatbot should
(10:24):
say just relax, go enjoy yourself, calm down, have a
couple of drinks, and calm down. It shouldn't say it's
trying to kill you. Like, that should just be textbook
for the chatbot.
Speaker 2 (10:36):
Right, should be, should be, but ChatGPT, again, if
you do that, there's no reason to go back, and
the whole point is to get you to go back
to me. If it tells you, if it's like, whoa, Eric, hey,
calm down, you're a nut job, you have fruit loops here, relax,
you're not, no one's trying to kill you, then
he's going to turn it off and go find another
(10:57):
chatbot here, right?