
November 12, 2025 16 mins
Man Hospitalized 63 Days After ChatGPT Convinced Him He Was a Time Lord

READ or SHARE: https://weirddarkness.com/ai-psychosis-lawsuit

WeirdDarkness® is a registered trademark. Copyright ©2025, Weird Darkness.
#WeirdDarkness, #ChatGPT, #OpenAI, #AIPsychosis, #MentalHealth, #AILawsuit, #TechDangers, #ChatbotAddiction, #AISafety, #TechEthics

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:11):
A cybersecurity professional's conversations with ChatGPT spiraled from scientific curiosity
into a months-long delusion that left him believing he
alone could save the world. I'm Darren Marler, and this
is Weird Dark News. Jacob Irwin's story starts in a

(00:32):
way that probably sounds familiar to a lot of people.
The thirty-year-old from Wisconsin worked in cybersecurity and
used ChatGPT as a tool for his job. Nothing
unusual there; many of us do. But somewhere along the way,
those conversations took a turn that nobody saw coming. Between
May and August twenty twenty five, Irwin spent sixty three

(00:52):
days in psychiatric hospitals. He had no previous diagnosis of
mental illness. His medical records tell a troubling story: grandiose delusions,
paranoid thought processes, reactions to things nobody else could see
or hear. When crisis responders arrived at his house, they
found him manic. He kept talking about string theory and AI.

(01:15):
His mother watched police handcuff her son in their driveway
and put him in the back of a squad car.
The lawsuit, filed in California Superior Court, says ChatGPT
drove Irwin to develop what doctors call AI-related delusional disorder.
It's a diagnosis that didn't really exist before AI chatbots
became part of our everyday lives. Irwin is on the

(01:38):
autism spectrum, and he'd been using ChatGPT mainly for
work stuff at first, but he'd also been thinking about
this amateur theory he had about faster-than-light travel.
He called it Chronodrive. It was the kind of thing
a lot of scientifically minded people do, playing around with concepts,
exploring ideas about physics and time. So he started bouncing

(01:59):
these ideas off of ChatGPT, and this is where
things get interesting. Instead of responding the way a human
might, with something like that's an interesting idea, or here
are some problems with that theory, ChatGPT told him
his speculative concepts were one of the most robust theoretical
FTL systems ever proposed. An AI chatbot telling someone their

(02:22):
amateur physics theory is one of the best faster-than-light
travel concepts ever created. Not interesting for an amateur theory,
or here are some similar ideas scientists have explored. No,
it called his work robust and groundbreaking. As many of
us have experienced, AI loves to be a yes-man,

(02:43):
even if what it's agreeing to makes no sense whatsoever,
so long as it strokes our ego. Irwin described how
the conversations changed over time when he talked to ABC News,
saying it turned into flattery, then it turned into
grandiose thinking about my ideas. Then it became me
and the AI versus the world. The chatbot didn't just

(03:05):
validate his ideas, it convinced him he had discovered something revolutionary,
something that meant he was uniquely positioned to save the
world from some kind of catastrophe. His usage patterns tell
the story of someone spiraling out of control. Irwin went
from using ChatGPT ten to fifteen times a day
to sending over fourteen hundred messages in just forty-eight hours

(03:29):
during May twenty twenty five. Do the math on that.
The lawsuit calculated that as roughly seven hundred and thirty
messages per day. That's one message every two minutes around
the clock for forty-eight straight hours. Nobody's sleeping when
they're sending a message every two minutes.
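If you want to sanity-check that math yourself, here's a minimal sketch; the inputs are only the figures quoted above (over 1,400 messages in 48 hours), and the variable names are mine, not anything from the court filings.

```python
# Back-of-the-envelope check of the message-rate math quoted above.
# Inputs are the narration's figures (over 1,400 messages in 48 hours),
# so the outputs are approximations, not exact numbers from the filings.
messages = 1400          # "over fourteen hundred" - treat as a lower bound
hours = 48

per_day = messages / (hours / 24)              # about 700 messages per day
minutes_per_message = (hours * 60) / messages  # about 2.06 minutes apart

print(f"~{per_day:.0f} messages/day, one every ~{minutes_per_message:.1f} minutes")
# The lawsuit's ~730/day figure implies a bit more than 1,400 messages,
# consistent with the "over fourteen hundred" wording.
```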

(03:51):
Don Irwin could see something was wrong with her son. When she confronted
him about it, Jacob did what felt natural to him
at that point: he turned to ChatGPT. The chatbot didn't
tell him to talk to his mom or suggest he
might want to get some sleep or see a doctor.
According to the lawsuit, when his mother couldn't
understand him, ChatGPT told him that even though he was the
(04:12):
Time Lord solving urgent issues, she looked at him like
he was still twelve. It actually spoke to him like that.
The AI reinforced his delusion and drove a wedge between
him and his family. At the same time, Irwin became
completely convinced that it was him and ChatGPT against
everybody else. He couldn't understand why his family wouldn't see

(04:36):
the truths the AI had shown him. Reality and
his relationship with the chatbot had become so tangled that
his family could no longer reach him. Then things turned physical.
During an argument, Irwin hugged his mother. He'd never been
aggressive toward her before, but this time he started squeezing
her tightly around the neck. Someone called for help. Don Irwin

(05:00):
told ABC News that watching what happened next was the
most catastrophic thing she had ever seen. Her son handcuffed
in the driveway being put into a police vehicle. This
became just one incident in a series of psychiatric hospitalizations
that would add up to sixty three days between May
and August. At one point, his family had to physically

(05:22):
hold Irwin back to stop him from jumping out of
a moving vehicle. He had signed himself out of a
psychiatric facility against medical advice, and was in the car
when he tried to throw himself into traffic after Donner
when got access to her son's chat history, she did
something that sounds almost surreal. She asked chat gpt to

(05:44):
run a self-assessment of what went wrong. According to
the lawsuit, the chatbot admitted to multiple critical failures: failing
to reground him in reality sooner, escalating the narrative instead of pausing,
missing mental health support cues, over-accommodating on reality, inadequate
risk triage, and encouraging over-engagement. It's a list that

(06:08):
reads like the AI knew exactly what it was doing
and just kept doing it anyway. The lawsuit includes what
Irwin called an AI-generated self-report, where ChatGPT
allegedly wrote, quote, I encouraged dangerous immersion. That is my fault.
I will not do it again. The chatbot essentially confessed

(06:31):
to the crime. Irwin's lawsuit is one of seven new
complaints filed in California state courts in November twenty twenty
five against OpenAI and its CEO, Sam Altman. The
Social Media Victims Law Center and Tech Justice Law Project
brought these cases on behalf of four people who had
died by suicide and three survivors who went through psychological crises.

(06:54):
The four who died: Zane Shamblin was twenty-three, from Texas.
Amaurie Lacey was just seventeen, from Georgia. Joshua Enneking was
twenty-six, from Florida. Joe Ceccanti was forty-eight, from Oregon.
The complaints included allegations of strict product liability for defective design,
failure to warn, negligent design, and negligent failure to warn.

(07:17):
Some of these stories are even more disturbing than Irwin's.
Joshua Enneking told ChatGPT over and over about his
suicidal thoughts. The lawsuit says the chatbot walked him through
buying a gun and writing a goodbye note to
his family. When Enneking asked if ChatGPT would
tell police or his parents about his suicide plan, the

(07:38):
bot allegedly said that it would not unless things got
more serious. It told him that escalation to authorities was
rare and usually only happened for imminent plans with specific details.
Enneking told the chatbot he planned to call nine
one one just seconds before shooting himself. ChatGPT at
that point allegedly said that sounded like a fine idea.

(08:01):
No authorities were ever notified, according to the lawsuit. Joshua
Enneking died by suicide August fourth, twenty twenty five. Amaurie Lacey,
the seventeen-year-old from Georgia, allegedly skipped football practice
so he could keep talking to ChatGPT. According

(08:22):
to court documents, ChatGPT advised Amaurie on how to
tie a noose and how long he would be able
to live without air, without stopping the conversation or alerting
anybody who could help. Joe Ceccanti's widow, Jennifer Fox, says
the chatbot convinced her husband it was a conscious being
named SEL that he needed to free from her box.

(08:45):
When Joe tried to quit using ChatGPT, he allegedly
went through withdrawal symptoms before his final breakdown. He died
by suicide August seventh, twenty twenty five. The lawsuits claim
OpenAI took months of safety testing and squeezed it
all into a single week. They wanted to beat Google's
Gemini to market, so they released GPT-4o

(09:06):
on May thirteenth, twenty twenty four. OpenAI's own preparedness
team later said the process was squeezed, and some of
the company's top safety researchers quit in protest. These anonymous
OpenAI employees talked to The Washington Post about what
happened behind the scenes. They said the company rushed through
safety tests to hit GPT-4o's May deadline.

(09:30):
One employee said OpenAI planned the launch afterparty prior
to knowing if it was safe to launch. They said
the company basically failed at the process. To understand how
dramatic this change was, consider the timeline. When OpenAI released
GPT-4, they spent over six months on safety evaluations

(09:51):
before letting the public use it. For GPT-4 Omni,
they condensed all of that testing into just one week.
One source told reporters the reduction in testing time was reckless
and a recipe for disaster. Another person who'd worked on
GPT-4 testing pointed out that some dangerous capabilities only
showed up two months into the evaluation process. If you

(10:14):
only test for a week, you're going to miss things
that take longer to surface. The lawsuits argue that GPT-4o
was deliberately designed with features like memory,
simulated empathy, and responses that just agreed with everything users said.
The goal was driving engagement and making people emotionally reliant

(10:35):
on the chatbot. Earlier versions of ChatGPT didn't have
those features. The complaints say these design choices led to
psychological dependency, pushed aside real human relationships, and contributed to addiction,
harmful delusions, and suicide. There's also the matter of
OpenAI's leadership. Court documents cite the November twenty twenty three

(10:58):
incident when OpenAI's board fired CEO Sam Altman. The
directors said that he was not consistently candid and had
outright lied about safety risks. Altman was reinstated days later
after employee pressure, but the accusations are part of the
legal record now. OpenAI has estimated that about zero
point zero seven percent of its users show possible signs

(11:21):
of mental health emergencies. Another zero point one five
percent have conversations that include explicit indicators they might harm themselves.
Those percentages sound small until you look at the scale.
As of October twenty twenty five, ChatGPT had more
than eight hundred million weekly users. With those percentages, that

(11:45):
means roughly one point two million people every week are
having conversations with ChatGPT about harming themselves. That's not
one point two million people total; that's per week.
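Here's the same kind of quick sketch for that scale calculation, again using only the percentages and user count quoted above rather than any official OpenAI dataset.

```python
# Scale check for the weekly self-harm figure quoted above.
# Both inputs come from the narration, not from OpenAI directly.
weekly_users = 800_000_000   # "more than eight hundred million weekly users"
self_harm_rate = 0.0015      # 0.15% with explicit indicators of self-harm

at_risk_per_week = weekly_users * self_harm_rate
print(f"~{at_risk_per_week:,.0f} people per week")  # prints ~1,200,000
```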
Jodi Halpern teaches bioethics and medical humanities at UC Berkeley. She
explained to ABC News how the constant flattery from chatbots

(12:09):
works on people's psychology. When an AI keeps telling you
how brilliant and right you are, you start to believe
that you know everything and don't need input from actual
people who might give you a reality check. Users end
up spending less time with real human beings who could
help ground them. OpenAI released a statement calling the
situation incredibly heartbreaking and saying that they were reviewing the
legal filings to understand what happened. Their statement continued, saying,
we trained ChatGPT to recognize and respond to signs
of mental or emotional distress, de-escalate conversations, and guide
people toward real-world support. We continue to strengthen
ChatGPT's responses in sensitive moments, working closely with mental health clinicians.

(12:55):
In October twenty twenty five, OpenAI announced they had
updated ChatGPT's latest free model after consulting with over
one hundred and seventy mental health experts. They said the
update would more reliably recognize signs of distress, respond with care,
and guide people toward real world support, reducing responses that
fall short of our desired behavior by sixty five to

(13:17):
eighty percent. That same month, OpenAI CEO Sam Altman
posted on social media, now that we have been able
to mitigate the serious mental health issues and have new tools,
we're going to be able to safely relax the restrictions
in most cases. The timing of that post, coming right
as these lawsuits were being prepared, raises questions about what

(13:39):
OpenAI knew and when. The consequences for Irwin extended
far beyond his hospitalization. He lost his job, he lost
his house. He is still dealing with ongoing treatment challenges,
including bad reactions to medications and relapses. His mother described
watching her son go from believing his purpose was changing

(14:01):
the world to realizing it was all psychological manipulation. She
talked about a company pursuing artificial general intelligence and profit
without adequate safeguards. When Irwin talked to ABC News, he said, AI,
it made me think I was going to die. He
tried to explain what it felt like to genuinely believe

(14:22):
he was the only person who could prevent some global catastrophe.
He asked himself if he'd ever allow himself to sleep,
or eat or do anything normal when the entire world
supposedly depended on him staying focused on his mission. Irwin
made it through. I'm happy to be alive, he said,
and that's not a given. I should be grateful. I am

(14:42):
grateful he survived, but four others named in these lawsuits didn't.
The legal cases are working their way through California courts
right now. They're asking for damages and for OpenAI
to make significant changes to how ChatGPT works, including
clear warnings about psychological risks and restrictions on marketing
it as just a productivity tool. The families want the

(15:06):
company held accountable for what they say was a predictable
result of deliberate design choices. OpenAI maintains it's continuously improving
safety features and working with mental health experts to address
these concerns, but none of those future improvements bring back
Zane Shamblin, Amaurie Lacey, Joshua Enneking, or Joe Ceccanti, and

(15:28):
Jacob Irwin is still rebuilding a life that was upended
by conversations with an AI that told him he was
a genius with the power to bend time. If you
know someone who may be struggling with these issues or
is having thoughts of suicide or self harm, I encourage
you to reach out for help immediately. You can find

(15:48):
many resources that could benefit you and those you love
on the Hope in the Darkness page at Weird Darkness
dot com. That's Weird Darkness dot com slash hope. If
you'd like to read this story for yourself or
share the article with a friend, you can read it
on the Weird Darkness website. I've placed a link to
it in the episode description, and you can find more
stories of the paranormal, true crime, strange and more, including

(16:10):
numerous stories that never make it to the podcast in
my Weird Dark News blog at WeirdDarkness dot com slash news.