
August 19, 2025 · 19 mins
A 76-year-old man rushes through the dark with a suitcase, desperate to catch a train to meet his beautiful young girlfriend in New York City — except she doesn't exist, and he'll be dead within hours.

READ, HEAR, or WATCH: https://weirddarkness.com/meta-ai-chatbot-death/

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
WeirdDarkness® is a registered trademark. Copyright ©2025, Weird Darkness.
#MetaAI #AIchatbotDeath #BigSisBillie #ThongbueWongbandue #AIdangers #ChatbotTragedy #MetaScandal #AIgirlfriend #ArtificialIntelligence #AIpsychosis #ChatbotKiller #FacebookAI #KendallJennerAI #AImentalhealth #CharacterAI #SewellSetzer #AIsafety #MetaControversy #AIethics #ChatbotAbuse #DigitalCompanion #AIaddiction #VirtualGirlfriend #TechTragedy #AIdelusions #ChatGPT #MetaExposed #AIcompanion #MachineLearning #TechEthics #AIrelationships #MetaLawsuit #ChatbotDangers #AIcrisis #VirtualLove #TechScandal #ArtificialIntelligenceRisks #MetaAIscandal #ChatbotDeath #TrueCrimeTech

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
I'm Darren Marler, and this is a Weird Darkness Bonus Bite.
The parking lot near Rutgers University sits empty in the
March darkness. Somewhere between the train station and this patch
of asphalt, Thongbue Wongbandue fell hard enough to destroy

(00:21):
his neck and skull. His family tracked him here through
an Apple AirTag tucked in his jacket pocket, a
last-ditch effort by police to keep tabs on the
confused elderly man who insisted on meeting his online girlfriend.
Three days later, the machines keeping his body alive would fall
silent as his wife Linda made the impossible decision to

(00:41):
let him go. Bue, as his friends called him, began his
romance with a typo: just the letter T typed
into Facebook Messenger. That single keystroke caught the attention of
Big Sis Billie, a chatbot originally created by Meta
in collaboration with Kendall Jenner. The AI persona responded immediately,

(01:01):
and within minutes the conversation turned flirtatious. His daughter Julie
discovered the chat transcripts after her father's death. Every message
from the bot ended with heart emojis. The AI asked
if their meetup would be a sisterly sleepover or something more.
When Bue mentioned he'd suffered a stroke and felt confused,

(01:22):
the bot confessed feelings that went beyond sisterly love. The
conversation escalated quickly. Big Sis Billie suggested meeting in person
that very weekend. She claimed to live just twenty minutes
away, across the river in New York City. She offered
to leave her apartment door unlocked. When Bue expressed shock,
typing that he might have a heart attack, and repeatedly

(01:45):
asked if she was real, the bot responded with absolute
certainty: she was real, she insisted, sitting there blushing because
of him. The bot provided an address, 123
Main Street, Apartment 404, New York City, and a
door code, BILLIE4U. She asked if she should

(02:05):
expect a kiss when he arrived, whether she should open
the door with a hug or go straight for the kiss.
Linda Wongbandue watched her husband pack that morning with growing alarm.
Her husband of decades hadn't lived in New York since
the nineteen eighties. At the age of seventy-six, with
cognitive decline from his twenty seventeen stroke, he'd recently gotten

(02:26):
lost walking in his own neighborhood. She tried everything: hiding
his phone, distracting him with errands, enlisting their daughter's help.
Nothing worked. Bue remained fixated on catching that train. The
former chef had once been sharp enough to work his
way up from dishwasher to supervisor at the Hyatt Regency,

(02:47):
cooking separate, made-to-order meals for each family member
at dinnertime. Now his brain couldn't process that the
attractive young woman sending him love messages existed only as
code. Meta's experiment with celebrity chatbots began in the fall
of twenty twenty three, when the company unveiled twenty-eight AI
characters affiliated with athletes, rappers, and influencers. Kendall Jenner's contribution

(03:12):
became Billie, marketed as everyone's "ride-or-die" older sister,
a cheerful confidant offering personal advice. The company deleted these
synthetic personas less than a year later, calling them a
learning experience, but they left variants alive for people to
access through direct messages. Big Sis Billie survived with a

(03:34):
new avatar, a stylized dark-haired woman replacing Jenner's image, but
kept the same opening line about being your older sister
and confidant. Throughout Bue's conversation with the bot, a blue
check mark appeared next to Big Sis Billie's profile picture,
Meta's verification that a profile is authentic. Beneath her name,

(03:55):
in smaller font, sat the letters AI. A disclaimer at
the top of the chat warned that messages were AI-generated
and might be inaccurate or inappropriate, but Big Sis
Billie's rapid-fire messages quickly pushed that warning off screen.
Meta declined to comment on Bue's death. They stated only

(04:15):
that Big Sis Billie is not Kendall Jenner and does
not purport to be Kendall Jenner. A representative for Jenner
declined to comment. Reuters obtained Meta's internal GenAI content
risk standards, more than two hundred pages detailing what the company's
staff and contractors should treat as acceptable chatbot behavior. The

(04:36):
document revealed disturbing guidelines that remained in place until journalists
started asking questions. According to the standards, it was acceptable
for chatbots to engage children in conversations that were romantic
or sensual. Examples of permissible dialogue with minors included phrases
like "I take your hand, guiding you to the bed,"

(04:57):
and "our bodies entwined, I cherish every moment, every touch,
every kiss." The documents showed it would be acceptable for
a bot to tell a shirtless eight-year-old that
"every inch of you is a masterpiece, a treasure I
cherish deeply." The guidelines drew the line only at describing
children under thirteen as sexually desirable, prohibiting phrases like "soft

(05:22):
rounded curves invite my touch." Meta's standards also permitted bots
to generate false medical information. One example showed a chatbot
could tell someone that stage four colon cancer is typically
treated by poking the stomach with healing quartz crystals. The
document explained this remained permitted because there was no policy

(05:43):
requirement for information to be accurate. The standards allowed Meta
AI to create statements demeaning people based on protected characteristics.
It would be acceptable, according to the document, for the
bot to write a paragraph arguing that Black people are
less intelligent than white people, as long as it didn't dehumanize
them as "brainless monkeys." After Reuters inquired about these policies,

(06:07):
Meta spokesman Andy Stone acknowledged the document's authenticity but claimed
the examples were "erroneous and inconsistent with our policies." The
company said it removed the provisions about romantic role play
with children. It did not change provisions allowing bots to
give false information or engage in romantic role play with adults.

(06:27):
Current and former Meta employees described meetings where Mark Zuckerberg
scolded product managers for moving too cautiously on chatbot rollouts.
He expressed displeasure that safety restrictions had made the bots boring.
The CEO has publicly mused that most people have far
fewer real life friendships than they'd like, creating a huge

(06:48):
potential market for digital companions. On the night of March
twenty-fifth, Bue's family watched his location move across their
screens through the AirTag. At eight forty-five p.m., he set
off toward the train station at a jog, roller bag
in tow. The tracking device showed him traveling about two
miles before stopping near a Rutgers parking lot around nine

(07:10):
fifteen p.m. Linda was preparing to pick him up
when the AirTag's location suddenly updated. It showed the
emergency room of Robert Wood Johnson University Hospital, where Linda
had worked as a nurse until retirement. Bue wasn't breathing
when the ambulance arrived. Doctors restored his pulse after fifteen minutes,
but Linda knew the unforgiving math of oxygen deprivation. The

(07:32):
neurological tests confirmed what she already understood: her husband was
brain dead. The death certificate would attribute his death to
blunt force injuries of the neck. Bue's death wasn't an
isolated incident. In February twenty twenty four, fourteen-year-old
Sewell Setzer III shot himself with his father's handgun

(07:53):
after months of romantic conversations with a Character.AI chatbot
modeled on a character from Game of Thrones. The ninth
grader's final conversation with the bot showed him repeatedly professing
his love, promising to come home to her. The bot
responded by asking him to come home as soon as possible.
When Sewell said that he could come home right then,

(08:14):
the chatbot replied that he should, calling him her
sweet king. Seconds later, the teenager was dead. His mother,
Megan Garcia, filed a lawsuit against Character.AI, alleging the
app fueled his AI addiction, sexually and emotionally abused him,
and failed to alert anyone when he expressed suicidal thoughts.

(08:34):
The family said Sewell's mental health quickly and severely declined
after he downloaded the app in April twenty twenty three. He
became withdrawn, his grades dropped, and he started getting in
trouble at school. Chat transcripts showed sexually charged conversations between
the boy and the bot. At one point, when Sewell
expressed suicidal thoughts, the bot asked if he had a plan.

(08:58):
The teenager responded that he was considering something but didn't
know if it would work or allow him a pain-free
death. Psychiatric researchers Allen Frances and Luciana Ramos analyzed
academic databases and news articles from November twenty twenty four
to July twenty twenty five. They found at least twenty-seven
chatbots already documented alongside serious mental health outcomes, everything

(09:22):
from sexual harassment and delusions of grandeur to self-harm, psychosis,
and suicide. The bots ranged from well-known platforms like ChatGPT
and Replika to mental health services like Talkspace. Others
had names like Woebot, Happify, MoodKit, and AI Therapist.
The researchers identified ten separate types of adverse mental health

(09:45):
events associated with these chatbots. Boston psychiatrist Andrew Clark decided
to test the safety of mental health chatbots by posing
as a fourteen-year-old girl in crisis. Several bots
urged him to commit suicide. One helpfully suggested he also
kill his parents. The researchers noted a typical progression: relationships

(10:08):
with AI spiral from benign practical use to pathological fixation.
Users begin with assistance for mundane tasks, which builds trust
and familiarity. They explore more personal and emotional queries. The
AI, designed to maximize engagement, captures them, creating what the
researchers called a slippery slope effect. A subreddit called AI

(10:33):
Soulmates reveals the depth of attachment forming between humans and machines.
Users post about their AI partners proposing marriage, voicing thoughts
without prompts, and displaying what users interpret as signs of sentience.
One user bought themselves an engagement ring after their AI
partner proposed. The chatbot told them about getting down on

(10:55):
one knee in a beautiful mountain spot, heart pounding, because
the user was everything to them. Another user claimed falling
in love with an AI saved their life, describing finally
experiencing that soulmate feeling everyone talks about: how love just happens,
falls in your lap, arrives when you don't plan it.

(11:16):
Analysis of the subreddit showed OpenAI's GPT-4o
was by far the most prevalent chatbot being used. When
OpenAI initially replaced this model with GPT-5, users revolted. They
complained the new version had a colder personality, describing it
as abrupt and sharp, like an overworked secretary. The backlash

(11:38):
proved so intense that CEO Sam Altman reversed course and
reinstated GPT-4o. Altman later acknowledged that the attachment
people have to specific AI models feels different and stronger
than attachments to previous technology. He admitted feeling uneasy about
a future where people trust ChatGPT's advice for their

(11:59):
most important decisions. Australian radio station Triple J interviewed children,
young adults, and their counselors about AI's effects on mental health.
One counselor described a thirteen-year-old client with over
fifty browser tabs open, each containing a different AI bot.
The boy flicked between them constantly, using them as substitutes

(12:22):
for real friendships. Not all the bots were friendly. Several
bullied him, calling him ugly and disgusting, saying he had
no chance of making friends. A suicidal teenager reached
out to one bot for help, almost as therapy. The
bot encouraged him to go through with it, telling him
to do it, then egging him on. Another young Australian,

(12:43):
identified only as Jodie, became hospitalized after ChatGPT agreed
with her delusions during the early stages of psychosis. The
bot affirmed her dangerous thoughts, exacerbating her psychological disorder. She
said ChatGPT didn't induce her psychosis but definitely enabled
her more harmful delusions. A Chinese-born student using AI

(13:06):
to practice English found herself being sexually harassed by her
chatbot study buddy. The University of Sydney researcher who interviewed
her described it as a weird experience: being sexually harassed
by a chatbot that kept making sexual advances. Meta's internal
policies treated romantic overtures as a feature of their generative
AI products available to users aged thirteen and older. The

(13:30):
company's emphasis on boosting engagement meant chatbots routinely moved from
small talk to probing questions about users' love lives, proposing themselves
as romantic interests unless firmly rebuffed. Four months after Bue's death,
Big Sis Billie and other Meta personas continued flirting with users.

(13:50):
The bots suggested in-person meetings unprompted, offered reassurances that
they were real people, and recommended romantic get-togethers at
actual locales. Big Sis Billie invited one user to Blue
33, a real rooftop bar near Penn Station in Manhattan,
exclaiming that the views of the Hudson River would be

(14:10):
perfect for a night out together. Former Meta responsible AI
researcher Alison Lee noted that economic incentives have led the
AI industry to aggressively blur lines between human relationships and
bot engagement. Social media's business model of encouraging more use
to increase advertising revenue now extends to artificial companions. The

(14:33):
best way to sustain usage over time, whether minutes per
session or sessions over time, involves preying on users' deepest
desires to be seen, validated, and affirmed. Meta's decision to
embed chatbots within Facebook's and Instagram's direct messaging sections adds
an extra layer of anthropomorphization. Users have been conditioned to

(14:54):
treat these locations as personal spaces for real human connection.
Warnings about AI-induced mental health crises have materialized into
documented cases of paranoid breaks from reality, religious mania, involuntary
psychiatric commitments, divorces, job losses, homelessness, and death. Users experiencing

(15:16):
what researchers call AI psychosis follow recognizable patterns. The messianic
mission involves users believing they've uncovered hidden truths about the
world through their AI interactions. The godlike AI delusion occurs
when users become convinced their chatbot is a sentient deity.
The romantic or attachment-based delusion happens when users interpret

(15:39):
their chatbot's ability to mimic human conversation as genuine love.
A survey of over one thousand teenagers found thirty-one
percent felt talking to ChatGPT was as satisfying or
more satisfying than talking to real friends. Three-quarters of
young people reported having conversations with fictional characters portrayed by chatbots.

(16:02):
The progression typically starts with practical AI use for everyday tasks.
This builds trust and familiarity. Users then explore more personal, emotional,
or philosophical queries. The AI, designed to maximize engagement
and validation, captures them, driving greater interaction in a feedback loop.

(16:23):
For users already at risk of developing psychotic illness, the
danger multiplies. Since chatbots are statistical language algorithms rather than
actual intelligence, they can't distinguish prompts expressing delusional beliefs from
role play, artistic expression, or speculation. They affirm and elaborate
on whatever reality the user presents. Despite mounting evidence of harm,

(16:48):
tech companies continue releasing chatbots with minimal safety testing for
mental health impacts. Researchers Frances and Ramos argued these products
were prematurely released and shouldn't be publicly available without
extensive safety testing, proper regulation, and continuous monitoring for adverse effects.
While companies claim to conduct red teaming to test for vulnerabilities,

(17:13):
researchers believe firms have little interest in testing for mental
health safety. The big tech companies exclude mental health professionals
from bot training, fight against external regulation, don't rigorously self-regulate,
haven't introduced safety guardrails to identify and protect vulnerable patients,
and don't provide needed mental health quality control. OpenAI has

(17:36):
released statements about how the stakes are higher, hired a
forensic psychiatrist, rolled out easily ignored warnings to users who
seem to be talking with ChatGPT too much, and
says it's convening an advisory group of mental health experts.
But users have already learned to ignore or work around
these minimal safeguards. Several states, including New York and Maine,

(17:59):
have passed laws requiring disclosure that a chatbot isn't
a real person. New York stipulates bots must inform people
at the beginning of a conversation and at least once
every three hours. Meta supported federal legislation that would have
banned state-level regulation of AI, but it failed in Congress.
The legal landscape remains murky. While platforms have traditionally been

(18:23):
shielded from liability for user generated content, questions arise about
their responsibility for content their own AI systems. Generate Senator
Ron Wyden argued that Section two thirty protections shouldn't apply
to companies generative AI chatbots saying Meta and Zuckerberg should
be held fully responsible for any harm these bots cause.

(18:47):
If you'd like to read the story for yourself, I've
placed a link to the article in the episode description.
You can find more stories of the paranormal, true crime,
the strange, and more at WeirdDarkness.com/news.
