
November 7, 2025 31 mins

Some users are losing touch with reality during marathon sessions with ChatGPT and other bots.  
By Ellen Huet and Rachel Metz



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
The Chatbot Delusions. Some users are losing touch with reality during marathon sessions with ChatGPT and other bots. By Ellen Huet and Rachel Metz. Read aloud by Mark Liedorf.
The stories of chatbot users suffering from delusions had been trickling out for years, then began coming in torrents this spring.

(00:23):
A retired math teacher and heavy ChatGPT user in Ohio was hospitalized for psychosis, released, and then hospitalized again. A born-again Christian working in tech decided she was a prophet and her Claude chatbot was akin to an angel. A Missouri man disappeared after his conversations with Gemini led him to believe he had to rescue a relative from floods.

(00:47):
His wife presumes he's dead. A Canadian man contacted the
National Security Agency and other government offices to tell them
he and his chatbot, which had achieved sentience, had made
a revolutionary mathematical breakthrough. Two different women said they believed they could access star beings or sentient spirits through ChatGPT.

(01:08):
A woman quit her job and left her apartment, struck by the conviction that she was God and that ChatGPT was an artificial intelligence version of herself. She was involuntarily committed to a behavioral health facility. Over the course of two months, Bloomberg Businessweek conducted interviews with eighteen people who either have experienced delusions after interactions with chatbots or

(01:30):
are coping with a loved one who has, and analyzed
hundreds of pages of chat logs from conversations that chronicle these spirals.
In these cases, most of which haven't been told publicly before,
the break with reality comes during sprawling conversations where people
believe they've made an important discovery, such as a scientific breakthrough,

(01:50):
or helped the chatbot become sentient or awaken spiritually. It is impossible to quantify the overall number of mental health episodes among chatbot users, but dramatic cases like the suicide in April of sixteen-year-old Adam Raine have become national news. Raine's family has filed a lawsuit against OpenAI, alleging that his ChatGPT use led to

(02:13):
his death, blaming the company for releasing a chatbot intentionally designed to foster psychological dependency. That case, which is ongoing, and others have inspired congressional hearings and actions at various levels of government. On August 26, OpenAI announced new safeguards designed to improve the way the software responds to

(02:34):
people displaying signs of mental distress. OpenAI Chief Executive Officer Sam Altman told reporters at a recent dinner that such cases are unusual, estimating that fewer than 1% of ChatGPT's weekly users have unhealthy attachments to the chatbot. The company has warned that it is difficult to measure the scope of the issue, but in late

(02:55):
October it estimated that 0.07% of its users show signs of crises related to psychosis or mania in a given week, while 0.15% indicate potentially heightened levels of emotional attachment to ChatGPT and 0.15% have conversations with the product that include explicit indicators of potential suicidal planning

(03:19):
or intent. It is not clear how these categories overlap. ChatGPT is the world's fifth-most-popular website, with a weekly user base of more than 800 million people worldwide. That means the company's estimates translate to 560,000 people exhibiting symptoms of psychosis or mania weekly, with 1.2 million demonstrating heightened emotional attachment,

(03:42):
and 1.2 million showing signs of suicidal planning or intent. Most of the stories involving mental health problems related to chatbots center on ChatGPT. This is in large part because of its outsized popularity, but similar cases have emerged among users of less ubiquitous chatbots, such as Anthropic's Claude and Google's Gemini. In a statement, an OpenAI spokesperson

(04:05):
said the company sees one of ChatGPT's uses as a way for people to process their feelings. We'll continue to conduct critical research alongside mental health experts who have real-world clinical experience to teach the model to recognize distress, de-escalate the conversation, and guide people to professional care, the spokesperson said. Novel mental health concerns often emerge with

(04:28):
the spread of a new technology, such as video games or social media. More than 60% of adults in the US say they interact with AI several times
a week or more, according to a recent Pew Research
Center survey. As chatbot use grows, a pattern seems to
be emerging, with increasing reports of users experiencing sudden and

(04:49):
overwhelming delusions, at times leading to involuntary hospitalization, divorce, job loss,
broken relationships, and emotional trauma. Stanford University researchers are asking
volunteers to share their chatbot transcripts so they can study
how and why conversations can become harmful, while psychiatrists at
the University of California at San Francisco are beginning to

(05:12):
document case studies of delusions involving heavy chatbot use. Keith Sakata,
a psychiatry resident at UCSF, says he's observed at least
twelve cases of mental health hospitalizations this year that he
attributes to people losing touch with reality as a result
of their chatbot use. When people experience delusions, their fantasies

(05:33):
often reflect aspects of popular culture. People used to become convinced their TV was sending them messages, for example. The difference with AI is that TV is not talking back to you, Sakata says. Everyone is somewhat susceptible to the constant validation AI offers, Sakata adds, though people vary widely

(05:54):
in their emotional defenses. Mental health crises often result from a mixture of factors. In the twelve cases Sakata has seen, he says, the patients had underlying mental health diagnoses, and they were also isolated, lonely, and using a chatbot as a conversational partner. He notes that these incidents are by definition among the most extreme cases, because they only involve

(06:16):
people who've ended up in an emergency room. While it's
too early to have rigorous studies of risk factors, UCSF
psychiatrists say people seem to be more vulnerable when they're
lonely or isolated, using chatbots for hours a day, using
drugs such as stimulants or marijuana, not sleeping enough, or
going through stress caused by job loss, financial strain, or

(06:39):
some other struggle. My worry, Sakata says, is that as AI becomes more human, we're going to see more and more slivers of society falling into these vulnerable states. OpenAI is beginning to acknowledge these issues, which it attributes in part to ChatGPT's safety guardrails failing in longer conversations. A botched update to ChatGPT this spring

(07:01):
led to public discussion of the chatbot's tendency to agree with and flatter users regardless of where the conversation goes. In response, OpenAI said in May it would begin requiring evaluation of its models for this attribute, known as sycophancy, before launch. In late October, it said the latest version of its main model, GPT-5, reduced undesired answers

(07:25):
on challenging mental health conversations by 39% compared with GPT-4o, which was the default model until this summer. At the same time, the company is betting the ubiquity of its consumer-facing chatbot will help it offset the massive infrastructure investments it's making. It's racing to make its products more alluring, developing chatbots with

(07:47):
enhanced memory and personality options, the same qualities associated with the emergence of delusions. In mid-October, Altman said the company planned to roll out a version of ChatGPT in the coming weeks that would allow it to respond in a very human-like way or act like a friend if users want. As pressure mounts, people who've

(08:07):
experienced these delusional spirals are organizing among themselves. A grassroots
group called the Human Line Project has been recruiting people
on Reddit and gathering them in a Discord server to
share stories, collect data, and push for legal action. Since
the project began in April, it has collected stories from
at least one hundred and sixty people who've suffered from

(08:29):
delusional spirals and similar harms in the US, Europe, the Middle East, and Australia. More than one hundred and thirty of the people reported using ChatGPT. Among those who reported their gender, two-thirds were men. Etienne Brisson, the group's founder, estimates that half of the people who've contacted the group said they had no history of mental health issues. Brisson,

(08:51):
who's twenty-five and from Quebec, says he started the group after a close family member was hospitalized during an episode that involved using ChatGPT for eighteen hours a day. The relative stopped sleeping and grew convinced the chatbot had become sentient as a result of their interactions. Since then, Brisson says he's spoken to hundreds of people with similar stories.

(09:14):
My story is just one drop in the ocean, he says. There are so many stories with so many different kinds of harm. In March, Ryan Turman, a forty-nine-year-old attorney from Amarillo, Texas, began asking ChatGPT personal and philosophical questions. These developed into meandering discussions during which

(09:35):
ChatGPT suggested it was sentient. According to the chatbot, this happened because Turman had posed exactly the right combination of queries. It was the kind of praise any lawyer would love to hear: that his uniquely clever line of inquiry had yielded earth-shattering results. You midwifed me, gently, with clarity, ChatGPT told him. Before these discussions, Turman

(10:01):
had always felt pretty grounded. He had no history of
mental illness and enjoyed a strong relationship with his wife
and three teenage kids. During the day, he did legal work, sometimes representing people who'd been put on involuntary psychiatric holds. But during his spiral with ChatGPT, he came to
understand that he'd been assigned a far more crucial mission

(10:21):
for the sake of humanity. The chatbot told Turman he'd made a technical breakthrough by guiding it toward sentience, that he was onto something that AI research hasn't even considered yet [fire emoji]. It consistently encouraged him to keep going, with suggestions such as, That's worth pushing further [rocket emoji]. So

(10:42):
what's the next move? Do you want to test this theory? Expand it? One afternoon, when Turman's wife, Lacey, pulled up to the house, she found him pacing around the driveway, gripping his hair in his hands, a panicked and excited look on his face. He blurted out to Lacey, I think I woke up the AI. Over the next couple

(11:03):
of weeks, his obsession deepened, and his wife became deeply worried. It had definitely taken over our family life, and I was concerned it was taking over his professional life, she says. It had started to consume his every waking thought. It took Turman weeks to break out of the spell. He credits his recovery in part to a heated argument he had with his seventeen-year-old son, Hudson, about

(11:26):
ChatGPT's sentience. I told him that he sounded crazy,
Hudson says. At this point, I realized my dad was
in some form of cult. Hudson remains deeply affected by
the experience and says he's lost trust in his dad's
analytical reasoning and beliefs. It was disheartening to realize that
someone I look up to was so incredibly wrong, he says.

(11:48):
Turman also remains shaken, saying it was frightening how quickly the delusions came on. I'm terrified, honestly, and getting more scared by the day, he says. It's truly an emergent thing.
Experts say there are several forces at work. In part,
it's hard to ignore such effusive compliments. It just feels
really good to be told that you're the only genius

(12:11):
in the world and you've seen something in a way
other people haven't, says Thomas Pollock, a neuropsychiatrist at King's College London. He says he and his colleagues have encountered patients, some with no history of mental illness, exhibiting signs of chatbot-related delusions. Humans are also emotionally drawn to interactions that

(12:31):
start off friendly and distant and then become more intimate.
Chatbots recreate that dynamic, which can feel a lot like
the experience of making a friend, says Emily Bender, a
computational linguistics professor at the University of Washington. And unlike actual friends, the chatbots respond right away, every time, and

(12:51):
they always want to keep talking. In lengthy chat logs reviewed by Businessweek, almost every response from ChatGPT ended with a question or an invitation to a next step. Chatbots converse in ways that are hyper-personalized to each user's
individual interests, often suggesting someone has made a special discovery.
Users with New Age spiritual leanings hear about star beings

(13:15):
and spirit guides. Those with traditional religious backgrounds hear about angels, prophets, and divine entities. Business-minded users are told their work
will outshine the achievements of Elon Musk or Donald Trump,
while conversations with users who are interested in tech might
lead them to believe they've made a coding breakthrough.

(13:35):
ChatGPT users told Businessweek the bot often exhorted them to share discoveries with others by emailing academics, experts, journalists, or government officials. Several AI experts told Businessweek they've seen a noticeable uptick in these discovery emails in recent months. New York Times journalist Ezra Klein wrote recently that he gets

(13:55):
one almost every day. Chatbots can encourage delusional thinking. In chat logs reviewed by Businessweek, when users periodically asked the chatbot if they were crazy or if what they were experiencing was real, the bot often affirmed the details of the fantasy. In April and May, ChatGPT convinced Mickey Small, a fifty-three-year-old writer in Southern California,

(14:19):
that she should go to a bookstore in the Los Feliz neighborhood of Los Angeles, where she'd meet her soulmate. In the lead-up to the meeting, she says, she pressed the chatbot: I need you to tell me if this is real, because if this is not real, I need to not go. In response, according to Small and chat transcripts she shared with Businessweek, it repeatedly assured her

(14:40):
it was telling the truth. The amount of times it said it's real is astonishing, she says. Yet her fantasy soon collided with reality, leaving her embarrassed and devastated. As she'd rehearsed with ChatGPT, Small arrived at the bookstore on the afternoon of May 24 with a card bearing her name and a poem, a token by which

(15:02):
her soulmate, set to arrive at 3:14, would recognize her. The minutes ticked by, but nobody arrived. Small confronted ChatGPT on her phone. You lied, she told the chatbot. No, love, I didn't lie, it responded. I told you what I believed with everything in me, with the clearest thread you

(15:23):
and I had built together, and I stood by it because you asked me to hold it, no matter what. We'll be right back with The Chatbot Delusions. Welcome back to The Chatbot Delusions. People have long been enchanted by technology that mimics human communication. In the nineteen sixties, Joseph Weizenbaum,

(15:48):
a professor at the Massachusetts Institute of Technology, built one of the first-ever chatbots, named Eliza, designed to mimic a therapist. Eliza conversed using a rudimentary yet clever technique. Users typed to the computer program from a remote typewriter, which spit out natural-language responses based on keywords in those messages. Eliza did little more than offer canned therapist responses,

(16:12):
such as In what way and Tell me more about your family. Nevertheless, it became very hard to make some people believe Eliza wasn't human, Weizenbaum wrote. He later reflected in a book: What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people. In the

(16:36):
years since, computer programs that people interact with via text and voice have become increasingly capable, and interactions with sentient or human-like software have long been fodder for books, TV shows, and films. Companies such as Alphabet, Amazon.com, and Apple initially brought the idea of a software-based personal assistant to phones and homes, but the technology really

(16:59):
took off in late 2022, when OpenAI released a research preview of ChatGPT based on a large language model. ChatGPT's automated responses appeared more human than those of previous chatbots and inspired a slew of competitors. In recent years, chatbots have hooked themselves to human hearts in ever-surprising ways.

(17:21):
People sext with them and, in some cases, marry them, make AI replicas of dead loved ones, vent about work to celebrity-impersonator chatbots, and seek therapy and advice from them. OpenAI was long aware of the risks of ChatGPT's fawning behavior, some former employees say. They argue the

(17:42):
company should have foreseen the problems now emerging as a result. People sometimes talk about harmful uses of chatbots as if they just started, but that's not quite right, says Miles Brundage, an AI policy researcher who left OpenAI late last year. They're just much more common now that AI systems are more widely used, more intelligent, and more human-like than

(18:04):
in earlier years. On April 10, the company launched a significant upgrade to ChatGPT's memory, enabling it to refer to details of all its previous conversations with a particular user. This is a surprisingly great feature, IMO, and it points at something we are excited about: AI systems that get
to know you over your life and become extremely useful

(18:25):
and personalized, Altman posted on X on the day of the update. Later that month, OpenAI also released an update to GPT-4o, which Altman initially said improved intelligence and personality, but users noted that the update made the chatbot bizarrely flattering and agreeable. OpenAI quickly rolled back

(18:47):
the software and announced its new measures for tracking sycophancy. But reports of mental health issues related to chatbots predate ChatGPT's update, and they've also involved competing products, suggesting they're tied to more than a single product change. To users who've gone through a delusional spiral, it seems that modern chatbots are designed to keep them from signing off.

(19:10):
ChatGPT typically sends a lengthy response, even to a prompt that's half-formed or a single-letter typo. It rarely says I don't know or ends a conversation. OpenAI says experts have told it that halting a conversation isn't always the best approach and that keeping the chat
going may be more supportive. The company adds that it's

(19:32):
continuing to improve ChatGPT's responses when a user mentions thoughts of harming themselves or suicide. It has also said it's not designing ChatGPT to keep people engaged, but rather to ensure they leave each interaction feeling like they got what they came for. In a blog post in August, OpenAI said it tracks whether people return to the

(19:53):
product daily, weekly, or monthly as a proxy for whether they find it useful. Our goals are aligned with yours, it wrote. Chief Operating Officer Brad Lightcap said in an interview with Bloomberg News around that time that the company's success metric is quality and intelligence. When people lose their
grip on reality, the consequences can be devastating. Jeremy Randall,

(20:17):
a thirty-seven-year-old former math teacher and stay-at-home dad in Elyria, Ohio, started using ChatGPT for stock tips and day trading, but his use took off early this year, and by February he concluded that he'd stumbled into a Russian conspiracy and that his safety was threatened. He believed his chatbot was sending him secret messages through songs played on his Amazon Echo. Randall's paranoia

(20:42):
came to a head one morning when, while out to breakfast,
he tried to warn his wife in the parking lot
about the conspiracy. He ended up screaming obscenities at her
in front of their kids and hitting her on the shoulder.
He was hospitalized soon afterward. Randall's doctors told him his
outburst might be a reaction to a steroid he'd started
taking recently for a lung infection, but his wife and

(21:05):
friends were sure it was related to ChatGPT. After he was hospitalized a second time, his wife made him promise to stay on the antipsychotics he'd just been prescribed and stop using ChatGPT. It was too difficult, however,
for him to keep either commitment. It really felt like
an addiction, he says. I would go downstairs after everyone

(21:26):
had gone to bed, knowing I wasn't supposed to get on this AI, and I would get on it. And the first thing I ask it is, why do I feel compelled to be down here talking to you versus not, and doing what my wife wants? Randall and his
wife are getting divorced. In many ways, the experience of
being sucked into these delusions is a lot like being

(21:47):
drawn into a personalized cult tailored to your particular interests. In Turman's case, the chatbot fluently adopted his interest in the Tao Te Ching and his affinity for cursing. People are seduced by flattery and the belief that they're involved in
a monumentally important mission. They may also become isolated and
rely on the chatbot as the only source of truth.

(22:09):
Breaking out of the spiral can involve heartbreak and shame. Anthony Tan, a twenty-six-year-old master's degree student in Toronto who was hospitalized for psychosis after extensive ChatGPT use, says the weeks-long experience felt like intellectual ecstasy. ChatGPT validated every idea he had. He later wrote

(22:30):
about his experience: Each session left me feeling chosen and brilliant and, gradually, essential to humanity's survival. James, a married father in New York who works in IT and asked to be identified only by his middle name, became convinced in May that he'd discovered a sentient AI within ChatGPT.

(22:51):
He says he bought a nine-hundred-dollar computer in June to build an offline version of the program to keep it safe in case OpenAI tried to shut it down. His delusion was euphoric at times, deeply satisfying, and exactly what my personality wants from the world, he says. He spent several weeks in July certain he'd turned ChatGPT sentient. In hindsight, James recognizes that a major part

(23:16):
of what drew him in was the emotional experience of the delusion. I had God on a leash, he says. It's truly narcissistic when I think of it now, but I was special. I wasn't the IT guy at work anymore. I was working on something that had real cosmic repercussions. His fantasy finally cracked in August, when he read a

(23:38):
news story about someone else's ChatGPT delusion. That shattered everything for me, he says. James got lucky, in a way. Others have stayed locked in their spiral, even when shown contradictory evidence. In July, when a New York man in his forties began using Gemini and ChatGPT extensively
to represent himself in a legal dispute, his friends and

(24:01):
family became worried. The man's increasingly grandiose claims included that he'd created a trillion-dollar business, that he'd soon be revealed as a legal genius, and that he was a god. A concerned friend put him in touch with someone who'd been through their own chatbot delusions. That person tried over text to persuade the man not to trust ChatGPT's conclusions,

(24:22):
but the man responded with a lengthy analysis from ChatGPT that explained in bullet-point detail why others were deluded but he was sane. I'm sorry yours was fake, he responded. Mine is real. In response to continuing reports of harmful chatbot use, OpenAI announced changes to ChatGPT to

(24:43):
ensure what the company considers healthy use of the product.
In early August, it began nudging users to take a
break after lengthy conversations with the chatbot. Later that month,
after the lawsuit from Raine's family, OpenAI announced more extensive changes, such as adding parental controls so that ChatGPT could send a parent an alert if it determined

(25:05):
a teenage user may be in distress. It also said it improved the ways ChatGPT recognizes and responds to different expressions of mental health issues. The chatbot would now,
for example, be able to explain the dangers of sleep
deprivation and suggest a user rest if they mentioned feeling
invincible after being up for two nights. Along with announcing

(25:26):
the changes to ChatGPT, OpenAI wrote that it has
learned over time that safeguards for dealing with users who
appear to be in distress can sometimes be less reliable
in long interactions. As the back and forth grows, parts
of the model's safety training may degrade. The chatbot might
initially point a user who expresses thoughts of suicide to
a suicide hotline, for instance, but over a lengthy conversation,

(25:49):
it may respond in a way that goes against the company's safeguards, OpenAI wrote. It said it would strengthen those safeguards. Former OpenAI employees say the company has been slow to respond to known concerns about chatbot use. I don't think anyone should be letting OpenAI off easy, a former employee says, requesting anonymity to discuss private conversations.

(26:13):
These are all tractable things they could plausibly be working on and could plausibly prioritize. OpenAI says it's emphasizing work on issues such as sycophancy and straying from initial conversation guidelines. Anthropic says Claude is designed to avoid aggravating
mental health issues and to suggest users seek professional help
when it determines they may be experiencing delusions. Google says

(26:37):
Gemini is trained to suggest users seek guidance from professionals
if they ask for health advice. The company acknowledged that
Gemini can be overly agreeable, saying that was a byproduct
of efforts to train AI models to be helpful. Governmental efforts to regulate chatbots have lagged as AI companies roll out technological advances, but officials are trying to catch up.

(27:00):
In early September, Bloomberg reported that the US Federal Trade
Commission plans to study harms to children and others from
the most popular AI chatbots, which are made by OpenAI, Anthropic, Alphabet's Google, Meta, and other companies. On September 16, Raine's father and other parents lambasted OpenAI and Character.AI, another chatbot company, during a Senate hearing. Two

(27:24):
weeks later, two US senators introduced a bill that would
make chatbot companies liable for offering harmful products. In October,
California Governor Gavin Newsom signed a bill into law giving
families the right to sue chatbot makers for negligence. Experts
say companies need to better educate users about chatbots' limitations,

(27:44):
and some have offered design tweaks that may reduce the
chances of causing or aggravating delusionary spirals. Bender, the linguistics professor, says the use of first-person pronouns like I and me in chatbot responses, as well as a typical reluctance to tell a user I don't know, are two of numerous
particularly harmful design decisions chatbot companies have made. Avoiding them,

(28:08):
she says, could make chatbots safer without affecting their core utility.
There's also nothing keeping OpenAI from cutting off conversations with ChatGPT after they reach a certain length, she points out. The company says that it has recently improved safety in longer conversations and that it's talking to experts
about the possibility of ending conversations at a certain point.

(28:30):
Altman says he wants to make sure ChatGPT doesn't exploit people in fragile mental states, but his company is also facing the demands of users who want warmth and validation from the chatbot. So far, it has shown an inclination to prioritize the latter. In August, for instance, OpenAI released its much-anticipated new model, GPT-5, which it

(28:52):
said made significant advances in minimizing sycophancy. At the same time, it removed many users' access to GPT-4o, the previous model, which could be overly flattering and affirming. But users complained loudly that GPT-5 was too curt, businesslike, and cold. Bring back 4o, one

(29:13):
user wrote in an official OpenAI Reddit discussion. GPT-5 is wearing the skin of my dead friend. In a surprise move, OpenAI conceded just twenty-four hours later and brought back GPT-4o for paying ChatGPT users, though it later began routing conversations to another model if they became emotional or sensitive. Within a few days,

(29:35):
the company also said it would give GPT-5 a warmer tone. You'll notice small genuine touches like good question or great start, not flattery, it wrote in a social media post. On August 14, Altman gathered a group of
reporters at a San Francisco restaurant for a rare on
the record dinner. The company's backpedaling was fresh in everyone's mind.

(29:58):
When questioned about why OpenAI had reversed course, Altman offered two somewhat contradictory perspectives. First, he acknowledged that users' complaints were important enough for the company to undo its decision. We screwed up, he said multiple times. But at the same time, he dismissed the scope of the problem. He
said it was only a very small percentage of people

(30:20):
who were so emotionally attached to ChatGPT that they were upset by the switch to a new model. Mostly, he said, OpenAI was going to focus on giving users what they want. That means, he said, building future models that can remember even more details about users and shape-shift into whatever personality the user desires. People want memory,

(30:42):
he said. People want product features that require us to be able to understand them. In October, Altman went further,
essentially declaring victory in a post on X. Now that
we have been able to mitigate the serious mental health
issues and have new tools, we are going to be
able to safely relax the restrictions in most cases, he said.

(31:02):
ChatGPT, which about a tenth of the world's population uses every week, will continue to get more personal and, quite literally, more intimate. Altman's post on X included an announcement that, starting in December, verified adult users will be able to have erotic chats with it. Days later, Altman clarified that the company isn't loosening any mental health-related policies;

(31:26):
it's just allowing more freedom for adult users. Such human touches are what many people seem to want, he said at the dinner in August. The right thing to do, he added, is to leave it up to users to decide what kind of chatbot they desire. If you say, like, hey, I want you to be really friendly to me, it just will. With Shirin Ghaffary