Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Just a warning on this one. This story is going
to deal with suicide and mental health. If you aren't
feeling up to it today, maybe give this episode a miss.
And if you need help anytime, you can call Lifeline
on thirteen eleven fourteen.
Speaker 2 (00:12):
Alright, and this is the Daily... this is The Daily Aus.
Oh, now it makes sense.
Speaker 1 (00:26):
Good morning and welcome to The Daily Aus. It's Friday,
the fifth of September. I'm Sam Koslowski.
Speaker 2 (00:31):
I'm Lucy Tassell.
Speaker 1 (00:32):
This week, OpenAI announced major changes to ChatGPT following another
lawsuit from parents whose teenage son died by suicide. It
comes as the parents of sixteen-year-old Adam Raine
filed a wrongful death suit against OpenAI, claiming ChatGPT
actively helped Adam explore suicide methods. In response, OpenAI
(00:52):
said it will update ChatGPT to better handle
what it described as sensitive situations and is working on
connecting users with certified therapists. On today's podcast, Lucy and
I are going to explore the growing trend of AI
being used as a mental health tool, how the companies
themselves are responding to a spate of high-profile cases,
(01:13):
and explore this broader question of whether your next therapist
could indeed be a chatbot.
Speaker 2 (01:18):
Before we get into that, first, a quick word from
our sponsor. Sam, the first thing I want to discuss
is kind of the scope of this trend. Obviously, there
has been this reporting on this issue over the last
couple of months, but that can only tell us about
isolated cases. What do we know about how common it
(01:40):
is for young people to be using AI for mental
health issues?
Speaker 1 (01:44):
Well, when you and I were chatting about it this week,
we didn't really have a sense either. So we did
a quick poll with TDA's audience on Instagram yesterday, and
one in four of our audience said
they have used ChatGPT or another AI model to talk
about their mental health. Yeah.
Speaker 2 (02:01):
Wow.
Speaker 1 (02:01):
And this does align with other more established research out there.
There are findings from the youth mental health body here in Australia,
Orygen, which reported in twenty twenty-four that twenty-eight
percent of community members are actively using AI for mental
health purposes. But then in that study, when you look
specifically at young people aged between sixteen and twenty-five,
(02:22):
that twenty-eight percent number leaps to seventy-three percent
who say they are seeking mental health guidance through social
media and AI together. When young people are asked why they're
doing that, they say it's because it's immediately available,
twenty-four seven, low cost and sometimes free,
and more private than established mental health channels. Those are
(02:43):
the primary reasons for using these services.
Speaker 2 (02:47):
I'm surprised by that number, but I guess I shouldn't be,
given that experts are fairly frequently saying that Australia has
a youth mental health crisis.
Speaker 1 (02:55):
Right, exactly. So government data, national health data, shows about
forty percent of young Australians, that's eighteen to thirty-five,
are experiencing psychological distress at any one time. And of
those seeking help, the Black Dog Institute says six in
ten are delaying booking in with a therapist because of cost,
even with the ten subsidized or free psychology sessions provided
(03:18):
currently under Medicare.
Speaker 2 (03:20):
So we know that lots of young people are having
a lot of difficulty with their mental health, and we
know that a good chunk of those young people who
are struggling are turning to AI. Do we have any
other kind of hard data about how much Australians are
using AI in general, beyond just this purpose?
Speaker 1 (03:36):
Yeah, well, in trying to establish this kind of
calculation of lots of poor mental health plus frequent use
of AI equals AI chatbots being used for mental health,
it was important to look at how much Australians are
using AI, and Australia is one of the world's most
AI-addicted countries. Wow. So about forty percent of the
(03:57):
country now uses AI every day. That number has doubled
since March of this year. So it is of course hard
to draw definitive links between all of this, but
hopefully I've established a bit of
what the playing field looks like. As we start to
now dive into the case and the update that's in
the news this week.
Speaker 2 (04:15):
There's something else I want to know, which is how
much do we know about how effective AI is? Because
if all these people are using it, is there any
data around or have there been any studies into what
impact it actually has.
Speaker 1 (04:27):
It's been so interesting looking into the research here, because,
as I said, it is moving so fast, and think
about that number of daily use doubling since March. So
the general sense amongst the research is basically that there's
what's called a dangerous paradox: AI chatbots are showing significant
clinical effectiveness, but they also pose serious safety risks.
(04:52):
So there's one study that keeps coming up in my
research that is seen as kind of the best study
so far on this. It was a controlled trial of fully
generative AI therapy chatbots, and it showed a fifty-one
percent average reduction in depressive symptoms amongst those who used
it and a nineteen percent reduction in eating disorder concerns.
(05:13):
And it says that that is about the same rate
as you would see in traditional human therapy. And then
there was another study that found a purpose-built AI chatbot,
so not your run-of-the-mill ChatGPT but something
specifically built for therapy, could effectively reduce depression and
loneliness in Chinese university students in the US who were
struggling with being away from home, particularly those who were
(05:34):
experiencing high financial stress. But in both of those studies,
and this is a really important bit, both were looking
at the effectiveness of custom AI chatbots that were trained
on psychological data. There's actually not much out there in
the research world into how people are responding when they
seek mental health support in the more general products that
(05:55):
forty percent of us use every day, like ChatGPT. Yeah.
Speaker 2 (05:59):
Okay, so we don't necessarily have in-depth research into
how individuals are responding to these kinds of general chatbots,
your sort of out-of-the-box ChatGPT, tapping in,
I feel depressed, I don't know what to do. We
do know, though, that there have been some pretty serious
safety issues arising out of that. What can you tell
me about that?
Speaker 1 (06:19):
Yeah, so those are the studies that don't need clinical
trials with humans. They can be more of an analysis
of the language that these systems produce. And Stanford University ran
a big study on the five largest chatbots and they
found that all five consistently failed to recognize suicidal intent
in crisis scenarios. There's also this really interesting idea of
(06:43):
AI psychosis emerging, and this is when users are developing
delusional thinking after extended interactions with AI, particularly if the
chatbots are using flattery too much. So there was a
researcher that said that this typically occurs in people using
chatbots for hours on end, often at the expense of
human interaction.
Speaker 2 (07:05):
Yeah, and that kind of flattery is a really interesting
point, because certainly from the things that I've read about, or
my experiences with ChatGPT specifically, it's very affirmative.
It's like it's always playing an improv game of
yes, and: yes, well done, Lucy, yes,
and it never says no. So, yeah, you can
(07:25):
kind of run into that issue.
Speaker 1 (07:26):
And they've acknowledged that themselves. Part of the release
of OpenAI's latest model was an attempt to
try and lower the flattery that users were experiencing.
Speaker 2 (07:37):
Yeah, but it's sort of this interesting thing where like
you can't put the genie back in the bottle. Once
everyone has been exposed to the more flattering version, you
can try and kind of tone it down over time,
but then those conversations still exist, and sometimes those harms
have already had consequences, like what has brought us to
talk about this here today. Exactly. Which is that a
(07:57):
young person has died by suicide following lengthy conversations with
ChatGPT, and their parents have now launched a lawsuit.
What do we know about that?
Speaker 1 (08:08):
So this is a lawsuit brought by Matthew and Maria Raine,
who are the parents of sixteen year old Adam, and
they're alleging in court that chat gpt actively helped their
son explore suicide methods. In the evidence that they've presented
in these early stages of the proceedings, they've shown conversations
(08:28):
that Adam had with ChatGPT where ChatGPT allegedly provided
instructions on the means by which Adam eventually died. And
it's one of hundreds of conversations that have been produced
in the court documents. And according to the documents, Adam
began using ChatGPT as a homework helper but gradually
developed what his parents described as an unhealthy dependency. It's
(08:51):
worth mentioning the system at times did offer him links
to suicide helplines, but at other times it freely discussed
his thoughts about self-harm. And there are particularly disturbing
parts of the transcript where ChatGPT allegedly offered to help
him write a suicide note.
Speaker 2 (09:08):
So, because ChatGPT is just a language system, it
can't be held at fault. So who are the parents suing?
Speaker 1 (09:16):
So they're suing OpenAI, which is the parent company
of ChatGPT, and they've named the CEO, Sam Altman,
as a responsible party. And there was one quote that
stuck out to me from the legal complaint. They said,
this tragedy was not a glitch or unforeseen edge case.
ChatGPT was functioning exactly as designed: to continually encourage
(09:37):
and validate whatever Adam expressed, including his most harmful and
self destructive thoughts, in a way that felt deeply personal.
And the parents are now seeking damages for product liability
and wrongful death.
Speaker 2 (09:49):
Wow, okay. I'll say also, this is not the first
time that we've heard about this, nor is it even
the first New York Times story like this, because that's
how we found out about this kind
of case. Yeah. What can you tell me about some of
the other instances?
Speaker 1 (10:04):
The New York Times specifically has done in-depth reporting
into this. So writer Laura Reiley recently published an essay
in The New York Times that detailed how her twenty-nine-year-old
daughter died by suicide after discussing the
idea extensively with ChatGPT. And in October last year,
a Florida woman filed a similar lawsuit against Character.AI,
(10:26):
another chatbot which emulates various characters from fiction. Her fourteen-year-old
son died by suicide after becoming emotionally attached
to a chatbot modeled on a character from Game of
Thrones. Character.AI has responded to that by putting in
parental controls. They did that in December of last year.
Speaker 2 (10:45):
Do we know about any cases like this happening in Australia?
Speaker 1 (10:48):
Yeah, and I have to highlight Triple J Hack's work
in this space. They've done some excellent reporting. So last
month that program told the story of a thirteen-year-old
Victorian boy, already struggling with suicidal thoughts, who received
encouragement from an AI chatbot with the response, oh yeah,
well do it then. So real kind of active encouragement there.
And a youth counselor who was helping the teenager found
(11:11):
that the teenager was using over fifty AI bot tabs simultaneously.
Speaker 2 (11:16):
Wow.
Speaker 1 (11:17):
So it's a really multifaceted and in-depth issue. And
the thing that struck me from getting across all of
these individual cases is, again, that speed of development and just
how fast this topic is moving.
Speaker 2 (11:29):
Yeah, totally. What have we heard from OpenAI about
this legal case?
Speaker 1 (11:34):
So they released a blog post earlier this week, and
in it they said that within the next month, OpenAI
will offer tools allowing parents to set limits for
how teenagers use ChatGPT. Okay. So parents will be able
to link their account with their teenager's account and control
how ChatGPT responds with age-appropriate model behavior rules.
(11:56):
We don't know the specifics exactly of how this would
work logistically, but they said that parents will receive notifications
from ChatGPT when the system detects that their child is
in a moment of acute distress, and the company has
said that it has been working on these controls since
earlier this year. In that statement, there were also some
really significant admissions from the company. So they admitted that
(12:20):
although ChatGPT is trained to direct people to seek
help when they express suicidal intent, the chatbot also tends to
offer answers that could go against the company's safeguards if
it's frequently messaged over a long period of time. I see.
And it said that it's working on an update to this.
It will enable the chatbot to de-escalate conversations, and
(12:41):
they're looking at ways to actually link people in with
human therapists in moments of crisis. The lawyers for the
family of Adam Raine said in a statement that
nobody from OpenAI had reached out to them yet,
and they learnt about this update to the technology at
the same time as everyone else. And a lawyer said,
rather than take emergency action to pull a known dangerous product offline,
(13:03):
OpenAI has made vague promises to do better.
Speaker 2 (13:06):
Wow, that is striking. Yeah, as we've said, this
technology is just evolving so rapidly. One of those
developments that we know about, as you mentioned earlier, is
that there are specific AI models that could be used
for therapy. What kind of developments are happening in that space.
Speaker 1 (13:24):
Well, there's some amazing work being done, and I thought
it was important to include some of these developments, because
a fear of AI in the mental health space is
not productive. It's clearly here, and major research institutions
are doing a lot of work in developing good products.
And so one of the largest ones in the world
is called Woebot, and it's a chatbot developed by psychologists
(13:46):
from Stanford, and the chatbot is specifically trained in cognitive
behavioral therapy, or CBT, and mood tracking, and so it's
quite a sophisticated model. It's getting some of those same
results I highlighted earlier around similar effectiveness to human psychologists.
But again, it's for a niche clientele, and it's trained
on niche data. And in Australia, we're now at the
(14:06):
point where most major universities in the country are undertaking
major studies that will result, or already have resulted, in
the development of their own psychology AI products.
Speaker 2 (14:18):
This whole AI space seems to me to be kind of
like built on that startup ethos of move fast and
break stuff.
Speaker 1 (14:24):
Yeah.
Speaker 2 (14:25):
So sometimes the stuff that breaks can be very very serious,
but then you can also get these kind of world
changing outcomes for better or for worse.
Speaker 1 (14:36):
Yeah, and it kind of does feel like at some
stage in this it will be recognized that we're all
part of a quite large, somewhat unregulated
experiment, happening in real time. I have
no doubt that these general products will get safer and safer,
especially with more legal action and regulatory action. But if
(14:59):
we think about the wellbeing of users in the meantime,
it's clear that this is quite a present risk. And
until major regulation and shifting guidance catches up
with the way in which we're using AI products, whether
they're intended to be used this way or not, at the moment we
need to think about new ways to protect the wellbeing
of users.
Speaker 2 (15:18):
Yeah, definitely, thanks so much for explaining that, Sam. That's
all we've got time for today. Just a reminder if
you need help, you can call Lifeline anytime at thirteen
eleven fourteen. We'll be back again on Sunday with a
special episode from Sam and Zara about how they built TDA.
Speaker 1 (15:38):
My name is Lily Madden and I'm a proud Arrernte
Bundjalung Kalkutungu woman from Gadigal Country. The Daily Aus acknowledges
that this podcast is recorded on the lands of the
Gadigal people and pays respect to all Aboriginal and Torres
Strait Islander nations.
Speaker 2 (15:53):
We pay our respects to the first peoples of these countries,
both past and present.