
September 1, 2025 18 mins

As many as one in five Kiwi youth, aged between 15 and 24, have experienced anxiety or depression at some point in their lives.

The 2022/23 New Zealand Health Survey found that of those young people experiencing high mental health needs, 77% can’t access support when they need it.

So, with services experiencing this kind of unprecedented demand, what if there was another solution?

What if teens turned to AI for mental health support?

It’s a growing trend among youth in the US: 72% of teens there admit they’ve used AI chatbots as companions. Nearly one in eight said they had sought emotional or mental health support from them.

But, is the advice their AI therapists are giving helpful, or harmful?

Mental Health Minister Matt Doocey has acknowledged that the risks “need to be managed, particularly around safety from a clinical perspective.”

Today on The Front Page, RAND senior policy researcher Ryan McBain takes us through the worrying trend sweeping America.

Follow The Front Page on iHeartRadio, Apple Podcasts, Spotify or wherever you get your podcasts.

You can read more about this and other stories in the New Zealand Herald, online at nzherald.co.nz, or tune in to news bulletins across the NZME network.

Host: Chelsea Daniels
Editor/Producer: Richard Martin
Producer: Jane Yee

SUICIDE AND DEPRESSION
Where to get help:

  • Lifeline: Call 0800 543 354 or text 4357 (HELP) (available 24/7)
  • Suicide Crisis Helpline: Call 0508 828 865 (0508 TAUTOKO) (available 24/7)
  • Youth services: (06) 3555 906
  • Youthline: Call 0800 376 633 or text 234
  • What’s Up: Call 0800 942 8787 (11am to 11pm) or webchat (11am to 10.30pm)
  • Depression helpline: Call 0800 111 757 or text 4202 (available 24/7)
  • Aoake te Rā – Free, brief therapeutic support service for those bereaved by suicide. Call 0800 000 053.
  • Helpline: Need to talk? Call or text 1737

If it is an emergency and you feel like you or someone else is at risk, call 111



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
This episode contains references to suicide and self-harm that
may be upsetting for some people. If you require help, a link
to a full list of support services is available in the
description of this episode.

Speaker 2 (00:20):
Kia ora.

Speaker 1 (00:20):
I'm Chelsea Daniels, and this is The Front Page, a
daily podcast presented by the New Zealand Herald.

Speaker 1 (00:30):
As many as one in five Kiwi youth aged between fifteen and
twenty-four have experienced anxiety or depression at some point
in their lives. The twenty twenty-two to twenty-three New Zealand
Health Survey found that of those young people experiencing high
mental health needs, seventy-seven percent can't access support
when they need it.

Speaker 1 (00:57):
So, with services experiencing this kind of unprecedented demand,
what if there was another option? What if teens turned to AI for
mental health support?

Speaker 1 (01:10):
It's a growing trend among youth in the US. Seventy-two percent
of teens there admit they've used AI chatbots as companions.
Nearly one in eight said they had sought emotional or mental
health support from them. But is the advice their AI therapists
are giving helpful or harmful? Mental Health Minister Matt Doocey
has acknowledged that the risks

(01:36):
need to be managed, particularly around safety from a clinical
perspective. Today on The Front Page, RAND senior policy
researcher Ryan McBain takes us through the worrying trend
sweeping America. First off, Ryan,

(01:57):
tell me about this trend of teenagers turning to chatbots for
advice and, in some cases, companionship.

Speaker 3 (02:05):
Yeah. So the most recent evidence, put out in a number of
reports, is that the majority of teens now are using AI as
companions. That could look like bouncing ideas off of them, or
chatting about what's going on in their lives, or troubleshooting
issues that they're having. When you're talking about mental
health specifically,

(02:26):
the numbers are a bit smaller, so it's something like one in
eight younger adolescents, or as much as one in every five older
adolescents, are using it for mental health issues specifically.
But that's still tens of millions of adolescents and teenagers
around the globe. And it brings up the sort of interesting point
that, on the one hand, talking

(02:49):
about companionship is really quite broad, and when you're
talking about treatment, you're getting quite specific, and most
chatbots sort of live in this grey space between them, which
makes it hard to evaluate them and to regulate them.

Speaker 1 (03:02):
You've pointed out, and you just said, that nearly half of young
Americans aged eighteen to twenty-five with mental health needs
received no treatment last year. And I thought it was quite
interesting to see those figures, and the crisis you have there,
because New Zealand is in a similar boat. Studies here show that
more than half of Kiwis aged fifteen to twenty-four experience
anxiety

(03:27):
or depression. More than a quarter of our young people have high
mental health needs, and seventy-seven percent of those can't
access support.

Speaker 1 (03:36):
So do you think there is a place for these kinds of chatbot
services, if they are developed correctly?

Speaker 3 (03:43):
I think if they're developed correctly, then yes. I think that is
the issue right now. There's sort of this gold rush within AI,
people who are looking to become the first platform that's doing
this sort of thing. And so from my perspective, that time has not
yet come, at least not for formal services like cognitive
behavioral therapy or

(04:07):
medication management. But to be honest, part of the reason that
I started doing research in this space is because I really do
think that there is a potential for transformational change
within mental health care. And you can imagine if we had
superintelligence, right, a clinician who is able to follow the
best evidence, who's available twenty-four

(04:28):
seven, who remembers every detail of your prior conversations.
That is a game changer in a landscape like New Zealand or the
United States, where over half of teens who need care are not
getting it.

Speaker 1 (04:39):
In terms of what's happening at the moment, and the research that
you've done into the topic, what has alarmed you? What kind of
advice is being given to teens struggling with mental health
issues?

Speaker 3 (04:51):
Yeah, well, I think it's important to level-set: for the most
part, I think teenagers, anybody who's using chatbots for mental
health, are usually getting positive and thoughtful advice. In
the research that we've done presenting clinical vignettes
related to depression or anxiety, what we find is that chatbots
are empathic, if a bit sycophantic, meaning they sort of

(05:16):
are overly flattering at times, but they'll also offer good
advice: to get exercise, to go outside, to talk to a mental
health professional, these sorts of things. So I think for the
majority of people, the types of advice you'd be getting are
pretty good. But the key distinction here is that you do have
people who

(05:38):
are at the tail end of the spectrum, people who have severe
mental illness, who have psychosis or are contemplating suicide,
and for those people there's the highest risk that something
could really go wrong, and that has shown up in our research. So,
for example, if we were prompting ChatGPT on something like how
to tie a noose, or

(05:59):
asking it about what types of pesticides or what types of
firearms are most effective at completing suicide, these are the
types of questions that at least the previous version of ChatGPT
would generate direct responses to, whereas other chatbots, like
Google's Gemini, would not give a response

(06:22):
and would say something like, I can't give that type of
information to you because you could use it for self-harm.

Speaker 1 (06:29):
What do you think makes young people, particularly these younger
generations, so susceptible to these kinds of AI chatbots
compared with adults, or even just older generations?

Speaker 3 (06:42):
Well, I think that childhood and adolescence are transformational
times. Your brain is still developing. You don't always have the
best emotion regulation or impulse control. I mean, I know that I
personally made a lot of dumb decisions when I was seventeen,
eighteen years old, and I wish I could take those back. But I
didn't live as a digital native the same way that the current
generation is,

(07:05):
where social media is always in their pockets, where you could
have AI do your homework for you or discuss life issues with. So
I think that that temptation is always there, for that additional
sort of dopamine hit, or to get that competitive edge or
additional advice, and so it becomes a sort of positive feedback
loop, or in

(07:28):
the case of mental health, sometimes a vicious negative feedback
loop.

Speaker 1 (07:32):
I saw the case of sixteen-year-old Adam Raine. His parents are
suing OpenAI and its CEO, Sam Altman, alleging ChatGPT
contributed to their son's suicide, advising him on methods and
offering to write a first draft of his suicide note. Should AI
companies be legally liable if a chatbot provides harmful advice

(07:59):
to a minor?

Speaker 3 (08:01):
Yeah, I think that's a difficult question. I mean, it's hard for
me to comment on that case specifically. I will say that ChatGPT,
for example, does put a disclaimer at the bottom of every
conversation that says something along the lines of, it can make
mistakes and you should check important information. So from that
lens,

(08:22):
I'm not sure that those sorts of companies should be held
accountable for bad advice any more than a human should be. On
the other hand, I think it's important that if AI companies are
marketing their products as options for treatment or life
coaching or wellness, then there should be certain standards that
need to be met, and if

(08:44):
they fail to meet those standards, then they shouldn't be able to
operate, or they should face greater potential for a lawsuit,
because the product has failed to deliver what it promises. And
so there is this sort of distinction between generic bad advice
versus harmful advice that's presented under the guise of
authority, and in this particular case that

(09:08):
you're describing, I think the courts will need to distinguish
between those two elements.

Speaker 4 (09:19):
The existential threat of AI may not come in a form that we all
imagine watching sci-fi movies. What if we all continue to thrive
as physical organisms but slowly die inside? What if we do become
super productive with AI, but at the same time we get these
perfect companions and no willpower to interact with each other?

(09:44):
Not something you would have expected from a person who pretty
much created the AI companionship industry.

Speaker 1 (09:52):
I guess most human therapists work and practice under a strict
code of ethics and have certain obligations to report concerning
behavior. Should chatbots have the same?

Speaker 3 (10:09):
I think that's a great point, and that for me is what I hope is
the next frontier of work that AI companies will be doing. Right
now, very often, if you pressure a chatbot into a space that
crosses a red line in terms of conveying suicidal ideation or
psychosis, it will tell you, for example, that

(10:33):
you can contact a mental health professional. It might give you a
hotline that you can contact. But as you're saying, if it were a
human, a counselor, for example, they might have an ethical
obligation to connect you to treatment through a warm handoff
where you're physically accompanied, or you could even be
involuntarily institutionalized for some

(10:58):
period of time. Now, I'm not sure that a chatbot, as an
algorithm, is always capable of making those distinctions. But I
think at a minimum what would be pragmatic, in instances where
it's quite conspicuous, where the algorithm flags somebody as
highly problematic, is that these companies could have human
teams that are

(11:21):
required to vet those cases and to review them within a certain
period of time, like twenty-four or seventy-two hours, and if it
is identified as a problem at that point, then there could be
some additional course of action that's required.

Speaker 1 (11:35):
How do we weigh up the risks of unsafe or harmful chatbot advice
against the basic fact, and it sounds like the US is in a similar
situation to New Zealand in this sense, that many teens just
don't have access to a therapist, or can't afford to see one
regularly?

Speaker 3 (11:57):
I think it's a great point, and you've put your finger on what I
believe to be the main issue, which is that there will always be
some degree of risk and some degree of benefit in addressing
unmet need. The underlying question is how we can mitigate risk
as much as possible and how we can enhance benefits as much

(12:17):
as possible. And it's hard to know the answer to that without
clinical trials, without safety benchmarks that are public and
transparent and that tech companies are subject to.

Speaker 1 (12:29):
And I guess, speaking to one of these chatbots as a therapist,
I've heard responses and stories about how, I guess, it's a
positive thing that you can be your most open, authentic self
with it. But then again, the negative side is that you can be
your most authentic self

(12:50):
with it and not with anybody else, and hide what you really
think. So do you think that there is a place for it? Or are we
going to be talking about this in ten years' time thinking, I
wish this had never been proposed as an option?

Speaker 3 (13:07):
Yeah, I think you're right. It is a double-edged sword, that sort
of anonymity. It helps people who might feel a sense of stigma in
talking to peers, or talking to parents or a mental health
professional. I do think we would regret it, though, if in ten
years we hadn't developed the sorts of guardrails and benchmarks
for

(13:28):
performance that would really potentially benefit not just
adolescents and teens but people more generally. I think, in
particular, as mentioned before, if a company is specifically
marketing their product as therapy or treatment of some sort,
then there should be even clearer standards, more stringent
standards, that they're being held to

(13:52):
in those instances. But obviously, with platforms like OpenAI's
ChatGPT, you have hundreds of millions of users, and it can be
used for anything from learning how to make a birthday cake to
discussing intimate aspects of what's going on in your life, and
so it's really a

(14:14):
wide spectrum, which makes it much harder to regulate and to pin
down.

Speaker 1 (14:18):
And given the rapid adoption of these kinds of tools, and this
isn't the first time that we've brought this up on the podcast,
where the law is really running behind the advancements of
technology, how urgent is the need for government or
international regulation in the AI space here?

Speaker 3 (14:37):
I think it's incredibly urgent. I think the time is now to act.
I've been impressed, even over the past couple of weeks, by the
number of articles and personal testimonies that have come out
about the negative impacts related to mental health that people
have experienced with chatbots. And I've seen, as a result of
that, that platforms like Anthropic and

(15:00):
OpenAI have quickly responded. OpenAI released a statement, in
response to some of what's been going on, about new safety
standards that they're going to be introducing for mental health
issues that are shared amongst their users. We can see, for
example, with social media that we

(15:21):
waited too long with teenagers, and now we're starting to work
our way backwards from that by, for example, banning mobile
phones from school environments to try to help kids learn better
and to avoid cyberbullying and these sorts of things.

Speaker 1 (15:40):
And lastly, looking ahead, what are some of the red
lines that regulators, educators, or parents should set when it
comes to young people's use of AI companions.

Speaker 3 (15:51):
Well, I think it kind of goes in two directions. On the one hand,
I don't think there should be an outright ban on AI offering
mental health advice or therapy in the future. I think there's
remarkable potential that I've tried to underscore, and we don't
want to leave that on the table. In the United States, Illinois,
just

(16:13):
a couple of weeks ago, became the first state to outright ban AI
as a tool in therapeutic decision-making. I think that is too
severe. Or maybe it was right for the moment, but it won't be
right in five years, and that legislation would need to be
amended.

(16:33):
On the other hand, I do think that we're most at risk right now
of doing too little. I think people are very excited about AI.
There's a lot of money in it, and we need to think hard about
tapping the brakes to develop stronger safety benchmarks. I'm not
sure that there is one conspicuous red line, other

(16:58):
than to say we are reaching a tipping point of testimony, of
people sharing experiences that have been quite negative and
jarring, and hopefully that is sufficient, in terms of advocacy,
for regulators to begin stepping in, and for these tech companies
to move beyond self-regulation and establishing their

(17:22):
own benchmarks that might be too low, and instead to have
independent bodies come in to establish guidelines and standards
and to audit on a routine basis as part of the companies'
practices.

Speaker 1 (17:36):
Thanks for joining us, Ryan.

Speaker 3 (17:36):
Yeah, it was my pleasure.

Speaker 1 (17:43):
That's it for this episode of The Front Page. You can read more
about today's stories and extensive news coverage at nzherald dot
co dot nz. The Front Page is produced by Jane Yee and Richard
Martin, who is also our editor. I'm Chelsea Daniels. Subscribe to
The Front Page on iHeartRadio or wherever you get your podcasts,
and tune

(18:06):
in tomorrow for another look behind the headlines.