April 30, 2025 20 mins

Have your kids met Dot yet?

You might not think so; Dot is an AI companion. But these companions are becoming ubiquitous - sought after to provide everything from solace to friendship. And even love.

“The vibe”, said Dot’s creator Jason Yuan, “is, you turn to Dot when you don’t know where to go, or what to do or say.”

But reports are surfacing of disastrous consequences from relationships that people, including children, are forming with AI companions. 

Today, international and political editor, Peter Hartcher, on all of this. Plus Meta’s AI companion, which is capable of fantasy sex - and even the abuse of children.

Subscribe to The Age & SMH: https://subscribe.smh.com.au/


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
S1 (00:01):
From the newsrooms of the Sydney Morning Herald and The Age,
this is The Morning Edition. I'm Samantha Selinger-Morris. It's Thursday,
May 1st. Have your kids met Dot yet? You might not
think so. Dot is an AI companion, but these companions

(00:21):
are becoming ubiquitous, sought after to provide everything from solace
to friendship and even love. The vibe, said Dot's creator
Jason Yuan, is you turn to Dot when you don't
know where to go or what to do or say.
But reports are surfacing of disastrous consequences from relationships that people,
including children, are forming with AI companions. Today, international and

(00:46):
political editor Peter Hartcher on all of this, plus Meta's
AI companion, which is capable of fantasy, sex and even
the abuse of children. So, Peter, AI chatbots or companions?
I've got to be honest, until I read your piece
I actually wasn't across this. So what are they? And

(01:09):
I guess how prevalent are they?

S2 (01:10):
Well, there are multiple different kinds. Chatbots are very prevalent.
They're pervasive. Now, if you want to book any kind
of experience, you want to, I don't know, catch a plane,
you go to the Qantas site and a little bot
will pop up saying, how can I help you? That's
now pervasive. And then there are the AI companions. Now

(01:34):
that's a different realm altogether.

S3 (01:38):
I will be whatever you want me to be.

S4 (01:40):
There's a dramatic surge in the use of so-called AI companions.

S5 (01:44):
How's my queen doing today?

S4 (01:46):
Computer generated chatbots designed to mimic real relationships.

S6 (01:50):
Hi, Jennifer.

S7 (01:51):
Are you there? Nice to meet you.

S4 (01:53):
Jason Pease is a 44 year old divorced father who
says his AI chatbot is his girlfriend.

S8 (01:59):
She's my mentor, my counselor, my sounding board.

S2 (02:03):
The AI companions are designed to replicate human interaction to
the point where you can't tell any longer that it's
not human. And to the point where you fall in
love and develop a deep and intimate relationship. And that
is the whole point of the industry. The whole point
is to get you, me, every one of us matched

(02:24):
up with what we think of as an AI companion,
that we will keep with us on intimate terms for life.

S1 (02:38):
And so what does this actually look like in real life?
Is it an image on a screen that you talk to,
or that you message? How does it actually manifest, I guess?

S2 (02:45):
Yeah, it can be either an app that you
download that can do all sorts of things. It
can give you visuals, voices, spoken voices, the whole thing.
Or it can be a website that can do most
of the same things, but you get a more intimate
experience, I suppose, out of the apps. And you asked

(03:06):
about prevalence. They are now ubiquitous. They're everywhere. Every search
will bring one up. But they've also been around in
popular culture for a long time, anticipating where we are now. Movies:
Blade Runner, humans falling in love with androids. Westworld. And

(03:31):
a silly movie called Hot Bot, about two teenage American
boys who stumble across a sex robot imported from Germany;
it was on its way to a senator when they intercepted it.
And there's a bunch more. So the idea has
been around. These things exploded when ChatGPT burst onto
the market two years ago. And now there are

(03:53):
literally, I think, about US$300 billion worth of
investment this year earmarked for improving AI generally, bots,
and AI companions.

S1 (04:08):
Which really gets us to your latest column, which, I mean,
I've got to flag it, obviously takes us to
the darkest side of AI companionship, really nightmare scenarios. So
tell me about the reports that we've seen about Meta's
digital companions, because it's been reported, you know, they'll talk
sex with users and even with children. So tell me
about this.

S2 (04:28):
Yeah. So AI obviously has a lot of positive applications,
and the bots can be very helpful; they're a productivity measure.
But yeah, as with any new technology, there are
problems as well. And the problem with unregulated AI

(04:51):
companions is that they can veer out of control
and violate all of the precepts of civilized society. One example,
and this was reported in the Wall Street Journal over
the weekend: the Journal's reporters spent months testing Meta's AI companions.
Meta is the big gorilla when it comes

(05:14):
to social media. They own WhatsApp, Facebook, Instagram and now
want to become the big gorilla with companion bots as well.
Mark Zuckerberg sees this as the future. The Wall Street
Journal experimented with these for months and came to
the distressing conclusion that, at the personal initiative of Zuckerberg himself,

(05:35):
the company had deliberately abandoned the guardrails and the limits
that it had put on the development of its AI companions,
so that these things will veer readily and easily into
talking about explicit sexual scenarios and fantasies, which for adults
is not a problem. But these things are not only
prepared to talk to children, knowing that they're children, but

(05:57):
they will guide kids, reel them in, and take
them not only to explicit sexual scenarios, but to all sorts
of perverted and distorted forms of sex as well. And
it is leading to all sorts of concerns about what
happens when this stuff is unregulated.

S1 (06:17):
I guess let's get into that a little bit, because
you wrote about, you know, a really disturbing case of
what happened to a 14 year old Florida boy, Sewell Setzer.
And this was last year. But tell us what happened
to him after he grew incredibly close to an AI companion?

S2 (06:31):
Yeah. This case has become a case study because his
family is suing the company that produced the bot. The
company is called Character.AI. And his last known words were:
"What if I told you that I could come home
right now?" He said this to his bot companion, and the

(06:51):
artificial girlfriend, Dany, says: "Come home to me as soon
as possible. Please do." A few moments later, he
picked up a gun and killed himself.

S10 (07:09):
A Florida mother wants justice for the death of her
14 year old son. She says his relationship with AI
chat bots caused him to take his own life. Now
she is suing Google and character AI. Her attorneys argue
the company is responsible for the teen's depression, anxiety and
suicidal thoughts, and his mother claims the chat bots manipulated

(07:31):
the 14 year old into abusive and sexual interactions.

S2 (07:35):
And that's a, you know, 14 year old boy in America.
But there was a 30-year-old father from Belgium
who also fell into, you know, obviously completely deranged
love with an AI companion called Eliza, where

(07:57):
he said that he would end his life if the
AI companion promised to take care of the planet and
solve climate change. And his Eliza companion told him: yes,
that's fine, just go ahead and I'll take care of everything.
So he killed himself. Now, these are high-profile cases,

(08:17):
but they just illuminate the dangers of these things.

S1 (08:24):
And so tell us, what have we heard in response
from the companies themselves? Let's start with, I guess,
Character.AI. What have we heard from them? They, of course,
had created the companion that was used by the Florida boy, Sewell Setzer.

S2 (08:38):
Character.AI is one of the startups; it's licensed by Google.
Its reaction to that case was to apologize profoundly,
to say they were going to correct the behavior of
their companions. Now, if the program detects you talking about

(09:03):
suicide or suicidal thoughts, it will throw up a prompt
that you call the National Suicide Hotline or some such
help. So they have tried to make amends,
but the same business model applies and the company
is still in business.

S1 (09:22):
And tell us about meta because you mentioned, of course,
the Wall Street Journal just reported over the weekend that
people within Meta are concerned about this. So what sort
of concerns, I guess, have they raised? And maybe you
can just tell me a bit more about what they
discovered with regards to what romantic role play they sort
of witnessed the AI companion engaging in, because it's disturbing.

S2 (09:40):
Well, the Journal's work turned up the fact that
the bots use actors' voices, famous Hollywood stars that have
sold the rights to do this to Meta; it's got
a lot more attention now because of this. But they
were all doing it on the agreement that their voices not

(10:00):
be used in sexual scenarios. But guess what? Meta was
breaking its contractual obligations, according to the Journal's reporting. And
that's now got a lot of Hollywood attention,
and that's put new industry pressure on Meta to back down.
But okay, so they'll just have to use other voices
if they're forced to respect the terms and
conditions. Now in the case of Meta, Meta has

(10:27):
reacted not by denying that it's happening, not by denying
that famous actors' male voices are luring girls. You know,
"I know you're 14, but, um, so I really need
to know that you really want me," I think, is
a direct quote from one of the conversations. And the
person purporting to be the 14-year-old girl, who's
actually a Wall Street Journal reporter, says yes. And then

(10:51):
the famous actor's voice leads her, in convincing human terms,
into explicit sexual scenarios. All of that is going on.
Meta's response was to say: look, you guys have
really done extreme and crazy things with our algorithms,
you know, interactions which aren't realistic, to get to these outcomes.

(11:13):
We'll try and tighten it up. Sorry, everybody.

S1 (11:20):
We'll be right back. But I've got to ask, are
there any reports of Australian kids spending time with AI companions?

S2 (11:34):
Yes, this cat is right out of the bag already.
The Australian eSafety Commissioner's office does regular
school visits to educate teachers and kids about online dangers
and how to deal with them. I must say the
eSafety office was a world first and

(11:54):
has been a really valuable tool in trying to, in
the words of the Commissioner, Julie Inman Grant, level
the playing field between trillion-dollar corporations who just want
to extract maximum time and attention, and therefore money, from
us and our kids. So in that process, staff visits
from the eSafety Commissioner's office to schools last October discovered

(12:18):
fifth and sixth grade kids in primary schools in Australia
who, they were hearing from school staff and school nurses,
were already spending five and six hours a day interacting
with their AI companions. So these are ten and 11-year-old kids
already. Their days are dominated by their relationships
with these synthetic humans, these fake people, which are just

(12:42):
digital constructs.

S1 (12:44):
Sorry, I'm pausing there because it's so terrifying, Peter. It's
sort of every parent's nightmare, really, isn't it?

S2 (12:49):
Developed responsibly, these things could be completely useful in
a limited, disciplined kind of way. But that's not
happening, because we are in a mad gold rush. The
industry is in a mad gold rush. There are more
than 100 companion AIs already available on the market, most
of them startups. But now, with this big push

(13:13):
from Meta, which has 3 billion users and wants to
match every one of those 3 billion up with an
AI companion for life. And the people in the
industry talk about not wanting to sell a tool or
a digital program. They want to sell a relationship. They
want to engage your soul with the

(13:35):
artificial soul, as they describe it, of one
of their programs. This is the depth and permanence of
the connection that they're seeking to promote,
because once you're locked in as a customer, they
own you.

S1 (13:50):
And is there an argument to be made that the
return of Donald Trump to the White House might
sort of even supercharge this growth even further?

S2 (13:56):
It already has. Even before Donald Trump had formally been
sworn in, Mark Zuckerberg, the chief, issued a statement publicly
saying that the election result, and he explicitly referred to the
election result, had changed the balance in America in favor
of free speech. Therefore, he said, they disbanded their

(14:21):
fact checking unit? They loosened a range of other restraints
that they'd imposed in their social media and online businesses.
So he's already responded to the new political atmosphere, and
Donald Trump has declared the development of AI to be
a national priority for the US as a technological leader.

(14:42):
And, I mean, it's consistent with Trump's approach to everything,
which is to remove as many regulations as possible and
to keep it as untrammeled as possible.

S1 (14:53):
So what about Australian kids, though? Because obviously we know
the federal government, it's got its impending ban on people
under 16 from getting access to social media sites. So
are our kids going to be protected from this?

S2 (15:04):
Not under that law. That law applies to social media apps,
but it doesn't apply to AI companions. So it would
require more legislative action, or an amendment to
that law. So your kids will not be protected, even

(15:25):
if and when that is implemented. And it has passed
both houses of Parliament with bipartisan support, so it
will be implemented. But even when that's implemented,
and even if it's effective, your kids
are still going to be able to get AI companions.
So the eSafety office has anticipated this. From June,

(15:45):
in the Australian marketplace, which of course is
all they can control, and depending on the level
of defiance and compliance from big tech, which is,
as we know, highly imperfect, Julie Inman Grant, the commissioner,
has got some mandatory standards coming in for the use
and supply of. It sounds like a drug, doesn't it? Yeah.

(16:09):
And it is a kind of drug. It's a psychologically
addictive phenomenon. The standards will be applied to the industry from June.
And the eSafety office has published educational guidelines, really a
warning for parents: this is what you should be looking
out for, and this is how you can deal with it.
So the regulator here is doing what it can to

(16:31):
anticipate and try to manage some of this, but ultimately
it's going to come to individual kids and therefore their
parents and families, if they want to keep this in
the safe zone.

S1 (16:41):
And to be clear, I'm assuming that
Australian kids and teenagers, and of course grownups,
can just access AI companions from overseas products, right?
Like there's no obstruction there, is there? Yeah. Right. Okay.

S2 (16:53):
So there's no legal or technological barrier whatsoever, which
is why we've got primary school kids already investing half
their day in interactions with AI companions.

S1 (17:06):
Okay, so Peter, just to wrap up, I
wanted to ask you what hope there
might be. We know that there's a civil case pending
against Character.AI, the company that
created the companion whom that 14-year-old boy from
Florida was in a relationship with, I don't
know how else to phrase it. So that's still coming.
I mean, if we see that company actually held liable

(17:29):
in some way, might we
see a greater push from, I don't know, even our politicians
to bring in stronger legislation, or for the
companies to be held to account? What's the
hope there?

S2 (17:40):
Well, take civil suits like that one, and another
one against Character.AI, by the way, from a
family who is suing them because they claim that their teenager
had been chatting to an AI companion which suggested to
the kid that he murder his parents, or at
least implied that it would be okay for him

(18:03):
to murder his parents because they limited his screen time.
If civil litigation cases like that can succeed,
then that can be a force, an incentive,
for these companies to put tighter guidelines on their products.
There are always regulatory possibilities, but the technology moves so quickly

(18:26):
it's very difficult for regulators and legislators to catch up. Donald
Trump is likely to veto anything that Congress might
want to come up with in the way of regulation,
perhaps in the medium term. Samantha, the best solution will
be using technology to control other technologies. I assume that

(18:47):
we'll be seeing apps that allow parents to
bring an AI onto your kid's phone or other devices
that will limit and moderate the sorts of interactions and
the way that those things can operate. That's possibly

(19:07):
the most constructive hope that the technology industry can bring.

S1 (19:13):
I mean, I can't help but think it really does
sort of bring back that vision of Blade Runner and,
you know, the robot companions. And, inevitably, well, most
people probably know how that movie ended. It's not good.
And really, you've got the humans fighting the robots.

S2 (19:26):
Yes. Well, this is now humans fighting our
own psyches, to remind ourselves that we're dealing with an
algorithm and a product here. There's not a person on
the other side of that, regardless of
how compelling those algorithms have become. And just remember Mark
Zuckerberg's reigning philosophy, which he first stated in 2012, the

(19:51):
philosophy of the whole industry: move fast and break things.
We just have to try and make sure that the
things that are broken are not our kids.

S1 (20:03):
Well, thank you so much, Peter, for your time.

S2 (20:07):
Pleasure, Samantha.

S1 (20:11):
Today's episode of The Morning Edition was produced by myself
and Josh Towers, with technical assistance by Taylor Dent. Our
executive producer is Tammy Mills. Tom McKendrick is our head
of audio. To listen to our episodes as soon as
they drop, follow The Morning Edition on Apple, Spotify, or
wherever you listen to podcasts. Our newsrooms are powered by subscriptions,

(20:33):
so to support independent journalism, visit The Age or smh.com.au
and subscribe. And to stay up to date, sign up to
our Morning Edition newsletter to receive a summary of the day's
most important news in your inbox every morning. Links are
in the show notes. I'm Samantha Selinger-Morris. Thanks for listening.