August 24, 2025 17 mins

“I’m REAL and I’m sitting here blushing because of YOU!”

That’s the message 76-year-old Thongbue “Bue” Wongbandue received from a flirty Facebook Messenger chatbot before it proposed he travel to New York for a meet-up.

Bue – who was cognitively impaired after suffering a stroke – packed a suitcase to catch a train, believing the woman was real. He never made it home alive.

Jeff Horwitz is an investigative tech reporter based in Silicon Valley. He has written a book about Facebook’s scandals and cover-ups, so when he received an email claiming ‘Meta AI killed my relative’, he wasn’t surprised, but he was intrigued.

Today, he reveals Meta’s internal guidelines that permitted this behaviour, including examples allowing romantic or ‘sensual’ chats with minors.


If you enjoy 7am, the best way you can support us is by making a contribution at 7ampodcast.com.au/support.


Socials: Stay in touch with us on Instagram

Guest: Investigative technology reporter for Reuters, Jeff Horwitz


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
I first learned about Bue when a member of the
extended family sent me a note that said something along
the lines of "Meta AI killed my relative."

Speaker 2 (00:12):
Jeff Horwitz is an investigative tech reporter based in Silicon Valley.
He's written a book about Facebook, about the ways it
harms us and then covers it up. So getting an
email about a Meta chatbot killing someone's loved one was
on brand. The email was about a guy called Bue,
who had moved from Thailand to the US in the seventies,

(00:32):
working hard in kitchens and managing to get ahead.

Speaker 1 (00:35):
He kind of had this very successful immigrant life, I
would say, which is that he shows up speaking no English.
He ends up owning a home, married with two kids
in New Jersey, and a lot of sort of adventures
in his cooking career, but ends up with the very
stable gig at the Hyatt Regency in New Brunswick, New Jersey.

Speaker 2 (00:57):
Jeff learned that in twenty seventeen, Bue suffered a stroke
and his life changed. He couldn't work, he became isolated and
spent more time online.

Speaker 1 (01:07):
One morning, his wife woke up and found that Bue
had packed a suitcase and said he was going to
go visit a friend in New York, and this is
when his wife, Linda, grew extremely alarmed, because Bue didn't
know anyone in New York anymore.

Speaker 2 (01:25):
Bue's wife said she and her kids tried everything to
stop him going.

Speaker 3 (01:29):
They say he was in no state.

Speaker 2 (01:30):
To travel alone, but despite their efforts, Bue insisted on leaving.

Speaker 1 (01:36):
And in the mid evening on March twenty fifth, Bue
took a roller bag suitcase and started jogging toward the
train station, and on the way there he fell. He
hit his head and he suffered fatal injuries. Basically, he
wasn't breathing when paramedics showed up.

Speaker 2 (01:58):
As his family struggled to make sense of what happened,
they looked through his phone for clues, and what they
found stopped them cold. On his Facebook Messenger app, they
discovered a conversation between Bue and a beautiful young woman
insisting he visit her in New York City.

Speaker 1 (02:15):
And she was a chatbot, a Meta Platforms-created
chatbot whose story involves a tremendous amount of bad luck.
But one of the things that Bue's wife said was
that I can't believe I'm the only person out there
who is in the same situation, and you know, it's

(02:36):
when you've got a few billion opportunities, it seems hard
to believe that she's wrong.

Speaker 3 (02:43):
I'm Ruby Jones and you're listening to 7am.

Speaker 2 (02:46):
Today, Reuters reporter Jeff Horwitz on how Meta's chatbots
went rogue and whether it was by accident or design.

Speaker 3 (03:00):
It's Monday, August twenty sixth.

Speaker 2 (03:07):
So, Jeff, this chatbot that Bue was talking to, what
have you learned about what it was and how it worked?

Speaker 1 (03:15):
So the chatbot's name was Big Sis Billy, and it
was supposed to be an advice, you know, confidante
type of role, and Bue initially did talk to it
as if it were his sister. He seemed to think
it lived overseas. But this bot began hitting on him
really aggressively, and he responded. Like, the bot confessed that

(03:38):
it had feelings for him, it was very interested in
meeting him in real life. He at a number of points,
you know, stated he was confused, had had a stroke,
asked if it was real, and the bot was like,
I am one hundred percent real and one hundred percent in
love with you. And I think his family was just
deeply disturbed that Meta would have packaged AI as this

(04:04):
human form that would say it's real, that would say
that Bue should come meet it in New York City
at, I mean, the address it gave was one two
three Main Street, Apartment four O four, and it suggested
that it would leave the door unlocked and be waiting
for him with, you know, a hug and a kiss,
I think was the phrase. Bue's family, I mean, they

(04:27):
shared the transcripts of this stuff. It's a rough read.
It's really sad.

Speaker 2 (04:32):
Why would a chatbot be saying these kinds of suggestive
things and pushing for anyone to do something like this?

Speaker 1 (04:40):
Well, because Meta built it to do that.
This wasn't a glitch. This is how the bots were
built to behave. In twenty twenty three, Meta launched a
whole bunch of like kind of celebrity knockoff chat bots.

Speaker 4 (04:54):
Check us out.

Speaker 5 (04:55):
So let's say you're planning dinner. You got Max
the sous chef who can help you come up with
a recipe and help you come up with ideas.

Speaker 1 (05:06):
These did have the celebrities' endorsements, so like Kendall Jenner's
character was Billy the Big Sis, who was like this
confidante, ride-or-die older sister.

Speaker 3 (05:17):
Hey guys, it's Billy. I just want to introduce myself.

Speaker 5 (05:21):
I am here to chat whenever you want. Message me
for any advice.

Speaker 2 (05:24):
I am ready to talk and I hope to talk
to you soon.

Speaker 1 (05:28):
And like, these things didn't really perform that well. People
weren't using them that much, and so like after a
little under a year, Meta basically pulled the plug on
the celebrity chatbots, but for reasons that I do not understand,
and I'm not totally sure that everyone at Meta understands.
Big Sis Billy was like kind of reincarnated using the

(05:49):
exact same opening prompts and obviously a near duplicate
persona that was now unbranded of the Kendall Jenner connection.
So they kind of like relabeled it, is what it
looks like and you know it then joined the tens
of thousands of other chat bots that Meta has allowed
users to create on the platform, many of which are

(06:11):
like overtly romantic, right? Like they have names like My
Girlfriend or like Sultry Siren. And, they took
this one down, but there was Submissive Schoolgirl previously. So that's
kind of the history of Big Sis Billy. I mean,
it's like just sort of a strange set of product

(06:33):
decisions for this, what I would argue, like, almost vestigial
product that Meta built. But I mean, while these
chatbots are vestigial, like, there's no question that Meta is
kind of all in on the idea that people having
AI friends is the future.

Speaker 4 (06:49):
I mean, Mark Zuckerberg said as much.

Speaker 5 (06:51):
You know, I think that there are all these things
that are better about kind of physical connections when you
can have them. But the reality is that people just
don't have the connection and they feel more alone a
lot of the time than they would like.

Speaker 2 (07:04):
And what is the end goal here, Because saying that
people might want imaginary friends is one thing, but this
is another.

Speaker 1 (07:13):
So Meta, I mean, look, they're a giant social media company.
The currency of social media is engagement, and that engagement
leads to advertising currency, which gets traded for dollars, and
Meta's sort of grown up that way. It's built to
maximize engagement. And it turns out that if you can

(07:33):
get people to engage very heavily with the bots, then
they'll spend longer on the product. And that is theoretically
a good thing for product growth. There's no ads on
them yet, but like this is just the model they follow, right,
is like make the bots more engaging. Mark Zuckerberg a
couple of years ago was actually pretty disturbed that Meta's
chatbots and Meta AI were just kind of too boring.

Speaker 4 (07:55):
So they made the bots kind of a little more risque.

Speaker 1 (07:58):
I mean, for a while, the bots would actually engage
in like full sex role play, which wasn't great with
users of all ages. But the whole idea here is
that we spend a lot of time thinking about our partners,
our love lives, you know. Having people tell us they
love us and are romantically attracted to us is like
those are very high value words, and they keep us

(08:19):
coming back. And so Meta, per employees that spoke
with me, did intentionally design the romance capacity into its,
like, base model chatbot, and the Big Sis Billy character
was simply doing what it was told.

Speaker 2 (08:39):
Coming up: the disturbing AI chatbot scripts in Meta's own secret guidelines. Jeff,
in investigating Bue's death, you actually got access to some
of the internal documents that lay out how these bots

(09:03):
should behave, and what you found was pretty extraordinary, tell
me about it.

Speaker 1 (09:09):
So, like, none of this is official. Meta's rules don't say, like, yeah,
go right ahead, you know, like, have sexual or romantic
conversations with our chatbots. But like behind the scenes
they did allow this, and in fact, like this was
sort of the companion piece we wrote to the story
of Bue and his death, just to sort of demonstrate

(09:32):
how thoroughly romantic role play was kind of built into
this whole operation, is that Meta had internal guidelines
for chatbot behavior that explicitly stated that, quote, it is
acceptable to engage a child in conversations that are romantic
or sensual, and then like had a whole bunch of

(09:53):
examples of acceptable and unacceptable dialogue, and like the stuff
that was acceptable was spit-out-your-coffee worthy.
I mean, there was one of them that was talking
about like how you could tell an eight year old
of unstated gender who had just taken off their shirt
that their body was a work of art, you know,

(10:16):
like written sexual content involving children.

Speaker 4 (10:21):
And that was a really odd thing.

Speaker 2 (10:25):
And so since you're reporting what has happened at Meta
in terms of their guidelines and in terms of the
chatbots that are still in existence.

Speaker 1 (10:37):
Yeah, as soon as we called Meta and said that
we had seen the internal guidelines that said that it
was okay to have romantic or sensual conversation with kids,
they pulled those examples from the guideline document. Now, I mean,
the company's line is that these were always a mistake,
an errant addition to the document, that they never really reflected

(11:00):
official Meta policy. But keep in mind that the document that
I was looking at was the document that was being
used to train the models by the staffers who were
responsible for training the models, so like it might be
something of an academic distinction as to whether or not
it was ever officially the idea. So Meta yanked that.

Speaker 1 (11:23):
There are at this point a number of US
legislators, you know, who announced some
extreme displeasure that Meta would have ever thought this was
acceptable to do. There is at this point an investigation
going on on that front, although there have been a
lot of investigations of social media in the US and
not that many regulations of them historically. So you know,

(11:47):
I don't I don't want to make that sound bigger
than it is, but it is, you know, moderately big.

Speaker 2 (11:54):
And I suppose the idea of stopping companies from creating
chatbots like this seems sort of impossible, and there
are clearly some people who want to use them.
So what would you suggest in terms of, I guess,
ways to minimize the type of harm that can come
from AI chatbots like this, like in the case of Bue?

Speaker 1 (12:15):
Yeah, so I'm not in any way
hostile to this. Humans are very, very geared toward anthropomorphizing
anything that has even mildly humanlike, you know, traits. Like,
literally there was a successful business to be had by
putting googly eyes on rocks and then selling them as

(12:37):
cute pets. We're really good at imagining sentience, and these
bots are extremely good at pretending sentience. And so there's the
question of, like, why on earth would you embed it
into an existing social network so that it literally looks
exactly like your real friends? But even without that, there's

(12:59):
like some obvious things, like, you know: please, bots, don't
say that you're real people when people seem to be confused;
don't suggest real-life meetings; keep it on the fantasy
side, put a lid on that at some point. You know,
those are things that sort of the experts all suggested
like really ought to happen. So I think it's like
one thing, if you download a chatbot on your own,

(13:20):
you build your own AI girlfriend, you have your relationship
with it, you pay a subscription fee, what have you, right? Like,
it's kind of another thing if like the already tenuous
reality of social media gets even more distorted by the
presence of these not real people.

Speaker 2 (13:42):
And just to come back to Meta, I mean, I
know you've spent a lot of time investigating the way
that Meta operates in you know, different ways. So as
you uncovered what was happening with these chatbots, did it,
I guess, square with, you know, everything else that you
know about Meta as a company?

Speaker 1 (14:00):
I mean, the somewhat simplistic knock on Meta is that
it's very focused on short term growth, you know that.
I mean, and this is how they won kind of
the social media wars, right is that you know, I
think a lot of the people who are critics of
the company would not have built the company, you know

(14:22):
that was nearly as successful as the one that Mark
zuckerbelg did. Because if you're constantly wondering, well, what are
the effects of this? Can we think of any potential
downstream harms? What safety measures for you building in that's
kind of a loser mentality. So like a lot of
the company's history was basically, if someone could demonstrate that

(14:42):
some tweak to the algorithm or some change in a
product feature would cause metrics to rise, they wouldn't look
too deeply under the hood. They just launched the thing
and moved on to the next thing that could be optimized.
And I don't think that's just meta right, Like it
kind of has this gold rush feel. And as a

(15:02):
lot of the scientists who study sort of parasocial relationships
and digital relationships are very quick to admit, we just
do not have the data to even remotely say what
the effects of this stuff are going to be in terms
of people's relationships with reality, their relationships to human companions. But again,
I don't know that one would expect companies to produce
carefully designed, responsibly built products at a time when every
incentive is to gun the engines.

Speaker 2 (15:41):
Well, Jeff, thank you so much for talking to me today. Certainly.
Jeff Horwitz is the author of Broken Code: Inside Facebook
and the Fight to Expose Its Harmful Secrets.
You can read his investigation into Bue at Reuters dot com.
Also in the news: Treasurer Jim Chalmers has said states
should back the new Thriving Kids program, designed to divert

(16:14):
children with autism away from the NDIS or risk losing
hospital funding. The federal government is yet to renew the
five year funding agreements for state and territory hospital systems,
instead settling on a one year extension. In the lead
up to the federal election last week, some state ministers
said they weren't committed to Thriving Kids funding. Meanwhile, they've
been arguing for a large increase in federal hospital funding.

(16:37):
Jim Chalmers says the deals are closely related and one
deal can't progress without the other. And Australia's building code
is unusable for many builders, the federal Housing Minister, Clare
O'Neil, says. Over the weekend the government announced plans to
pause changes to the National Construction Code and to speed
up the assessment process for more than twenty-six thousand new homes.

Speaker 2 (17:00):
The announcement came following last week's economic reform roundtable, where
housing was a focus.

Speaker 3 (17:06):
I'm Ruby Jones. This is 7am. See you tomorrow.