Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
There Are No Girls on the Internet is a production
of iHeartRadio and Unbossed Creative. I'm Bridget Todd and this
is There Are No Girls on the Internet. Welcome to
There Are No Girls on the Internet, where we explore
the intersection of social media, technology and identity. And this
is another installment of our roundup of news that you
(00:25):
might have missed online this week. Quick heads up, I
have COVID, Mike has COVID. We all have COVID.
Speaker 2 (00:34):
That's right, Bridget. This episode is brought to listeners by we.
Speaker 1 (00:40):
I mean, I know you've had it a few times.
This is my sixth time getting it. And when we
were talking about it, we were like, oh, didn't
you feel like this strain was more body
and less head? We were talking about it like
we were talking about a strain of marijuana. Like, oh, it's
more of a body vibe than a head vibe,
this strain.
Speaker 2 (01:01):
Yeah, this strain, yeah, it's a little more
body than head. But there's, you know, a little
headache fog stuff going on. My experience has
been more mild than previous times. Not to minimize COVID.
COVID is terrible. Don't get it. It's a pretty big bummer.
Speaker 1 (01:21):
Do you want to know what the new
strain that just dropped is called, the one we
probably have?
Speaker 3 (01:27):
It's got a name? Flirt?
Speaker 1 (01:30):
I know it sounds like I'm making that up, but
I swear to you I'm not. It's spelled capital F,
L-i-R-T. FLiRT. It's a flirty strain.
Speaker 3 (01:40):
Who is branding these things? What the hell?
Speaker 1 (01:43):
I don't know, but it's out there. Y'all wear your masks,
do what you gotta do. But if you're wondering why,
maybe we both sound a little bit loopy. It's COVID.
Speaker 2 (01:54):
Yeah, Bridget's right, wear your masks. Don't do stupid
things like go to a packed concert, I don't know,
or maybe do if that's what you want to do.
Just be ready to experience a lot of regret if
you're like somebody that I know.
Speaker 1 (02:11):
I mean, you're being so clandestine. We got COVID together
going to a rave to see Fatboy Slim, who
was amazing, by the way. This is how
I know we are about that life. Fatboy Slim,
there were two openers. He did not go on until
two in the morning and he performed until four thirty
(02:31):
in the morning, and you were like, yeah, I'll go.
I thought for sure you were going to say no, because I was on
the fence. I was like, I don't know, a packed warehouse
rave, up till five?
Speaker 2 (02:44):
I don't know, you know, once a year that
sort of thing feels appropriate.
Speaker 3 (02:51):
I don't know. I really wanted to see him, and
he put on a really good show.
Speaker 1 (02:53):
And just so we're clear, I misread the poster or whatever,
and I thought he started at ten, which I was like, oh,
a little bit late, but you know, I wanted to
see this show. Ten is when the doors opened. Ten
was the earliest that you could even get into the facility.
He didn't go on until two. And you know what,
we stayed up. We danced, We had a good time.
(03:13):
We did it. Don't let anybody say otherwise. We did
get COVID, that happened, but you know.
Speaker 2 (03:21):
We did it. Pluses and minuses.
Speaker 3 (03:23):
The big plus: seeing Fatboy Slim. He really delivered.
Speaker 1 (03:27):
So we have a little bit of a rule here
on the show where we don't talk about Trump unless
it's something really big, really major, something that we have
to talk about. But I think, since we're both sick,
let's have a little Trump as a treat.
Speaker 3 (03:41):
All right, we can have just like a little Trump
as a treat.
Speaker 1 (03:44):
So to introduce this conversation, let's play a little bit
of what I think is going to be the song
of the summer. [song clip plays]
Speaker 4 (04:07):
So.
Speaker 1 (04:07):
Earlier today, Trump was convicted on all thirty four counts
of falsifying business records in the hush money
case involving Stephanie Clifford, who performs under the stage name
Stormy Daniels. What are your thoughts?
Speaker 3 (04:22):
Uh, they're complicated.
Speaker 2 (04:24):
It's nice to see some justice, and the schadenfreude.
It's certainly nice to see, like, the rule of law applied,
since, like, he clearly did it, and, like, didn't even
bother to say that he didn't do it. But it's, uh,
(04:45):
it's it's a small comfort.
Speaker 3 (04:47):
It's just really.
Speaker 2 (04:49):
Uh, like, people are still going to vote for him, and
people are talking about how he's still getting donations and
this is going to like cause his donations to go up.
Speaker 3 (04:57):
So, it's mixed. It's mixed.
Speaker 2 (05:01):
I mean, I guess this is better than the alternative
of him getting off for clearly doing it.
Speaker 1 (05:06):
We had such different takes on this because I was
like scrolling Blue Sky and threads and being like memes, memes, memes,
like celebratory dances, celebratory GIFs. I was like, you know,
if anything, I was like, man, I really missed the
days of Twitter when news like this would have really
like hit different. You were like, at least the rule
(05:27):
of law is still upheld in this country. I'm like, how
many celebratory GIFs can I use?
Speaker 2 (05:35):
Well, I think you're having a healthier reaction. It sounds
like you're at least getting a little bit of joy
out of it.
Speaker 1 (05:40):
You know, fine joy where you can, I will say so.
One of my favorite tweets was by Zach Silberberg, who tweeted,
so is Trump cooked now? Or is it like Air
Bud logic, where it's like, there ain't no rule that
says a man with thirty four criminal convictions can't be president?
And not only do I like that, because famously, if
you've seen Air Bud, there's no rule in the rule book
that says a dog can't play basketball. But Zach is
(06:02):
actually right here. I was like, well, certainly this means
that he cannot be elected president. That is not correct.
There is no rule, just like there is no rule
that says a dog can't play basketball. There is no
rule that somebody with this many felony convictions cannot be president.
Speaker 3 (06:17):
Yeah, he could run for president like a dog.
Speaker 1 (06:21):
That's something else about Trump: he's always
saying things are like a dog, and I always wondered,
what does he think about dogs? He'll be like, oh,
he lost the election like a dog. It's like, is that
what dogs are known to do?
Speaker 2 (06:36):
Yeah, in his vocabulary, being a dog is one of
the worst things. Like, I don't know why he hates
dogs so much.
Speaker 1 (06:42):
One of the worst things someone can be is
a dog.
Speaker 2 (06:45):
Yeah, I guess because they're good and kind and loyal
and some of them are smart.
Speaker 3 (06:50):
That's why he hates them.
Speaker 1 (06:51):
So I did see this tweet that made me laugh.
That was a screenshot of the headline that I assumed
was fake that said if Trump's conviction lands him in prison,
the Secret Service goes too. And I thought, haha, that's so funny,
looked it up to confirm, and it's real, from the
New York Times. This is true. Side note, everything I've
read suggests that he probably will not serve time. But
(07:13):
if he was sentenced to jail time in July during
his sentencing, the Secret Service has to go.
Speaker 3 (07:19):
Now that's a movie I would watch. Uh, what would
it be titled?
Speaker 1 (07:24):
Secret Servin' Time. And on the cover it's like
Trump in the middle with his hands under his chin
and the Secret Service agent sort of like back to
back on either side of him with the sunglasses.
Speaker 3 (07:37):
That's pretty good.
Speaker 2 (07:38):
You came up with that right on the spot. Listeners,
we did not plan this. That's pure Bridget magic right there.
Speaker 1 (07:47):
Thank you, thank you. I'll take that.
Speaker 2 (07:49):
For some reason, I'm picturing a sequel where the cover
looks kind of like the cover of Sister Act 2 somehow,
like, like he gets out, and then right as he
gets out, he gets convicted in the Georgia case or something,
and then he has to go on the run with
the Secret Service.
Speaker 1 (08:05):
Well, I applaud you for really remembering the plot of
Sister Act 2, because that's kind of what happens.
Speaker 3 (08:12):
It was an important movie.
Speaker 4 (08:18):
Let's take a quick break. And we're back.
Speaker 1 (08:33):
So there's no real good transition from Sister Act 2
to what I want to talk about first. But we
have to talk about that viral All Eyes on Rafah
image on Instagram. So, like a lot of people, I
reposted the All Eyes on Rafah AI generated image on
my Instagram story, and I am not alone. According to
NBC's Kat Tenbarge, the image, calling for people to
(08:55):
pay attention to Israel's ongoing war on Gaza, has drawn
more than forty four million shares on Instagram in less
than forty eight hours, highlighting a renewed social media push
by supporters of Palestinians following a deadly Israeli airstrike. So
Rafah is a city split between Egypt and the Gaza
Strip that serves as the only crossing point between them,
so it's an important spot for aid and supplies and
(09:18):
food to go into Gaza. So, if you haven't seen it,
the image is AI generated, and it shows tents in
a camp arranged to spell out the phrase All Eyes
on Rafah. The area is in the southern part
of Gaza, and it's filled with refugee
(09:40):
tent camps where local officials said that at least forty
five civilians died after an Israeli strike on Sunday. But
the image that went viral on Instagram is not a
real image of Rafah. It is pretty clearly AI generated.
Speaker 3 (09:53):
Yeah, it's pretty clearly AI generated.
Speaker 2 (09:56):
It shows, like, thousands of tents in a very orderly
way that does not match any of the actual photos
that we've seen come out of that part of Gaza.
But so if it's pretty clearly AI generated, then why
didn't it get labeled on Instagram?
Speaker 1 (10:18):
Great question. I don't know. My best guess is bad moderation.
But honestly, at this point, your guess is as good
as mine. Meta has really talked up their efforts to
label AI generated images, although there are like plenty of
gaps in those rules. According to the Washington Post, the
All Eyes on Rafah image appears on Instagram without a
(10:39):
label calling it out as AI generated. The company does
not appear to have taken any action to remove it,
and then when the Washington Post asked Meta why they
didn't label it, Meta did not reply. So we don't know.
Speaker 3 (10:50):
Strange, Okay, So where does the phrase come from?
Speaker 1 (10:54):
Well, according to Mashable, the slogan All Eyes on Rafah
has been repeated for months. It was initially coined by
doctor Rik Peeperkorn, a World Health Organization director. Peeperkorn said
this back in February, when an Israeli invasion into Rafah
was still just a possibility. It hadn't happened yet.
In a press briefing from Rafah, Peeperkorn said that this
would be a quote unfathomable catastrophe, further expanding the
(11:17):
humanitarian disaster beyond all imagination. All eyes are on Rafah,
declared Peeperkorn. We all watch the news and we all
get the stories about this possible incursion, and military activities
are getting closer. This should not happen. There's no place
for people to go. This is a desperate plea. Yes,
contingency plans are being made, but they would be completely insufficient.
(11:38):
According to 404 Media, the image originated with
a photographer in Malaysia's Instagram account. The user's own Instagram
links to charity fundraising pages for Palestinian aid and has
a mix of real images and video, and then, like,
highly shareable AI generated images kind of similar to that
All Eyes on Rafah image.
Speaker 2 (11:58):
Why do you suppose it took off on Instagram specifically?
Speaker 1 (12:02):
So, there are a couple of reasons I think this
happened on Instagram specifically. Instagram has really emerged as an
important platform for amplifying information about Palestine, which is kind
of fraught given the way that content about Palestine and
news content in general is being moderated on the platform.
For instance, earlier this month, Meta's Oversight Board was debating
(12:22):
whether or not the phrase from the river to the
sea violates the platform's rules, including their ban on hate speech,
and, like, just the other day, Adam Mosseri was trying
to clarify new rules about how political content would be
moderated in general on the platform, which I have to
be honest, I don't fully understand what their plan is,
what they're trying to say. I don't feel like I
have a good read on what they consider political content.
(12:44):
But Instagram has really played an important role in amplifying
information about what's happening in Palestine right now.
Speaker 2 (12:51):
Just one more example of how bananas it is that
we live in a world where Instagram and Meta get
to make these enormous decisions about what kind of content
we see related to extremely serious events happening around the
world that like really do have an impact on all
(13:11):
of us. Like it feels far away, but there are
a lot of ways that you know, it really.
Speaker 3 (13:18):
Impacts us all.
Speaker 2 (13:19):
So this, uh, you know, there's obviously been a lot
of content online about what's happening in Palestine.
Speaker 3 (13:28):
Why do you suppose that this particular image went so viral.
Speaker 1 (13:31):
Well, probably one of the biggest reasons is just the
ease of posting it. It was posted using Instagram's Add
Yours feature, where, like, with just that one click, or
really I guess two clicks, you can quickly and easily
add an image to your story without even really thinking
about it. This is true for me, but like your
mileage may vary. I also think there's something about the
(13:51):
fact that people were sharing this mostly on their Instagram
story as opposed to adding it to their Instagram grid.
I think that things can really take off on stories
because of the quickness that you can post them, and
because they only last for however long, they don't last forever,
and so I think that people are more prone to
share things on the story because it's just so quick. Also,
(14:15):
a lot of critics of the image have pointed out
that, like, journalists have died or been hurt to get
images about what is actually happening in Gaza online, and
that it's like disrespectful to share an AI generated image
when those real images exist, or like why not use
art from an actual human Palestinian artist as opposed to
(14:36):
AI generated art. But I actually suspect that one of
the reasons why the image is AI generated is specifically
to be able to skirt Instagram's moderation policies. Like I
think had they used a real image of what's happening
in Rafah, there is no way that that image would
be shared by millions of people because it would probably
be moderated off of the platform.
Speaker 3 (14:55):
Wait, did you just say that?
Speaker 2 (14:58):
You think it's possible that had this been a real image,
it would have been more heavily moderated, but specifically because
it was AI generated, it was able to spread more widely?
Speaker 1 (15:10):
I think that's exactly right. So NBC reports that while
the All Eyes on Rafah image has spread quickly, video
from Rafah posted by Palestinian journalists has been restricted and
in some cases removed from social media for depicting the
graphic aftermath of the Israeli strikes. Two of three Instagram
posts showing burned and grievously injured and dead bodies after
the recent strike were removed, and one had a sensitive
(15:32):
content filter for quote graphic or violent content placed in
front of it. An Instagram spokesperson said that the company
removed that content due to its violent and graphic nature,
which it said violated the platform's policies. So it does
seem to be that these platforms are saying that in
some cases, the reality, the actual images and video and
content and footage coming from Palestine is deemed too graphic
(15:56):
to be shown on these platforms. So I do think
that had this been a real image from Rafah, I
don't think that millions of people would be able to
post it, just by the nature of what we already
know around the kinds of content coming out of Palestine
that Instagram and Facebook and Meta are currently moderating off
the platform.
Speaker 3 (16:14):
Yeah.
Speaker 2 (16:14):
Boy, that's a really interesting take, because I think a
more sanitized version is exactly what it is.
Speaker 3 (16:20):
You know, it looks.
Speaker 2 (16:23):
Cold and hopeless and desperate, but there's no people in
the image, Like there's no people in the image, which
is just one of many cues that clearly it's AI. Yeah.
Speaker 1 (16:36):
I mean, it's certainly more sanitized, definitely a more sanitized
vision of what's actually happening in Rafah, and I do
think that's part of why it went viral. As Friend
of the Show Ryan Broderick over at the newsletter Garbage
Day put it: if you're desperate for a super concise
explanation as to how this random Malaysian user ended up
creating a post of the moment, it's because they basically
(16:57):
managed to do the impossible. They generated a pro Palestine
solidarity image vague and abstract enough to bypass both censors
and filters on one of the biggest remaining social networks
that real people still use. So there's been a ton
of debate about this image, you know, after it went super viral,
like whether or not people should be reposting it. Full disclosure,
I reposted it. I remember when I reposted it,
(17:21):
my finger hovered over the Add Yours button, and I thought, like,
should I repost this? Ultimately, I did. And judging
from the photographer who first shared this image, I do
think that image was initially shared with, like, good intentions.
The person who shared it. You know, they share on
their own Instagram. They share a lot of resources and
(17:43):
information about Palestine. But as 404 Media points out, even
assuming the best intentions for the original poster, the success
of that image still led to the creation of hundreds
of copycat AI generated images made by other accounts basically
just trying to go viral, to the point
where when you search All Eyes on Rafah on either
Facebook or Instagram, the majority of what you get is
(18:06):
like AI generated copycat content. 404 Media, we'll put
the piece in the show notes, found
that already, images created in the style of
that AI generated one have really taken off. And I
think it illustrates a truism about the way that AI
generated content works and moves on social media now.
(18:29):
Like a lot of the people posting it are just
trying to see what sticks or what hits. They're not
posting it because they care about whatever the image purports
to show or be about. They're just like, oh, this
image purporting to show Rafah hit. Well, then I'm
going to make a million different versions of that and
see if any of those stick too.
Speaker 2 (18:47):
Yeah, it's such a different universe with these cheap, accessible
AI tools now because you can do that, you can
make literally a million and upload them all and you
only one to go viral. Right, It's just like spamming
the entire internet exactly.
Speaker 1 (19:05):
It's one of the reasons why Shrimp Jesus AI generated
images have gone viral, because one person made the AI
image of half shrimp, half Jesus, and that went viral
on Facebook. And now there's just, like, viral Shrimp Jesus
AI generated content all over social media by copycats who
were like, oh, well, that one really hit. Maybe
(19:28):
half shrimp, half Jesus is the sweet spot. Let's make
a million more.
Speaker 2 (19:32):
God, it's like every day gets a little bit more
clear what it means to live in this new world
where anything that anybody creates can immediately and almost for free,
be recreated like a million times.
Speaker 1 (19:47):
Yes, So I want to talk about sort of the
debate and the ethics of this image. So a whole
bunch of folks smarter than me have been weighing in
on this image, with some of them criticizing it basically
saying that it's another version of that black square that
people posted in twenty twenty that we actually did one
of our first episodes of the podcast about where people
(20:08):
just post this thing on social media and then feel
like they've done what they need to do. It's a
kind of a self serving thing where they post it,
they feel good and it's done.
Speaker 3 (20:19):
I can see that. You know.
Speaker 2 (20:20):
It's like it feels like you're doing something by posting
or reposting. Like you said, is that you don't even
have to create a post. You just hit the add
to stories button.
Speaker 1 (20:32):
An Instagram user with the handle islam MC said quote
it gives a space to lazy activism, to people who
never actually wanted to talk about the cause or don't
want to be controversial. It gives them the space to say, here,
you wanted me to talk about it, here, I posted
about it, now shut up. Honestly, I get that criticism,
and I want to speak carefully here because I also
(20:53):
think that we're in this time where people have maybe
gotten this message like there is a risk to speaking
up about this issue. So when people are given a way
to do so in a way that feels sanitized
or safer, and everybody else is doing it, maybe that
makes it feel easier or safer to also add their voice.
But I think that what some of the people who
(21:14):
are criticizing this are saying is that you know, this
image doesn't have any resources. It doesn't have any links
to more information, it doesn't have a link to donate.
It's not even really Rafah. It doesn't have any actual
Palestinians in it. Why are you posting this? How does this
help anybody?
Speaker 2 (21:31):
Like?
Speaker 1 (21:31):
What does this do other than make me feel like
I'm adding my voice? But then I saw other people saying, well,
anything that shows that people are against what's happening in
Palestine is a good thing. Jason Okundaye, the author of
the book Revolutionary Acts: Love and Brotherhood in Black Gay Britain,
tweeted: the AI All Eyes on Rafah post might feel
performative or frustrating, but honestly, Israel Palestine is a war
(21:55):
of public opinion and consensus. If it has become quote
trendy to stand against the massacre of Palestinians, then that is
a net good. I don't think it can be compared to
the BLM black squares, which were weirdly deferential and based
on this nebulous aim of quote listening and learning. The
premise of this protest against the onslaught in Rafah is
that we want the genocide to stop. That is a
(22:16):
clear, achievable demand. Israeli politicians know that the war of
consensus is key to victory, which is why they
have a militarized communication strategy. If a random influencer is
sharing a post about Rafah after months of mindlessly posting,
this is an opportunity to seize not to be superior.
And I do think that that might be part of
the conversation, right that social media by definition is always
(22:40):
going to be somewhat performative, and I think that because
of this feeling that people are watching to see if
you share the right thing the right way, it maybe
creates this tension. And because this is all happening in public,
because it's social media, I do think it risks overshadowing
the real issue. Like, as much as I have really
been tuned into some of the media and robust debate
(23:01):
about this image, I do not want a conversation about
the ethics of one AI generated post to overshadow conversations
about what is happening in Palestine. And I guess I
do think that we're in this space where people really
maybe don't know what to do and feel powerless and
feel like we're all witnessing something horrifying, but you're also
(23:22):
struggling to find agency of like, well, what can I
do and listen? We can post on social media, we
can donate money, we can protest. If you're going to protest,
use our guides on how to do it safely. We'll
put them on the show notes. But I think that
that's why you're seeing these online movements that are calling
for people to, you know, block Kim Kardashian in solidarity
(23:43):
with Palestine, which we talked about after the Met Gala.
I think people are looking for ways to feel like
they have agency. And I can understand not really knowing
what action to take and just smashing that share button
on Instagram like I did, even while not knowing if
it was the right thing to do or not. And
I say all that to say that if you do
have extra money and you're thinking like what can I do?
(24:05):
You could give money to the Palestine Children's Relief Fund.
If you're in DC, one of my favorite bars in
the city, Suns Cinema, shout out to Suns, is donating
all the bar money they take in on Saturday, June
first, to the Palestine Children's Relief Fund. So I certainly
wouldn't act like I know the exact right thing to
be doing here, but I can tell you I can
(24:25):
share what I am trying to do, which is listening
to Palestinian voices, folks who are actually on the ground,
amplifying what they put out, really trying to engage with
primary sources as much as I can, donating money to
organizations like the Palestine Children's Relief Fund where I can,
and doing my best to educate myself and others.
Speaker 4 (24:50):
More.
Speaker 5 (24:50):
After a quick break, let's get right back into it.
Speaker 1 (25:03):
And speaking of AI generated content on social media,
new research from Google researchers and several fact checking organizations
has found that most image based disinformation online right
now is AI generated. However, the way that researchers
(25:26):
are collecting data might actually be undercounting the problem. Basically,
AI generated image based disinformation was not really a thing
until late twenty twenty three when AI image generators became
widely available and popular, But now they have basically replaced
all other forms of image based disinformation. So these AI
(25:46):
generated pictures of Rafah, even while they might have
been initially shared for good, it is important to know
that that's not happening in a vacuum. And if images
like that go megaviral and encourage other people to copycat
AI generated images, we will just be awash in images,
most of which are not real about a conflict that
(26:07):
is pretty serious.
Speaker 2 (26:09):
Yeah, it gets harder and harder to know what really is,
like, true. You know, you even mentioned that the account that
shared that photo had a mix of real photos,
you know, photos that were taken of the world, and
AI generated images, and the blurring of that line
(26:30):
is scary, right, Like, maybe one of the other things
that we can do is really try to put forth
that effort that's needed to use primary sources, like you say,
and make sure that we do everything we can to
have an accurate sense of what is actually happening in
the world and not just the narratives that are being promoted.
Speaker 1 (26:53):
Absolutely, trying to engage as much as I can with
primary sources has been really fruitful. And you know, you
can read this, you can read the summary. I mean,
I guess you're listening to me summarize stuff for you right
now, keep doing that, but as much as you can,
we put the primary sources in the show notes so
that you can engage with them. You know you should
(27:14):
do that?
Speaker 3 (27:15):
Yeah, oh absolutely, you know.
Speaker 2 (27:16):
I'm yeah, and I think listeners have a right to
expect that. And all those other shows that aren't doing it,
listeners should stop listening to them.
Speaker 3 (27:24):
They should listen to our show.
Speaker 1 (27:25):
Are you starting, like, a podcast beef?
Speaker 3 (27:27):
Is there?
Speaker 1 (27:27):
Is there a show in particular that you're thinking of,
because I'm down for a podcast beef. I've been
saying it has been too long since I had a
good old fashioned beef. So if you are throwing down
the gauntlet and we're gonna kick up a beef with
another podcast, I'm here for it.
Speaker 2 (27:42):
I didn't have anybody in mind, but you know, yeah,
maybe they should be on notice. Put your sources in your
show notes.
Speaker 1 (27:49):
We're coming for you. PJ. We didn't forget about what
happened at reply All.
Speaker 3 (27:54):
Shit. Yeah, I went there. Yes, we've got all these
AI generated images.
Speaker 2 (28:02):
Is that it? Is that the only dangerous thing that
AI has made cheaply and easily available, that is poisoning
the minds and lives of a generation? Is that it?
Speaker 1 (28:13):
That is not the only harm out there on social media. God,
I wish, because we have to talk about this really
upsetting report from BBC that is upsetting but maybe not
totally surprising, that found that criminals are producing and selling
guides to sextortion on social media. The guides show people
how to pose as young women online and trick victims
(28:35):
into sending sexually explicit material over and then they can
use that material to financially blackmail them.
Speaker 2 (28:42):
My god, this is one of the darkest things I
think we've talked about on this show in a while.
Speaker 1 (28:47):
Yeah, and we've talked about how these kinds of sextortion
schemes work and how they are mostly being used to
target young men and boys, mostly being perpetrated by gangs
based in West Africa. However, what is new here is
that criminals have added this extra stream of income
by creating and selling these guides to help other people
(29:08):
do this as well. That it's becoming a marketplace where
it's like, oh, you too can give me money and
I can show you how to make money by preying
on youth. So this is pretty serious. Kids are dying
because of it. In the UK. Two British teenagers are
known to have died by suicide since October twenty twenty
two after becoming victims of sextortion schemes.
Speaker 2 (29:28):
Yeah, and even that underestimates the real scope of the
problem, because a lot of people who are targeted in
these schemes don't die, but are still severely harmed by it.
Speaker 1 (29:39):
Yeah. Paul Raffile, an intelligence professional and expert on sextortion,
says that this whole thing is a massive threat to children.
Paul says internet scammers over these past two years have
found out that they can get very rich, very quickly
by scamming an untapped market, and that's teenagers. He also
says that big tech companies are not doing nearly enough
to stop sextortion, surprise surprise, saying this crime has
(30:02):
really exploded on Instagram and Snapchat over the past two years.
These platforms need to aggressively go after these criminals. Now.
Snapchat did tell BBC we've been ramping up our efforts
to combat it, including a reporting option specifically for threats
to leak sexual content, and in app education for teens.
In a statement, Meta, which owns Instagram, said it offered
quote a dedicated reporting option so people can report anyone
(30:25):
threatening to share private images. We default teens under eighteen
in the UK into private Instagram accounts at sign up,
which hides their follower and following list into stricter default messaging.
Facebook said, and you know, I guess I agree that
they're just like not doing enough. We already know from
like multiple episodes that Instagram really is aware the way
(30:48):
that their product is being used to harm and target
use in sexualized ways, and like they really just like
aren't newing enough. I will agree that they're that like
not enough it's being done. And I think this is
really heartbreaking because creeps are preying on the shame and
silence and secrecy that a young person would feel getting
mixed up in this. Like if you're a young person
(31:08):
who has gotten caught up in a sextortion ring, it's
got to be isolating and tough to talk about with
the adults in your life. And I think that people
who are doing this they know this, and they are
using that dynamic to prey on kids. Meanwhile, platforms aren't
doing enough to stop them. I do have another kind
of dark Internet thing to discuss, which is whether or
(31:30):
not parents whose children die by suicide have a right
to examine those kids' social media footprints. So over in
the UK, Ellen Roome, the mother of a child who
died by suicide, has gathered more than one hundred thousand
signatures on a petition calling for social media companies to
be required to hand over data to parents after a
child has died. Under the current law, parents have no
(31:52):
legal right to see whether their child was being bullied
or threatened, was looking at self harm images or other
harmful content, or expressed suicidal feelings online, or searched online
for help with mental health problems. Roome says this is wrong,
that her son left no indications of his motivation before
he died. The last time she saw him he seemed
(32:13):
happy and well adjusted, and that now she just has
no idea what happened. She says it's really awful. If
a child died of an illness, you'd do a post
mortem and work out what was wrong. It's not going
to bring my son back, and I'm not going to
stop grieving, but maybe just to understand what happened in
those last few hours, she said. Because an hour and
a half before he left the house, and there's video
of him saying goodbye to his friend, he was fine.
(32:33):
So what changed, or what was going through his mind?
And maybe social media may give me the answers. Because her
petition got more than one hundred thousand signatures, there is
likely to be a debate in Parliament on this issue.
She was part of a group of parents that met
with both the government and Ofcom, which is the UK's
communications regulator that regulates social media, radio, TV and other sectors.
(32:55):
The group included people like a man whose daughter died
by suicide after viewing harmful content online, and the parents
of another child who died after possibly taking part in
a social media challenge. As part of the Online Safety Act,
which went into effect on April first of this year,
coroners are now granted new powers that give them access
to social media and online gaming data when investigating a
(33:17):
child's potential suicide. However, even under that new Online Safety Act,
parents are still not entitled to access the data themselves,
and the ruling only applies to children who died by suicide,
not those, for example, who were maybe murdered by somebody
that they met online.
Speaker 3 (33:34):
It's a tough one because.
Speaker 2 (33:39):
Obviously you have to sympathize with these parents who've suffered
a terrible loss and have a right to want to
know what happened in their kid's life. But you know,
it does also open up a whole can of worms,
I think, making kids' social media posts available,
(34:09):
you know, like, where does the line of privacy for
kids get drawn?
Speaker 3 (34:16):
It's, uh, it's a tough one, but yeah, this is.
Speaker 1 (34:20):
Why I wanted to include it because I really cannot
imagine the grief of losing a child in this way,
and I then can't imagine the added grief of not knowing,
like if something impacted them in their last moments and
just not having insight into that. But I also don't
(34:41):
know enough about, yeah, the privacy ramifications of giving parents
that kind of access. And so I think it's a conversation that really illustrates
the complexity of where we're at when it comes to
social media right now, and kind of like what you
were saying with the Palestine story, how much platforms are holding,
(35:03):
how much weight their decisions and their policies have, how
they impact youth and the mental health of youth, Like
that is a lot of power that we're giving people
like Mark Zuckerberg and Adam Mosseri and Elon Musk.
Speaker 3 (35:17):
Yeah, that's a great point, you know, and it's.
Speaker 2 (35:22):
These are tough, complicated issues, and historically politicians have just
not dealt with them at all and left it to
the Zuckerbergs of the world to figure out how much
they can get away with before it starts cutting into
their bottom line. I guess it's good to see it
being debated in uh, you know, in a regulatory context
(35:46):
like this. Bridget, we've got COVID, we're tired, and it hurts.
We have covered some pretty difficult stories today. Like, I
don't know if this is just how you deal with
being sick, or maybe it's just been a
rough week. Do we have anything lighter that we can
(36:07):
end this episode on?
Speaker 1 (36:08):
Can I offer you some pizza made with glue, Michael?
Speaker 3 (36:13):
Yes, please. I love pizza, so.
Speaker 1 (36:15):
You and I, I should say, people might hear some
tension as we're discussing this, because we feel differently about
this next story. Right? We do.
Speaker 2 (36:23):
We talked about this ahead of time. Listeners might be
interested to know that during the week we keep like
a running list of like a Google doc, where we're
just adding ideas of stories that we might want to cover.
I try to add a bunch of stories, and then
before we record the episode, Bridget goes through and deletes.
Speaker 3 (36:43):
All of my stories, and adds some new ones.
Speaker 1 (36:46):
That's not true. I would say, like, I think you've
got, like, a solid fifty fifty. And also, it's because
if you were in charge, it would be all these
stories about hard tech that everybody would be like, what
are they talking about? But it's like fifty to fifty,
half stories that you pick,
(37:08):
half stories that I pick. I don't. I don't delete
your stories, not unless they're really bad and, like, stupid,
and I don't want to talk about them. So yeah,
you know what I do. I do delete your stories sometimes.
Speaker 3 (37:19):
And that's good.
Speaker 2 (37:20):
This is your show, and thank goodness it is, because
you're so much better at this than me. I'm just
pleased some of them make it through the filter.
That's not the point here. The point is I put
this story on the list thinking that it was gonna
go one way, and then I looked through the notes
that you wrote and I was like, oh, Bridget like
really likes this, which is surprising to me because I
(37:41):
had a different reaction.
Speaker 1 (37:42):
Okay, why don't you tell people what we're talking about,
because they're like, I don't even know what story we're
talking about. Maybe they do know the story, but
why don't you fill us in.
Speaker 2 (37:50):
The pizza glue story? So, uh, you know, people have
probably heard about this, or maybe not. I think it was last week
Google rolled out their new AI summary feature at the
top of search. So when you search, instead of well,
you still get your list of ten links buried behind
a bunch of sponsored nonsense, but at the very top
(38:13):
of your results, you're going to get an AI summary
generated by Google's AI bot, Gemini that attempts to answer
the question that you Googled for, and then you can
scroll past it to go look at the links.
Speaker 3 (38:25):
But there. I think we maybe talked.
Speaker 2 (38:28):
About this a little while ago that there's a lot
of concern that this is going to really just fundamentally
change search because you type in a question to Google,
you hit enter and boom, there's an answer right at
the top of the page. And so why would you
look beyond that to go to those primary sources that
we were just talking about a couple of minutes ago,
(38:51):
And they rolled out this product, and the Internet, or
at least my little corner of it, has just been flooded
with screenshots of Google's AI summary getting it wrong because
they scraped a lot of Reddit posts to teach the model,
(39:12):
like what the world is, so it can have information
to spit back to us. It really highly ranks what
people wrote on Reddit. But on Reddit, a lot of redditors
use humor and sarcasm. That's the thing that exists on Reddit.
And so like this one screenshot that was going around
(39:34):
was about how to make a pizza, and it had
a line that was something like, you know, if you
want a thicker sauce, you could add a little bit
of glue.
Speaker 1 (39:45):
So I gotta stop, because The Verge collected some of
the best incorrect answers that Google now provides to questions.
Google claims that former US President James Madison graduated from
the University of Wisconsin, not once, but multiple times. That
a dog has played in the NBA, NFL, and NHL.
Shout out to Air Bud. There's no ru...
Speaker 2 (40:08):
We're gonna talk about basketball-playing dogs in this episode?
Speaker 1 (40:13):
Listen, I have COVID, let me have this. There's no
rule that says a dog can't play in the NHL. Also,
my personal favorite, that according to geologists at UC Berkeley,
you should eat at least one small rock per day.
Speaker 3 (40:29):
It's like, we're all chickens. Chickens eat rocks.
Speaker 2 (40:34):
Yeah, chickens eat rocks. That little, like, gizzard thing
that hangs down from their throat? They got little
rocks in there to help grind up their food.
Speaker 1 (40:41):
Oh Lord, in case listeners don't know, Mike grew up
on a farm. Is this your tip number five about growing up
on a farm?
Speaker 2 (40:47):
No, it's just a joke about eating one small rock
per day.
Speaker 3 (40:51):
Like if you were a chicken, that might not be enough.
Speaker 1 (40:56):
So even for chickens who are using Google, they're like,
I don't know if these responses are on the money,
it should be more than one small rock.
Speaker 2 (41:05):
Yeah, And that just illustrates the perils of trying to
design a product for everybody.
Speaker 3 (41:09):
You know, you're just not gonna please anybody, humans or chickens.
Speaker 1 (41:12):
Okay, so when you asked Google how to get your
cheese to stick better to the crust when you're making pizza,
Google said to add glue. The reason why you put
it on the agenda to talk about last week, which
we didn't talk about, was that a tech journalist actually
tried this recipe. Business Insider's Katie Notopoulos said, I knew
my assignment. I had to make the Google glue pizza.
(41:36):
Don't try this at home. I risked myself for the
sake of the story. But you shouldn't. So you put
this on the agenda. You come to look at the
outline that I've come up with to, like,
do the fact check, blah blah blah. You're like, oh,
I see your position on the glue eating is
you're pro. I was like, yeah, she's eating glue, awesome,
(42:00):
But what were your thoughts?
Speaker 2 (42:02):
My thoughts were like, what a ridiculous little stunt, Like
it's just silly, Like you're not supposed to eat glue.
The reason that those screenshots are funny is because it
says to eat glue, but everybody knows you're not supposed to.
Speaker 1 (42:14):
Well, Mike, if you read the primary source, you would
know that Katie actually has a history of eating glue.
She says, yes, since I know you're wondering, I did
eat paste as a kid. I loved it. So this
is not a stunt. This is,
like, her returning to form.
Speaker 2 (42:30):
That does nothing for me, though, because it's not like
she's continuing to eat glue today. She's just like a
glue eater. Like I did read the story, I clicked
into her bio. She's a real journalist who writes like real.
Speaker 1 (42:46):
She is a real journalist, and I like her. And if she wants
to come on the show and talk about the glue,
I would love to have her. You won't be a part
of that episode, Mike, since you have such an attitude
about it. But Katie, if you're listening, I like that
you ate the glue. I'm not gonna shame you for
eating the glue. Come on the podcast and talk about it.
Mike won't be on when we talk about the glue.
Speaker 3 (43:05):
But like, really, why did she eat the glue?
Speaker 1 (43:07):
Because Google told her to.
Speaker 2 (43:10):
Google tells us to do so many things. That doesn't
mean we have to do them. We're humans, we have agency
for a little bit.
Speaker 3 (43:15):
Yet she was.
Speaker 1 (43:17):
Proving a point about search, that search has gotten
so bad that, with the examples that it's giving,
it'd be foolish for somebody to actually follow them.
Speaker 3 (43:27):
So she did it, but to what end?
Speaker 1 (43:30):
Right?
Speaker 2 (43:30):
Like, the internet was already full of memes about
how bad Google Search was, or at least, you know,
the AI summary. It wasn't like she was adding
to our understanding of how this new thing is
gonna change search.
Speaker 3 (43:46):
She was just trying to get some clicks.
Speaker 1 (43:47):
Listen. I think she took it to the next level.
And honestly, when she accepts her Pulitzer, where are you
going to be? I'll tell you where you're gonna be.
Not eating glue on pizza, that's where. Also, she didn't
just eat glue. She mixed glue into pizza sauce and
then made a homemade pizza. And I guess because she
was using, like, a jarred pizza sauce, she said, for
(44:09):
anybody who feels compelled to point out that I shouldn't
have used jarred sauce or pre-shredded cheese, please keep
in mind, I'm eating glue here. How do you not
love that?
Speaker 2 (44:20):
Yes, it's kind of funny. It's somewhere between
Jackass and that video, did you ever see the
video of some local journalist who went down to the
police station to test out a taser? He was
gonna, like, show how tasers work, and he had them
taze him.
Speaker 1 (44:41):
He, like, falls over.
Speaker 3 (44:43):
I have to laugh because it looks
Speaker 2 (44:45):
Like it really hurts, and I have to, like, I
feel like eating glue for the story is kind of
like getting tased for the story.
Speaker 1 (44:56):
So I guess you don't want any of this glue
pizza that I made.
Speaker 2 (45:00):
No, I don't. I love pizza, anything on it, but
not glue. Glue's not food. Glue is not food, Bridget.
Speaker 1 (45:08):
It's a bold stance, but I guess it's a great
one to end on. Hey, real quick, before you go,
this is future Bridget and I have a little bit
of breaking news, glue vindication for you, because the day
after Mike and I recorded this episode, Google announced they'd
be taking steps to limit the use of joke replies
and user generated responses from places like Reddit in the
(45:29):
AI Overview summaries now appearing in search. Katie Notopoulos, who
ate the glue, had a bit of a victory lap
about it in a new piece for Business Insider called
Sometimes to Save the Internet,
Speaker 3 (45:40):
You must eat glue.
Speaker 1 (45:42):
Katie writes, there was a whole lot of attention paid
in the past week about how bad many of these
AI generated search results were, particularly because they were wacky
and funny. Was Google's response to tamp down its big
AI search ambitions just because a few jokesters on
X made silly queries? Maybe. Could it be because Google
took the feedback seriously and realized that there were use
(46:03):
cases they had not expected and they needed to retool
based on this new information? Maybe. Could it be a
combination of those two things that a small minority of
trolls abusing the system for laughs revealed some serious flaws
and dangers of putting AI in search results? That it
wasn't just a PR disaster for a week, but made
Google seriously rethink the safety of the AI Overviews product and
(46:24):
what it would actually be used for? Most likely. But
Katie writes, let's not overlook one crucial factor here: me.
I actually ate the glue pizza. It did not taste good,
and please do not do this at home. The fact
that Google rolled this out with such easily exploitable flaws
that was bad. But fixing it that's good. And I
(46:44):
like to convince myself that my eating glue pizza was
part of the noise that prompted Google to act. Please please,
I don't need your thanks. I'm just doing my job.
As they say, not all heroes wear capes, some just
eat glue. Katie, I agree, you are the hero the
internet needs. Mike. Thank you for being here. Even though
(47:06):
your opinion about Katie eating glue is bad, we still
like to have you around. Thanks so much, and thanks
to all of you for listening.
Speaker 2 (47:12):
Thanks for having me, Bridget and I hope you feel
better soon. You too.
Speaker 1 (47:16):
I feel like people are listening to our COVID mess. I'm sorry,
that's all I can say.
Speaker 4 (47:24):
This was a silly one, I think.
Speaker 1 (47:30):
If you're looking for ways to support the show, check
out our merch store at tangoti dot com.
Speaker 4 (47:34):
Slash Store.
Speaker 1 (47:36):
Got a story about an interesting thing in tech, or
just want to say hi? You can reach us at
Hello at tangoti dot com. You can also find transcripts
for today's episode at tangoti dot com. There Are
No Girls on the Internet was created by me, Bridget Todd.
It's a production of iHeartRadio and Unbossed Creative, edited by
Joey Pat. Jonathan Strickland is our executive producer. Tari Harrison
is our producer and sound engineer. Michael Amada is our
(47:58):
contributing producer. I'm your host, Bridget Todd. If you want
to help us grow, rate and review us on Apple podcasts.
For more podcasts from iHeartRadio, check out the iHeartRadio app,
Apple Podcasts, or wherever you get your podcasts.