Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Hey, this is Anney and Samantha and welcome to Stuff
Mom Never Told You, a production of iHeartRadio.
Speaker 2 (00:18):
And welcome to another edition of Monday Mini. Quick content warning.
We are talking about pornography, sexual harassment, and stalking and exploitation,
all of the gross stuff. I'm not necessarily going to
get into specifics. I am going to talk about certain
incidents and what's happening around the world.
Speaker 3 (00:39):
So yeah, there you go.
Speaker 2 (00:41):
And in the world of this is why we can't
have anything nice.
Speaker 3 (00:46):
We're talking about AI again.
Speaker 2 (00:48):
So we've had several episodes about the world of AI
and how it's recently grown rapidly, with all the talks
about the advantages and disadvantages, including some of the backlash
we talked with Bridget about, who also talks a lot
about it on her show, There Are No Girls on
the Internet. And yeah, just overall, the usage of AI
has grown, with that continued backlash and concern. So recently
(01:11):
in South Korea. I know, I think my feed has
a lot of South Korean information, but hey, a major
incident occurred where it was discovered hundreds of men and
boys were using AI to create deep fakes in order
to harass or blackmail women. So here's a bit more
from the BBC: Authorities, journalists and social media users recently
identified a large number of chat groups where members were
(01:32):
creating and sharing sexually explicit deep fake images, including some
of underage girls. Deep fakes are generated using artificial intelligence
and often combine the face of a real person with
a fake body.
Speaker 3 (01:44):
And it continues.
Speaker 2 (01:45):
The spate of chat groups linked to individual schools and
universities across the country were discovered on the social media
app Telegram over the past week. Users, mainly teenage students,
would upload photos of people they knew, both classmates and teachers,
and other users would then turn them into sexually explicit
deep fake images. The discoveries followed the arrest of the
Russian-born founder of Telegram, Pavel Durov, on Saturday, after
(02:09):
it was alleged that child pornography, drug trafficking and fraud
were taking place on the encrypted messaging app. And this
is from The Guardian with a little more detail. Police
will quote aggressively pursue people who make and spread the
material in a seven month campaign due to start on Wednesday,
the Yonhap news agency said, with a focus on those
who exploit children and teenagers. After a long struggle to
(02:32):
stamp out molka, secretly filmed material of a sexual nature,
South Korea is now battling a wave of deep fake
videos targeting unspecified individuals, which have been rapidly
spreading through social media, the president told a cabinet meeting. According
to his office, many victims are minors and most perpetrators
have also been identified as teenagers. He called on authorities
(02:54):
to quote thoroughly investigate and address these digital sex crimes
to eradicate them. According to the country's police agency, two
hundred and ninety seven cases of deep fake crimes of
a sexual nature were reported in the first seven months
of the year, up from one hundred and eighty last
year and nearly double the number in twenty twenty one,
when data first began to be collated. Of the one
hundred and seventy-eight people charged, one hundred and thirteen
(03:16):
were teenagers. But the problem is believed to be more
serious than the official figures suggest. One Telegram chat room
has attracted about two hundred and twenty thousand members who
create and share deep fake images by doctoring photographs of
women and girls. South Korean media said the victims include
university students, teachers, and military personnel. One analysis
(03:38):
by South Korea's Hankyoreh newspaper highlighted Telegram channels
that were being used to share deep fakes of female
students, as well as high school and middle school students. The Korean
Teachers and Education Workers' Union said it had learned of
sexual deep fakes involving school students and had asked the
Education Ministry to investigate. The investigation into sexually explicit deep
fake images is expected to inflict further damage on Telegram's
(04:02):
reputation in South Korea, where the app was used to
operate an online sexual blackmail ring, which was a
whole different conversation that was super gross and super disgusting,
and hardly anybody was punished. So though this incident just
recently came to light, it isn't anything new, and it
sounded like they had been investigating it. It took them
a very long time to get to this point. In fact,
(04:23):
South Korea is saying that incidents like this have
created a need to declare a national emergency, though it hasn't
declared one yet. In South Korea alone, these deep fake sex crimes,
as they are called, have only increased. It's interesting because
it says teenagers were responsible for the majority of
these offenses, so there's a lot to be said. They
(04:44):
have talked about how there were men using their wives'
pictures as well; there was enough of that happening that they
found it, which is odd.
Speaker 3 (04:51):
I'm like, I've just been recently hearing.
Speaker 2 (04:54):
More and more stories of their own husbands doing this
to their wives, but then also taking videos, explicit videos after.
Speaker 3 (05:03):
Their wives are asleep.
Speaker 2 (05:04):
I don't know why this is popping up in all my stuff,
which is really concerning, and it makes me wonder how often this happens,
and it's just being talked about now and being found
out about. And again, though this incident is specifically talking
about South Korea, it's a bigger issue. We know this
all around the world. In fact, as I was researching,
I was trying to get some statistics. The first site
(05:27):
that popped up was a site completely dedicated to deep
fakes of celebrity porn videos. That was my first result, and,
like, they were advertising, like a Pornhub, saying, look,
we have all of these, come and find your video,
come and find the celebrity you want to watch, and
also how to find deep fakes online, like showing all
the taglines and where to find these taglines. So those
(05:48):
were the first that popped up. So yeah, I've messed up
my algorithm for y'all.
Speaker 3 (05:53):
I'm just kidding. Anyway, please, if you're listening to this,
police people, I'm not trying to find it. I was researching.
I'm only researching. For those surveilling me, you know, they
would have known what I'm talking about.
Speaker 2 (06:13):
Sorry, There's so much that we need to talk about.
For example, there are incidents here that happened in the US.
Speaker 3 (06:29):
I just haven't seen.
Speaker 2 (06:30):
It hasn't popped up on my feed, which is really unfortunate,
including a student creating deep fake porn of other
students at a New Jersey high school and another incident
in a Beverly Hills middle school. And I am very
very disturbed by both of these things, obviously, but the
fact that we're not talking about it, the fact that
there's not really much being done or handled here.
Speaker 4 (06:52):
Uh.
Speaker 2 (06:52):
And by the way, as I was writing
this one, the word deep fake kept getting caught by
my spell check, and I'm having to
add it to my dictionary, and I'm like, wow, Google's
not even up on the fact, even though they have a
whole AI company that's not doing well. Apparently, the Beverly
(07:14):
Hills Unified School District says it's working with the Beverly
Hills Police Department in investigating this incident, and those responsible
will face disciplinary action. A mother of one of
the students at the school said that many girls were targeted
by the deep fake pornographic images, and she goes on
to say that though her daughter was not one of the victims,
she still feels victimized knowing that these are her friends.
(07:36):
And of course the school came out with statements
that they would thoroughly punish this: this is a form of
bullying and we understand this, this is not going to
be tolerated in the schools, and if other people have
been involved, or you find images or know that someone
has those images, please let us know. So all those
things are happening.
Speaker 3 (07:55):
However, again there are no real state laws about this.
Speaker 2 (08:02):
We know that it's difficult to go after people for
bullying online, and we can show that this is bullying. We
know it's difficult to go after people for revenge
porn, where a tape was made of them, maybe
they didn't know it was being made, or
they did know but only consented to personal use,
and then it's used against them or to blackmail
(08:25):
them. And again, there's no real law stopping this. Like,
there are laws that you can try to interpret, but that's
a really sleazy game to have to play for someone who's been
a victim of this. And again, if you're an up
to date listener, you may remember us talking about the
Taylor Swift deep fake that caused a flurry of conversation
earlier this year, and how that did initially push lawmakers
(08:47):
to take a deeper look and present a bill to
make a law allowing victims to sue the people
who have created these deep fakes under some circumstances. As
reported by Time magazine, they say the Disrupt Explicit Forged
Images and Non-Consensual Edits Act, or the DEFIANCE Act, allows
victims to sue if those who created the deep fakes
(09:10):
knew or recklessly disregarded that the victim did not consent
to its making. But the reality is there's no federal
law making this illegal at all. And even with this law,
it's very limiting, and honestly, I'm assuming only rich people,
people with some type of income, are going to be
able to take advantage of this type of law and legislation.
(09:31):
Which, again, what? Mm hm. That whole take is
like, they're allowed to sue, yeah, if it was recklessly disregarded.
That's almost an excuse.
Speaker 4 (09:44):
I know.
Speaker 1 (09:44):
It's like, I wasn't allowed to sue before? I feel
like I should have already had that right now, too. Right?
Speaker 3 (09:52):
And all they did was make a mistake.
Speaker 2 (09:53):
It sounds like what we're saying is that the
person who created this just made a mistake.
Speaker 3 (09:56):
It was just, it was just a little bit reckless.
Speaker 1 (10:00):
Yeah, it is interesting because our laws are so behind
on a lot of technology things. But as you've probably
heard, listeners, also related to Taylor Swift, Donald Trump released
some AI images of her and they're trying to figure
out how to deal with that because there is a
law that says you can't do that. But it's like
(10:23):
not for technology, right, So we're just behind. We're just
behind everybody. Yeah, everybody's behind.
Speaker 2 (10:30):
I mean the fact that South Korea does have a
digital sex crime unit, that's great, but it also was
because there have been so many complaints of women being like, hey,
I'm being illegally taped. Yeah, and so they still don't
really stop that, and it's only like a fine, right?
Speaker 1 (10:47):
And we've talked about this, like I believe we talked
about it in that episode with Bridget, that this kind
of thing is largely impacting women,
and I think that points to a much
larger issue, like a systemic societal issue. But the fact
is it is impacting more women. And it's once again
(11:08):
another instance where it seems like a lot of the,
perhaps, men in power are sort of like, oh, well, that's
the price of being online as a woman. That's just
the price of existing as a woman. Doesn't impact me,
So it's not a priority.
Speaker 3 (11:23):
I will say.
Speaker 2 (11:24):
The one thing that might happen now: the Lincoln
Project apparently released an ad, which I'm very surprised by,
of an AI image of Donald Trump crying. So I
have a feeling Donald Trump's gonna be pissed about that, yeah,
and his campaign. So they're going to try to do
something now that that's been released.
Speaker 3 (11:44):
But again, and this is where I felt icky and
like I don't know. I don't I will.
Speaker 2 (11:49):
Say, from the jump, I know nothing about the Lincoln Project,
and from what I understand, they are very capitalistic opportunists
that don't like Donald Trump.
Speaker 3 (12:00):
So I don't know what that means. I don't believe that.
Speaker 2 (12:03):
I think, if I looked deeper into it, I
know enough that I don't agree with what they are
doing completely. Like, I think some of the things could
be funny, but at the same time, I'm like, hmm,
you are not my cup of tea, I have a
feeling we are on opposing sides still, type of conversation,
so put that in as a caveat. And with that, like,
I don't like AI images anyway, and I feel like
it should just be done with, Like there's no need
(12:25):
for any of this unless you're making me a cute
image of a raccoon or a squid with a hat,
or an octopus with a hat, or a cute mini
animal that I'm like, I wish this existed.
Speaker 3 (12:38):
I don't understand.
Speaker 1 (12:39):
Have I told you about search?
Speaker 2 (12:41):
Yes, you have. You have yet to send me, you have
not sent me any pictures, AI or real, so I'm
still waiting. I've watched a clip with the octopus bargaining.
We've all talked about this so many times. It's been part of
our conversation even about deep fakes. But yeah, either way,
it's still kind of concerning. But as a reminder,
(13:03):
here's some more information from Time about deep fakes. Non
consensual pornographic deep fakes are alarmingly easy to access and create.
Starting at the very top, there's a search engine where
you can search how do I make a deep fake?
Then they will give you a bunch of links. Carrie Goldberg,
an attorney who represents tech abuse victims, previously told Time
that deep fake software takes a person's photos and face-swaps
(13:25):
them onto pornographic videos, making it appear as if the
subject is partaking in sexual acts, and a twenty nineteen
study found that ninety six percent of all deep fake
videos were non consensual pornography. Ninety six percent. Yeah, I
can't imagine what the number's gonna be now. Well, I guess the
(13:46):
percentage may not change, because that's how percentages work,
but the actual numbers, I bet, are huge. And yeah,
there's currently no law or legislation to regulate deep
fakes federally, but there is an act to research it.
So here's some information from Thomson Reuters dot com. Currently,
there's no comprehensive enacted federal legislation in the United States
(14:10):
that bans or even regulates deep fakes. However, the Identifying
Outputs of Generative Adversarial Networks Act requires the Director of
the National Science Foundation to support research for the development
of measurements and standards needed to examine GAN outputs
and any other comparable techniques developed in the future. Real talk,
(14:33):
I don't know what that's telling me other than they're
just researching.
Speaker 3 (14:37):
Am I wrong?
Speaker 1 (14:38):
They're saying that they're gonna, like, yeah, they're gonna
support research for maybe coming up with standards, maybe, later.
Speaker 3 (14:48):
That's what they do, what they used to
do before it got, like, outlawed.
Speaker 1 (14:52):
With gun violence all the time, they're like, well, we'll
look into it, and then they never do.
Speaker 2 (14:56):
So we're just on the same route as
gun regulation.
Speaker 3 (15:00):
Great, great. Sounds great.
Speaker 2 (15:03):
And back to that possible federal legislation we mentioned earlier. Thomson Reuters
writes: Congress is considering additional legislation that, if passed, would
regulate the creation, disclosure, and dissemination of deep fakes. Some
of the legislation includes the Deep Fake Report Act of
twenty nineteen, which requires the Science and Technology Director in
(15:25):
the US Department of Homeland Security to report at specified
intervals on the state of digital content forgery technology. The
Deep Fakes Accountability Act, which aims to protect national security
against the threats posed by deep fake technology and to
provide legal recourse to victims of harmful deep fakes. The
Defiance Act of twenty twenty four, which would improve rights
(15:49):
to relief for individuals affected by non consensual activities involving
intimate digital forgeries and for other purposes. And the Protecting
Consumers from Deceptive AI Act, which requires the
National Institute of Standards and Technology to establish a task force
to facilitate and inform the development of technical standards and
(16:09):
guidelines relating to the identification of content created by GenAI,
to ensure that audio or visual content created or substantially
modified by GenAI includes a disclosure acknowledging the GenAI
origin of such content, and for other purposes. So specifically,
we're talking about four different acts that could help possibly regulate. Now,
(16:35):
this again comes with the fact that we know the
Supreme Court has essentially said standards of regulation are no longer
necessary in the US, and they don't have to
regulate anything. So that's a little concerning, so just keep
that as a reminder. Also, these are four different ways
of looking at it. It really doesn't seem like it's
helping victims so much as it's punishing companies.
Speaker 3 (16:56):
But not even that hard, right.
Speaker 1 (16:58):
And we've seen that a million times, like a company
paying a fine that for them is a drop in
the bucket. To us, who don't have millions
of dollars, if not billions, it might look like, oh my god.
But to them that's like nothing. And then they get
to continue doing what they're doing, right, and probably making
enough money off of it to cover that fine. Right,
(17:21):
But they get to continue what they're doing.
Speaker 2 (17:23):
Yeah, it makes more sense, money-wise, just to pay
the fine and continue on, like they're making so much.
And then again, the Protecting Consumers from Deceptive
AI Act.
Speaker 3 (17:32):
I really have a feeling this is going to go
along the lines of the standards of like.
Speaker 2 (17:36):
Oh, let's put, uh, an FDA-level standard on this so
we can protect people. Nah, we don't need it.
Speaker 3 (17:43):
They're smart. If not, oh well they'll get over it.
Speaker 4 (17:46):
Yeah.
Speaker 1 (17:57):
I bet there's going to be some arguments about, I
know there already have been, but I bet there are continued
arguments about, like, freedom of speech and this, right,
infringing on freedom of speech, and.
Speaker 2 (18:08):
There are so many conversations, and then even when you start,
like, going down the slippery slope of, like, but it's
not really harming these people, we're not actually touching them in the real
world, and better that we do this than actually go
kidnap somebody.
Speaker 3 (18:19):
I mean that is absolutely a conversation.
Speaker 2 (18:22):
Yeah. I remember an SVU episode, and this kind of
reminds me of it, in which they are shooting porn
and the police get a report saying that a fifteen-year-old
is having to shoot porn, so they go and
bust it. It turns out she was actually nineteen,
and they just used, not, they didn't say AI, they
said Photoshop, which is essentially the same thing. They used
Photoshop to de-age her, and they're like, look at
(18:44):
this technology. It's gonna save so many children.
Speaker 3 (18:46):
You're like, oh, no, that's not what's happening.
Speaker 2 (18:51):
And the debate, it wasn't that SVU wasn't trying. Law
and Order: SVU, y'all, if you know, you know, I gotta
tell you, I love you, yes. But like,
they weren't arguing for either side, they were like, ehh.
And this is the bigger debate of, like, what is
going to happen with the children and the porn industry
and child abuse and all this. And by the way,
as big as that conversation about save the children is,
(19:14):
This is not being mentioned much. Yeah, this is not
being mentioned much. Uh, Like, I'm really wondering the level
of like, is anybody taking this seriously?
Speaker 3 (19:28):
Why are you not taking this seriously? Literally?
Speaker 2 (19:30):
They were using this type of technology in South Korea
to blackmail women and for sexual coercion. Like, they
were using this to be like, if you don't do
this with me, I'm gonna release this and shame you.
And I will say, this is in South Korea, and in
South Korea, they're going to go after
the women more than they are going to
go after the perpetrators, no matter what, no matter what,
(19:52):
just because the victim blaming is heavy,
and it's not just in South Korea. Just existing, like
people saying, you put her on the internet, what did
you expect? You have a picture on the internet, what
did you expect? That level of conversation is so disheartening,
but it's too real.
Speaker 3 (20:14):
It's just real.
Speaker 2 (20:15):
And the fact is that they are not caring enough, that
all they can do is say, we'll give you the power
to sue.
Speaker 3 (20:20):
How about that? How about that?
Speaker 2 (20:22):
And you may or may not win because you have
to prove that it was a reckless disregard of your safety.
Speaker 3 (20:30):
What does that mean?
Speaker 1 (20:32):
That's, I mean, I can already tell that's designed to
be difficult to prove.
Speaker 2 (20:39):
You really think you're doing something here, like what is
this that you're doing? So I will say as of today,
around ten states in the US have placed their own
regulations on it. But three of those states seemed only
to be concerned about how candidates and elections will be affected,
so it literally specifies that it applies if it damages the
(20:59):
candidate's campaign. Three out of ten are those. I
was like, wait, wow, wait, let's all run for
office, right. I guess that's the only way you're going
to be protected. Do it every year, no matter what,
just to keep coming in. And then if you win, sorry.
(21:21):
And around five of them are specific to usage of
sexual content, but more specifically related to minors, which
is necessary, yes, but I feel like, again.
Speaker 3 (21:32):
A victim is a victim is a victim.
Speaker 2 (21:35):
Yeah, so they had to put that under like, well,
first let's care about the children, right, then we'll talk
about everything else. Again, this is a bit confusing
in general, and the others, when I looked them up,
were just generic. It was just a generic conversation about,
like, don't do this, don't do that. But really,
nobody talked about jail time. No one
(21:57):
really talks about criminal charges.
Speaker 4 (21:59):
Mm hmmm.
Speaker 2 (22:00):
So there are no conversations there, because we know that if you
possess child porn, you should be on the sexual offender
registry, which has its goods and bads too, y'all. Like
a whole separate conversation that we can have. And I'm
not saying that we don't need to tell people about
sexual offenders.
Speaker 3 (22:17):
We definitely do.
Speaker 2 (22:19):
But this has been a power play and a
racist play that needs to be talked about as well,
because people can come off of it really quickly if
they have money. Anyway. Oh, the system is corrupt. The
system is corrupt. But AI is not helping in the
least. The fact that the first thing that popped
up on my site was, like, here, you have access
(22:39):
to all this porn, was so deeply, like,
I just kind of sat there and stared at it.
Speaker 1 (22:45):
Yeah.
Speaker 3 (22:45):
First I was like, I messed up. I done messed up.
That was that moment. I was like, oh, I done
messed up.
Speaker 2 (22:51):
Second, I was like, why, why is this at
the top of the search? Shouldn't it be, like, something else?
Like, shouldn't it be, like, yeah, this is a concern?
And when you look it up, it does show the
South Korean incident as the biggest headline, which I think
is interesting as well, because we're so quick to, like,
pinpoint other areas first when there's a huge issue here, y'all.
Speaker 3 (23:16):
It's a lot, it.
Speaker 1 (23:18):
Is, and we're definitely going to have to come back
and revisit this because technology is changing so quickly. And
I do think, as Bridget always puts it so well in
the episodes she comes on and does with us, I
think a lot of the victim blaming also comes back
to: you agreed to that thing, the contract, that you're
(23:43):
going to use this, you're going to use this product online,
and therefore you basically have agreed to it.
Speaker 3 (23:50):
But it's not true.
Speaker 1 (23:51):
Like the way that we live, you can't live without
a lot of those things. You can't succeed without a
lot of those things. And to say, like, a company
is just kind of shrugging at you and your safety
and the way that you exist online, I don't think
is legit. I don't think that should be a thing.
(24:12):
So yeah, we have a lot of a lot of
conversations that we need to have about this, and we will.
But in the meantime, listeners, if you have any thoughts
on this, any any stories or topics we need to cover,
any resources, please let us know. You can email us at
Stuffani mom and Stuff at iHeartMedia dot com. You can
(24:32):
find us on Twitter at mom Stuff Podcast, or on
Instagram and TikTok at Stuff I Never Told You. You
can find us on YouTube as well. We have
a TeePublic store, and we have a book you
can get wherever you get your books. Thanks as always to
our super producer Christina, our executive producer Maya, and our contributor Joey.
Thank you, and thanks to you for listening. Stuff Mom
Never Told You is a production of iHeartRadio. For more
podcasts from iHeartRadio, you can check out the
iHeartRadio app, Apple Podcasts, or wherever you listen to your
(24:54):
favorite shows.