Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
You have probably been deluged, and delighted at the same time, as we all have if you're on social media, by all of the pictures parents post, particularly of the little kids going back to school, and then they'll do another one on the last day of school. It's all pretty sweet and charming, except,
(00:20):
of course, that we live in a time of dirtbags right and left, and a lot of parents now are reconsidering whether or not to post pictures of their kids up there, because AI makes it even easier to do disgusting things. And
(00:44):
we're joined for a few minutes this morning by Kathy Hadam, who's the communications director with Enough Is Enough and has been active on this issue. Kathy, good morning. Good morning to you. What's the danger here? They can take a picture of a kid, and it's one thing to try to trace them and find them and stalk them; that would be brutal. But another thing
(01:04):
they can do with AI now is distort their images and embarrass them, humiliate them, make them look like they're naked and all that.
Speaker 2 (01:15):
Right, yeah, you're absolutely right, and this is definitely an alarming trend. The New York Times recently featured an article on this. This has been going on now for a few years, unfortunately. But you know, like with any technology, and especially talking about artificial intelligence, it can be
(01:36):
used for good. But as we're discussing, it can also be used to exploit children in the most vile ways. And one of the ways this is being done is with what are called nudify apps. These apps are, unfortunately, extremely easy to access, and they're used to digitally strip a child, if you would, by using an image
(01:59):
that is uploaded. This image can be taken either from a photo that a parent posted of their child on a website, or someone just snaps a picture of a child walking by, and within minutes they have a photo of a child who is nude. And it's really, like I said, a disturbing trend that's
(02:20):
happening in schools around the country. Sadly, these images are being uploaded and shared exponentially, just to literally humiliate and destroy, you know, children.
Speaker 1 (02:31):
Is it just that, or is it also to potentially extort money out of the parents?
Speaker 2 (02:37):
Yeah, well, all of this, of course, can become any type of what we call artificial intelligence child sexual abuse material. That material can be used: a child can either be forced or coerced to send it, or, like I said, someone can create it by just snapping or using a photo, and then the child is later
(03:00):
extorted, what we call sextorted, into either sending more explicit images of themselves or sending money.
Speaker 1 (03:10):
YouTube is testing a new AI tool to determine if
a kid is watching in order to better protect them
from harmful content. That's a good step.
Speaker 2 (03:21):
Right, yes, yes, and it's a step we're certainly happy about, considering that it is one of the most widely used social platforms by teens. Almost nine in ten teens use YouTube. And so what they're doing is rolling out a new age verification system in the US, and it's using AI.
(03:43):
This is what we talk about when we talk about using it for good, to help differentiate between the adults and minors that are on the site, really based on the types of videos they're watching, because we know kids lie to get on these sites, even though most of them have a minimum age of thirteen. But this is certainly a good step in the right direction.
Speaker 1 (04:05):
Where would a parent go, or anybody go? Because you guys have new safety guidelines that you've posted. What site should we visit for that?
Speaker 2 (04:14):
Yes, certainly. So our website is enough dot org, but we also have a companion site called Internet Safety one oh one dot org. And when you go there, you'll see that our focus at Enough Is Enough is on prevention, and so we want to educate and equip parents and caregivers everywhere with the tools and resources and even the latest
(04:36):
trends like we're talking about today. So go to enough dot org or Internet Safety one oh one dot org, and even sign up for our newsletter. That way you can receive, you know, weekly or throughout the month, information on all these trends and the most helpful resources, including our guides on.
Speaker 1 (04:56):
These areas, right. Kathy, I'm not particularly tech-savvy, but I think one of the focuses of our law enforcement people has to be finding a way to track these disgusting people down. And it's very difficult. But if you can find the source of
(05:19):
these deepfake nudes, for example, there ought to be
severe penalties.
Speaker 2 (05:23):
Well, there are, there are, and I'll tell you, there's been tremendous political and public pressure from advocacy groups like us and from parents everywhere who've been really focused on this for years, letting, you know, our policymakers know that we've really got to do a better job of things like verifying ages. And big tech plays a tremendous
(05:47):
role in this. They've really got to step up to the plate and make their apps, their equipment, and their digital devices safer by design, so parents aren't left having to shoulder the burden of doing all this, which is really what's happening now.
Speaker 1 (06:00):
All right now, Kathy, thanks for the info this morning. Good stuff. Appreciate it. Absolutely. Have a good one. Kathy Hadam from Enough Is Enough.