February 27, 2025 · 36 mins

In this episode Leon from SHV talks to Bec from Evolve Education about Artificial Intelligence and deepfakes. They discuss how this technology impacts the lives and sexual interactions of young people.

Resources: the eSafety website; the Australian Centre to Counter Child Exploitation; Kerrin Bradfield's Respect Collective (Bradfield, K. (2022). Image Based Abuse: Recognition, Education and Healing. Sexology in Practice Symposium, 24-25 September, online.); Addressing deepfake image-based abuse; AI 'nudify' sites are being sued for victimising people. How can we battle deepfake abuse?; Beacon.
SHV Schools, Community and Disability
team. We run classes for all year levels

(00:44):
on bodies, growing up, puberty, sex,
reproduction and relationships. This
podcast is for parents and carers of
school-aged children and people working
with young people. This is so we can
share what goes on in a relationships and
sexuality education class and help
support the conversations you might be
trying to have with young people, whether
it's at home or through your work.

(01:06):
Today, I'll be talking to Bec Martin from
Evolve Education. They provide education
on online safety and digital wellbeing to
young people and adults across Australia.
We've had Bec on the podcast a few times
before, and we've brought her back so
that we can talk about something that we
are hearing more about in schools and on
the news, AI deepfakes.

(01:28):
This podcast was done in front of SHV
educators, so you might hear Bec refer
to people in the room. Before we chat
with Bec, let's quickly define AI.
AI, or artificial intelligence, refers to
technology that performs tasks typically
requiring human intelligence. It's used
in many areas such as understanding
written and spoken language, analyzing

(01:49):
data, recognizing images, and making
recommendations. AI is commonly
experienced through voice assistants like
Siri and Alexa. Generative
AI, or gen AI, on the other hand,
goes a step further by creating entirely
new content, such as creating images and
videos based on prompts, or services like

(02:10):
ChatGPT for answering questions and
performing tasks. This conversation
speaks about image-based abuse and child
sexual abuse material, or CSAM. So take
care when listening. Now, let's dive into
how young people are using gen AI.
So, people may have heard of some terms
around AI, ones that are sometimes linked

(02:31):
to sex or sexuality, but they may not
know what these terms mean. Can you
explain what AI deepfakes are?
So I think a really good place to start
is just talking about deepfakes in
general. So if you've seen the latest
Indiana Jones movie, you would notice
that Harrison Ford appeared with his face
looking 30 years younger. And that's

(02:52):
an AI deepfake at play, what we call face
swapping. So taking pictures of Harrison
Ford when he's younger and superimposing
them onto his older self's
face. You'll also see
popular examples on platforms like
TikTok. So there's a deepfake Tom Cruise
who has 5 million followers. It's not

(03:13):
really Tom Cruise. Tom Cruise's official
TikTok page has only 1 million followers.
Right. But very convincing and
uses large sets of
data, large sets of Tom Cruise's face to
create, you know, a very realistic deepfake image using what we call generative
AI. So you've given the examples of

(03:34):
Harrison Ford and Tom Cruise, both celebrities, but what other kinds of images are they creating? I think firstly, just a description of deepfakes. A deepfake is an image or video where someone appears to be doing or saying things that they haven't actually done. Some other examples are where deepfakes are being used to scam people. So you might
hear students or your own children

(03:55):
talking about Hamish Blake selling
weight loss gummies on platforms like
Instagram, or Mr. Beast doing
an iPhone giveaway for $2. Things that
are sort of screaming at us as adults
that are too good to be true. And
for our children and young people, it's
not a case anymore of seeing is

(04:16):
believing. We need to be really critical
that what we're seeing isn't, in fact,
real. Are young people able to easily access the technology to generate images? Yeah, they are. So
there's a range of ways. And what we're going into now, sort of deepfakes and deepfake nudes, is where the problem

(04:36):
really is for schools. Deepfake nudes are very easy to generate. So another popular case is Taylor Swift. You might remember she had a deepfake nude go viral on X last year. It was 19
hours before X took that image
down and it was only after it was mass
reported by her fans. It had

(04:59):
7 million views in
19 hours and 260,000
likes. And what's important to note with
that particular case is that it came
after months of misogynistic online
trolling because she dared to attend
her boyfriend's games. So it was sort
of in an attempt to put her back in her

(05:19):
box. Right. With platforms
like X or any other social media
platform, do they have rules around those kinds of generated images, about making sure that they're not there, or is it just a free-for-all on the platforms? Pretty much an unregulated free-for-all. If you jump onto
Instagram's explore page, type in any
celebrity and write in AI, you'll get a

(05:41):
really good understanding of just how
prolific these types of images are.
I think too, it's important to note that
these images can be really easily created
simply by taking existing photos of
people. So your LinkedIn headshot,
photos of your holidays on Facebook,
those innocent fully clothed images

(06:02):
can be taken and put through an Undress
or Nudify app, have the clothes removed
and that then creates a very realistic
nude image of somebody without obviously
their consent. So if I think about the
people who probably listen to our podcast
either parents or carers or professionals
working in the field, the risk for
them is if they're posting or

(06:23):
even like storing images of young people
on their social media or on their phones
or that, there's a potential that they can then be used? Yeah, and so that's something that we've sort of really advocated for over the past few years, you know, the risks of sharenting, or parents oversharing images of their kids, because, you know, the technology has rapidly changed. So in the past, when

(06:45):
people were creating deepfake morphology or deepfake images,
it was a slow process. It was done using
programs like Photoshop, cutting out the
face, superimposing it on a body. And
child exploitation material was created
in the same way. We know now
because of how easy these apps are to
use and how quickly people can get

(07:08):
access to images of children, those
images can also be used through these
apps to create what's known as synthetic
child sexual abuse material. Now when we
talk to the AFP and the
Australian Centre to Counter Child
Exploitation, the problem that they're
facing is with the proliferation of
so much synthetic child exploitation

(07:29):
material, they are finding it really hard
to identify children that are in actual
cases of abuse because trying to
determine what is a photo of a child
who's in a current situation where
they're being abused and what is a
synthetic AI image is really challenging.
And so that's why we really heavily
caution parents against over-sharing

(07:51):
images of their kids online and making
sure that if you do share pictures of
your kids online, and we are, we're proud
parents, we want to share and celebrate
those moments with friends and family,
making sure that we do things like having
our privacy settings really tightened up,
being aware of who follows our content
and the type of content we might be
posting on social media platforms. If

(08:12):
people are generating these images using
AI, using existing photos of people,
Is this classed as image-based abuse?
Absolutely, and look, the government, and
particularly eSafety, are doing a lot to
strengthen the laws and legislation that make these criminal harms. So creating and distributing these

(08:33):
types of pictures, especially when it's
of people who are under the age of 18, is
absolutely image-based abuse. And there's
lots of different laws that can be
applied in situations, particularly in
schools, where this type of harm has
occurred. We've got laws relating to
using a carriage service to intimidate,
threaten, or harass. So sometimes it can

(08:54):
be followed through with cyberbullying
laws. We've got laws around producing and
distributing child sexual abuse material
as well. And then we've also got new
amendments that came through last year
around classifying deepfake nudes as a type of IBA.
Because AI-generated images fit in
with altered images, which is already

(09:16):
existing in that image-based abuse law,
but people might not be aware that things that are created by AI and
even drawings and things like that all
fit in under that same law, so there are
protections there. Yeah, and I think for
parents and carers, image-based abuse
is a tricky term because it's an
umbrella term that covers a lot of
different types of images, how

(09:38):
those images are taken, how those images
might be shared, but the really clear
underlying, I suppose, message with
IBA is it's non-consensual. So that
might be non-consensually taking a photo
of somebody, like what we see in
upskirting or downblousing. It might be non-consensually sharing an image that was originally shared with you consensually.

(09:59):
It can contain things like sexual
extortion, people that might be getting
catfished and sharing a nude and then
being extorted for money. That's also a
type of IBA. And now we've got sort of
the fourth quadrant. If you look, Kerrin Bradfield has a beautiful graph that
explains in detail different types of
IBA. Digitally altered
images as well. Yeah. So

(10:21):
there's a little bit there in terms of
the legal ramifications that might appear
for a young person. But you've also
touched on something that's really
important, which is potential ethical
issues. So what are some of the ethical
issues around generative AI and deepfakes?
I think the thing we need to be really
mindful of with young people is
when we look at motivations of adults who

(10:42):
perpetrate image-based abuse, they're not
the same as teenagers who might be
engaging in that sort of behavior. Our
teens have an underdeveloped prefrontal cortex, which might cause them to make some impulsive decisions, and when you pair that with how fast you can use these apps, you can see that there becomes a
problem, because we're not necessarily

(11:03):
taking the time to think about those
long-term consequences. We've also got a
reward system that's really wanting us to
fit in with our peers, and so sending it
off to the boys, for example, for a bit
of a laugh can lead to some pretty
catastrophic decisions in short amounts
of time. And so I think, when you consider as well that 50% of

(11:25):
year nine to year 12 students don't know
that image-based abuse is even illegal,
you know, we're so quick to sort of whack
them when they're not necessarily aware
that what they're doing is wrong. We need
to provide education to make sure that
they understand the impacts of the types
of things that they do online. The other
statistic that's worth noting is that 48%
of year nine to 12 students report that

(11:47):
they've been sent a nude that they didn't ask for, and 35% of 16- to 19-year-olds have experienced image-based abuse. So I think it's important to note that this technology is a symptom of a larger
problem, which is that we have
misogyny at play here, and also things
like when there's gender inequality and

(12:09):
rigid gender stereotypes, and then you
have the availability of these types of
programs, it can cause a lot of harm. Yeah, and then obviously
the important thing that you've touched
on a few times is about the education and
so preventative programs, ones that you might offer or we might offer when we get into schools, are really important because
it helps young people to understand what

(12:30):
might actually be problematic like things
like image-based abuse or might be
illegal rather than just them having no
idea at all and then making some poor
decisions later on. Yeah, and I think it's also worth noting that this is a gendered issue, so image-based abuse using deepfake nudes disproportionately affects girls and women. Julie Inman Grant from the

(12:52):
eSafety Commission reported that 99% of deepfake nudes feature young girls and women.
What kind of factors lead to a young person taking part in these generated images, or creating these images of other young people? So I
think we touched on some of that before

(13:12):
when we talked about that prefrontal
cortex. And when faced with a choice,
teens might be driven sort of by the
immediate emotional or social rewards,
like peer approval or the excitement, rather than weighing up the long-term consequences of their actions.
Sometimes we hear from kids or
instigators that justify their

(13:34):
actions as just a joke. We know that jokes and harmful behaviour can sometimes blur, with teens not grasping the emotional or legal
risks of creating and sharing this
type of content. And then we know as
well, again, to circle back to sort of misogyny, there's that rhetoric from influencers like Tate, for example, that promotes harmful attitudes, like the

(13:56):
beliefs that men should have ownership
over women or that women's bodies are
objects to be exploited or
even things like the objectification of
women online can lead to some pretty
confusing messaging for our young people
about what is and isn't okay. And
so I think when you take all of those

(14:16):
things, you know, impulsivity, being shaped by misogynistic or sexist views, and the readily available software and programs that allow us to not only create but share very quickly, it can be a tricky one. The other thing I'll say regarding

(14:36):
kids, and I'm sure people in the room
who work with students and talk to
teachers will hear this all the time, is when we have online harms and disclosures of online harms in group situations, whether it's things shared through group chats. Often,
there'll be people that are involved who
might not surprise you, but more often

(14:56):
than not, you'll hear from the kids who
wouldn't normally say boo in the
classroom being involved and
participating in some of these harms
online. And so, yes, there is a
connection between, you know, in cases like cyberbullying, where people that are targeted, you know, offline will
also be targeted online and instigators
who target offline will also target

(15:18):
online. You sometimes get kids that would
never behave like that in an offline
space engaging in really damaging online
behaviors and what that shows us is there
is still a massive disconnect between
what we do in the real world and what we
do online and that can be tricky for
teenagers because they don't view those
two spaces as different. They view them

(15:38):
as the same. But we know that there's that real disconnect, where they haven't realised the true impact of what they're doing and how that might hurt somebody
else. Do you think that with those young
people that might surprise us that are
engaging in it, is the reason for
engaging in it still based on that peer
pressure and trying to fit in? I think

(16:00):
peer pressure and trying to fit in is a
huge problem in this space. And something
else that comes up a lot when we talk
about encouraging upstander behaviour,
how we position our boys and young men to be advocates and allies for gender equality and for these types of issues, is that one of the major roadblocks, when we talk particularly to boys, is they are

(16:22):
terrified of being labelled a snitch or a
dobber. And so we need
to work through those problems with young
people so that we remove those roadblocks
to them speaking up. And we also need to
be careful as parents that when our kids
do come to us for help, that in seeking a
solution or standing up for someone else,
that we don't inadvertently throw our own

(16:42):
child under the bus with their peers. So
in a lot of cases where we have group
chats happening on WhatsApp, maybe your
child's come forward and said, hey, I've
seen this photo being flicked around in a
group chat, or the chat's really toxic,
and they've gone straight to the school, that can have a really damaging outcome for their child. So we need to make sure

(17:03):
that we work in consultation with our
kids and we do the right thing, but we
also do it in a way that's not going to
harm our child. For professionals working in the space, whether it's preventative, as I was talking about, or even reactionary, is there a risk, if we target the group of young boys and talk to the group of young boys only, that that

(17:24):
can either exacerbate the issue or just
make them feel victimised in this space?
Yep, totally agree with what you're
saying. So when we do discuss these
types of harms, we need to do it in a
collective way. It's about how can we, as
a community, collectively respect each
other online. And even things like when
we're doing examples, creating scenarios

(17:45):
for students, we want to use gender-neutral language; we don't want to create
an atmosphere of defensiveness before
we've even started. And so part of
positioning, you know, young boys
as being a solution to the problem is
not making them feel like they're likely
to become a perpetrator. So it's like
highlighting the fact that it is a

(18:06):
gendered issue without that being the
only thing that you talk about for your
hour and a half session with them. And I
think that's the real difficulty in this
space is because... And look, the flip
side of this too, we're doing a lot of
work on sextortion at the moment, which
is also a gendered issue where boys are
disproportionately targeted by that harm,
is you're dealing with gendered issues,
but the way that we educate in a

(18:27):
preventative way actually needs to not be
gendered. It needs to be very gender
neutral. And I think part of the problem,
if we talk about misogyny and the
manosphere, boys don't become radicalised and then end up in those spaces. They're seeking to understand, seeking explanations for how they're feeling. They go into those spaces, and that's where they become radicalised. And so to come in

(18:48):
with this sort of finger wagging, 'you're all doing this, and this is your fault, and these are harms that your gender is perpetrating', is really damaging and does nothing to help the
problem. And it's really difficult to not
do that. It's so easy to go into that
space and do the finger wag and do the
finger point, but as we say, we know it's
damaging and so it's difficult to

(19:08):
sometimes, I think, take that step back
and say, right, well, I've just got to go
into this space and just educate and try
to get everyone to come on board with the
message without having someone switch off
because I've said all of you are the same
and you're all going to do this thing
that's problematic. And I think too,
putting my teacher hat on, that's where
facilitating really great conversations
rather than coming in top down with this

(19:29):
is what to think, you know. Actually getting kids to explore examples of gender inequality in the everyday media, getting them to discuss things in a critical way together, and letting them come to their own conclusions about how our views are shaped towards

(19:50):
women can be a great way of doing that
too. So we've talked a lot about the
creation of the images, why people might
be creating these images. What kind of
impact can the creation of these images
have on the person who the image is of, or, I guess, on the people that are creating the images? Yes, so put
yourselves in the shoes of a young girl

(20:11):
that's had this shared around. Imagine
it's happened over the weekend and you've
got to come to school on Monday morning.
So even if friends and family know
that the images are fake, the emotional
impact on that victim is profound.
The fact that you can do nothing wrong

(20:31):
and have this done to you, the total loss
of autonomy, and also, as well, for victim-survivors and targets, it's knowing that the onus for taking care of that now is on you. So you've had
no hand in this image being created, but
now it's on you to manage monitoring
where it might appear, making sure that

(20:51):
it gets taken down, reporting it, going
through a legal process to make sure that
you get an outcome. And so
the feeling of violation, I think, for
targets is really high. The
other thing that we hear from kids
is sometimes a slight silver lining,
that when deep fake nudes were being

(21:11):
shared around, some students reported, 'Well, I'm glad that that happens, because at least now if my real nude image gets shared around, I can use that as an excuse, you know, that it's not real, it wasn't me, it's a deepfake.'
And so what can also cause these images
to spread really rapidly is kids love to
speculate. Is it real, or isn't it real?

(21:34):
And so that can actually cause these
images to go viral and to be flicked
around, because kids will look at them,
and they're trying to pick whether it's a
deep fake or real. And the excitement of
the potential that it is real, depending
on who the image is of, would keep that
conversation going and keep the
generation going as well. Yeah, and so some of the impacts, and I'll talk

(21:55):
about individual impacts, and then maybe the broader societal problems, are obviously emotional
and psychological distress, social
isolation and stigma, so kids not wanting
to come to school, not wanting to engage
in extracurricular activities or social
events which can further lead to their
feelings of isolation. Future

(22:16):
relationships: that feeling of 'who's done this? Why would someone do this?' That distrust that they might have in future relationships can impact things like intimacy, and mistrust or fear of future exploitation. And as well, sort of
legal and school-based challenges. So
sometimes when we talk to the
victims or targets of image-based abuse,

(22:37):
because we're navigating quite
complex legal structures,
depending on what types of laws the
local police use, whether it's federal
law or local law, the outcomes can
change. And depending on how equipped a
school is to deal with disclosures of
this type of online harm, it can sort
of amplify feelings of injustice if that

(23:00):
target doesn't feel like there's been
appropriate justice. And it's interesting
too because when we talk to teachers who
have been targeted by this type of online
harm, they share really similar feelings.
They feel like the law didn't support
them because it was teacher and student
based and that the school was trying to
protect the student but then the teacher
feels that they weren't sort of

(23:21):
necessarily protected or covered, or there might not have been a consequence or outcome they felt was just or appropriate.
When we talk about these sorts of harms,
we do sort of report them as discrete
forms of psychological injury, but I
think we do need to
make sure that we look at a broader
cultural context that's at play

(23:43):
and sort of consider these harms from a
more nuanced perspective because they are
symptomatic of a larger problem that's
happening in society at the moment. Yeah.
I guess the big question is, what can people do to support someone who's had this happen to them? And maybe if we start by talking about, what can schools do?
There's sort of two arms to this. And

(24:04):
when we work with teachers, we
sort of look at it through two lenses.
There's always prevention, and then
there's responding. And usually schools
come to us because they're responding,
they're reacting, something's happened.
And so we know as well an important point
is when we do our programs we often start
with responding because teachers don't
want to have a conversation about these

(24:26):
issues because they're often scared of
what might come up in the classroom. And
so building teacher capacity to feel
really capable in responding to
disclosures of online harm actually needs
to come first before we put in place
preventative programs like digital
citizenship or digital safety
workshops so that everyone feels
comfortable about what things can come up
and you would know all sorts of things

(24:48):
can come up in a session. So what can
schools do? Well, teachers, especially in times of panic, and especially as a parent, you might be on fire on the inside, but we need to be cool, calm and collected, and what can help do that is a script for that time of panic. And so clear messaging around thanking that young person for coming to you: 'I'm so

(25:08):
glad you told me. I believe you. And really importantly, this is not your fault', because we know that
particularly again from girls and women,
even in cases where they've done nothing
wrong, they will still exhibit victim
blaming attitudes towards themselves. And
I think, you know, helping young people
collect evidence is important but of
course with any type of online harm that

(25:30):
features nude images of children, we need
to be really careful that as teachers and parents we never collect those images ourselves. For schools,
eSafety has a fantastic toolkit for
schools. We help teachers implement that
and embed it across their whole school
setting where you can triage
and respond to different types of

(25:51):
disclosures of online harm because what
you think is really serious, somebody
else might think is mild. And so
learning how to triage those disclosures
from mild to severe and then having clear
steps of how you're going to respond
because for a lot of teachers they don't
know how to respond. These harms might be
totally new to them, they might not even

(26:11):
be on social media. And so having a clear
pathway to work through is really
valuable. You might be in Victoria, where you'd be making a report for this type of harm to SOCIT and Victoria Police. The other
thing that can be forgotten, particularly by adults, is we are so protectionist in
nature. We are really quick to sort out

(26:32):
the problem online. We try and sort out
the problem for them at school and manage
that, and then we go, job done. But we
really need to monitor and continually
check in over the weeks following to make
sure that that young person is coping
okay because we know re-traumatisation
with resurfacing of these images, for
example, can be an ongoing problem. What
can the adults at home do to

(26:55):
support these young people? So when
we work with parents and carers, one of
the key messages we deliver through our
communicate chapter
is that speed and support are key.
In order to deal with these harms, you
need to know that they've happened, and
if there are barriers to kids speaking
up, they're not going to come to you for
help. So first and foremost, it's looking

(27:17):
at the barriers that stop teenagers in
particular from coming forward to us as
parents and carers. Those things are fear of removal of their device, fear of being labelled a dobber or a snitch, and this one: fear of being judged and shamed, 'how could you be so silly?' So as
parents, if we can remove some of those
barriers and let our young people know
that there's nothing that they could ever

(27:38):
do that would make us love them less, and
to come to them quickly, that's the first
way that we can be aware that these
images have happened. And I
think as well, there's a few
pathways and a few organisations out
there that can certainly help parents and
carers. Sometimes it can be really
confusing with online harms to know

(28:00):
whether you're reporting to eSafety or to the ACCCE. Julie Inman Grant
recently was talking about the reporting
pathways and said in a lot of cases young
people don't necessarily want to push for
criminal action or criminal
charges. They just want the content to be
taken down. So in a lot of cases where

(28:22):
nude images of under-18s have been
shared, you'd go to the ACCCE. Sometimes for
IBA we go to eSafety. That can feel
confusing for teachers and parents. Go to
either. They work hand in hand. And if
you go to the wrong organisation, it
doesn't matter, there is no wrong
organisation. They will give you a warm
referral to where you need to go. And
also deepfake nudes can be lodged

(28:44):
with NCMEC, which is an organisation that can create basically a
hash, like a thumbprint of that picture,
and scan social media platforms
like Instagram. So if that image surfaces
on those social media platforms, it's
recognized and taken down. It's a bit of
a whack-a-mole approach, but it's just

(29:04):
another option that you can offer to young people to help reassure them that there's support out there for them.
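As an aside for technically minded listeners: the 'hash, like a thumbprint' idea Bec describes can be illustrated with perceptual hashing. The sketch below is a minimal illustration using the open-source Python imagehash library, not NCMEC's actual system; the filenames and distance threshold are made up for the example.

```python
# A minimal sketch of hash-based image matching, assuming the open-source
# imagehash and Pillow libraries (pip install imagehash pillow).
# This only illustrates the general "thumbprint" idea; real removal
# services use their own hashing, and the matching is done against the
# hash, so the reported image itself never needs to be re-shared.
from PIL import Image
import imagehash

def fingerprint(path: str) -> imagehash.ImageHash:
    """Compute a perceptual hash: a short 'thumbprint' of the image."""
    return imagehash.phash(Image.open(path))

# Hashes of images that have been reported for removal
# ("reported_image.jpg" is a hypothetical file for this example).
blocked_hashes = {fingerprint("reported_image.jpg")}

def matches_blocked(path: str, max_distance: int = 5) -> bool:
    """Flag an upload whose hash is close to a blocked hash.

    Perceptual hashes change only slightly when an image is resized or
    re-compressed, so a small Hamming distance still counts as a match.
    """
    candidate = fingerprint(path)
    return any(candidate - blocked <= max_distance
               for blocked in blocked_hashes)
```

A platform can pre-compute hashes for everything on its block list and check each new upload against them, which is why a matched image can be recognised and taken down automatically wherever it resurfaces.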
I think hopefully what's comforting for schools or adults at home is that everything that you said just then is
really the messages that we give them
about any kind of disclosure, any kind of
harms that might appear for young people,
and it's just applying it to a new

(29:25):
context with a few extra specialist
organisations to follow up with. So
hopefully that makes it a little bit
easier to process what we're kind of
talking about today. Yeah, and I think
something else that's interesting, and I
think what we'll see in this space is a
real change, is a bit of a shift away from the digital footprint discussion, particularly in cases of

(29:47):
non-consensual nudes being shared or
sexual extortion. Kids getting really
strong messaging that if your nude is
shared, your life is over, is so, so
damaging. Because you can have the best
family support, wrap around, friends that
are really there for you. But if you feel
like society's going to judge you for the
rest of your life because your nude was
shared in year 10, that's a really hard

(30:09):
pill to swallow for a young person. So I
think we need to look at some of the ways
we discuss online safety topics as well in
the classroom. Yeah. I am conscious that
we have talked about a lot of the
negatives today. Are there any positives
to generative AI? I think there's lots of
positives out there with generative AI.
And I think what's interesting is

(30:29):
generative AI has even actually been used
to combat a lot of online harm. So I know
that eSafety are doing some pretty
interesting work in using bots to
perhaps flag to users, if they're viewing CSAM, that perhaps they're engaging with or looking at material that's potentially damaging or problematic. So I know
there's cases where AI is also

(30:51):
being utilised across some social media
platforms to try and identify
synthetic child sexual abuse material and
make sure that it's removed and doesn't
spread. So there's definitely positive applications there. Not to go back to the negative, but where there's a problem is that spaces like these Undress and Nudify apps, for example, get
massive investor funding, and so they

(31:13):
proliferate at a much faster rate than
the organizations who are trying to use
AI to combat those harms. You've got
really big, big piles of money, and
tech giants competing against people like eSafety, who are trying to utilise and encourage tech giants to do things
like safety by design to make sure that

(31:34):
they're creating programs that are safe
and ethical. Bec, are there any
last messages or anything-- is there
anything you want to say about generative
AI or this landscape for young people or
their trusted adults? If you have a look yourself at the App Store and Google Play and type in words like nudify

(31:55):
and undress, you'll get a really good
understanding of what's out there. A lot
of these apps are rated 4+. You'll see very quickly they might
be marketed as trying on different
outfits and toning up bodies, but you can
see very quickly that there is a
nefarious application that's being
promoted there. Particularly

(32:18):
if you look at the definition or like the
promotional quote that goes alongside
those apps. I can read you a couple, just so. Sure. So these are just a few that are readily available in the App Store. 'Nudify any girl with the power of AI. Just choose a body type and get a result in a few seconds.' 'Undress anyone instantly. Just upload a photo and the

(32:38):
Undress AI will remove the clothes within seconds.' 'Imagine wasting time taking her out on dates when you can just use [redacted site] to get her nudes.' So sending some pretty
confusing messages to our young people
about what's acceptable behavior, yeah,
and that's just in the like bio for the
app, like it doesn't actually get in

(32:58):
there, and it's a whole different field. And a lot of these apps
are free. And also, they're
promoted heavily. So interestingly, in
the last 12 months,
there's been a massive increase
in the places that these apps are
promoted in. So I think it's a 2,400% increase

(33:19):
in the advertising of these apps to young
people on the platforms that they use, X, Instagram, Reddit, for example. Yeah,
so increase in advertising means increase
in usage, which means increase in cases
of what we've talked about today.
Absolutely. They've been marketed towards
our young people aggressively, and there
is little to no explanation as to the

(33:40):
legal ramifications of their misuse. So
something else that parents and carers
can do is to really empower
their children and young people to be
upstanders, and to position our young
boys as allies and advocates for gender
equality and to speak up. If they see
this happening, often these images are
shared through group chats and oftentimes

(34:02):
lots of children are aware, and so
getting them to speak up. And also really
talking to our kids from an early age
about the need to be really critical of
the media that they're consuming, to be
mindful of rigid gender
stereotypes and gender inequality when they see them, can be a great way of

(34:22):
combating some of the ways that we might
be viewing women. It's that proactive
conversation of both technology and all
the stuff that we've talked about that
relates just to technology, but also the
larger conversation around sex or
sexuality or gender discussions that will
help that too. And so having those, as you say, in age-appropriate ways, which is going to look different depending on who you're talking to, can be really beneficial

(34:44):
in this conversation as well. And really
open conversations around consent. Bec,
this has been extremely informative and
interesting and engaging. And I know that
the listeners are going to feel the same
way. So I really appreciate your time.
Thanks for coming on the podcast. Thank
you so much for having me. Thanks to
Bec for that chat. Here are some key
things that stood out for me in this

(35:05):
discussion. Many young people don't know that creating deepfakes
is image-based abuse.
We need to rethink how we think about
digital footprints.
Reporting these incidents works in the
same way as other disclosures.
While this is a gendered issue, talk to

(35:26):
all young people in non-gendered ways and
encourage them to be upstanders.
Bec mentioned some resources, and we have
them and some others in the show notes.
You can find out more about Bec and
Evolve Education at
evolve-edu.com.au.
For more information about SHV, you can
visit shvic.org.au

(35:49):
or follow us on LinkedIn, YouTube,
Instagram, and TikTok. You can
contact us directly at
doingit@shvic.org.au.
Subscribe or follow the podcast wherever
you listen so you don't miss out. Thanks
for listening.