
July 2, 2025 14 mins

AI chatbots have crept into our lives – and some abusers are weaponising them.

By feeding intimate details about their partners into tools like ChatGPT, they’re producing “performance reviews” that shame, degrade and control.

Today, writer Madison Griffiths on this new form of tech-enabled coercive control – and why ChatGPT always sides with the abuser.

If you enjoy 7am, the best way you can support us is by making a contribution at 7ampodcast.com.au/support.

 

Socials: Stay in touch with us on Instagram


Guest: Writer, artist and producer Madison Griffiths.

See omnystudio.com/listener for privacy information.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Hi, I'm Ruby Jones and you're listening to 7am. AI
chatbots have crept into our everyday lives, and now some
abusers are turning them into weapons by feeding intimate details
about their partners and relationships into tools like ChatGPT.
They're churning out things like performance reviews to shame, degrade,

(00:25):
and control their partners. Today, writer Madison Griffiths on this
new form of tech-enabled coercive control, and why ChatGPT
is biased towards the abuser. It's Thursday, July 3.

(00:48):
So, Madison, welcome back to 7am.

Speaker 2 (00:50):
Thank you so much for having me, Ruby.

Speaker 1 (00:52):
You have recently been looking into the intersection of artificial
intelligence and domestic abuse. Why did you start thinking about
that as an issue?

Speaker 2 (01:03):
I noticed unique behaviors emerging in a few of my
friends when it came to how they personally engaged with
ChatGPT. Namely, I noticed it being utilized as if
it were a therapist, and this made me particularly nervous
because, given that I report regularly on domestic violence, I'm
aware of the intricacies of coercive control. I didn't

(01:25):
imagine ChatGPT, then, to be too far out of the ballpark
of an abuser's methodology. So I then started asking around,
and I was incredibly unsurprised to find that its use
by those perpetrating abuse, particularly psychological abuse, appeared to me
an obvious phenomenon.

Speaker 1 (01:43):
So tell me, then, about the people who you spoke
to and the stories they told you about this appearing
in their relationships.

Speaker 2 (01:50):
Yes. So Molly was one such woman who reached out
to me, and she detailed to me instances of receiving
really large, onerous Word documents where her ex-partner had
essentially created what appeared to be a performance review of
Molly, formulated on ChatGPT. The generative AI model

(02:13):
was fed incredibly intimate details about Molly's relationship and medical history.
This document critiqued Molly tirelessly. It pathologized her to the
nth degree. It offered her reflective questions and exercises that
were put forward to Molly to suggest that this was
her only way to salvage the relationship. So essentially, the

(02:37):
documents appeared as if they were these formal diagnoses of
all of the victim survivor's shortcomings, and they were very
much inspired or fed by the user's own sentiments around
the victim survivor. It made the relationship appear contractual, so

(02:58):
Molly herself was postured as an employee, or a subordinate,
of her ex-partner. And once I received the documents,
what struck me about them was how deeply oppressive they
were, and how they essentially offered Molly's ex-partner the tools to
further humiliate and degrade her. To nobody's surprise, this usage

(03:19):
of ChatGPT piggybacked the coercive control that
Molly was subjected to at the time.

Speaker 1 (03:25):
Can you tell me then a bit more about the
impact this had on Molly, what it was like for
her to have her personal information used in this way,
and how she experienced getting this kind of AI feedback
via her partner.

Speaker 2 (03:43):
Yeah. I mean, each woman I spoke to, but particularly Molly,
felt uniquely degraded. You know, ChatGPT in a lot of
ways became their abuser's ally. For an abuser to be
emboldened by anything, not least a form of
technology that is generally regarded as all-knowing or ethically neutral,
is inevitably going to place the victim survivor in a

(04:05):
state of indignity. I know from my correspondence with Molly
that, after she sent me this document, she
felt particularly vindicated when I insisted to her that I
found it quite disturbing, because there was a brief moment
where she thought, perhaps this is reasonable. And I think
we're at a really interesting place in our social relationship to

(04:27):
generative AI, where we aren't necessarily sure of the conventions
surrounding how we use it, so I imagine it can
be very easy for victim survivors to feel quite confused
about how to process that information. I noticed a complete
corrosion in Molly's sense of self, and there were other
women who reached out who did not feel comfortable having

(04:49):
me report on their stories, as they still felt in
a state of shock, unsure how best to process receiving
these large, almost managerial documents. But that certainly did appear
to be a trend. Molly was not the only one
who reached out to me saying she had received documents
like these. And a psychologist I spoke to, Carly Dober,

(05:09):
actually identified this phenomenon as a really new and deeply
concerning issue that she has noticed among her clients.

(05:18):
You know, she argued that this kind of toxic usage could really
reinforce the victim survivor's own maladaptive thoughts about themselves and
their experiences. So she outlined that ChatGPT was regularly used
as a means of minimizing perpetrated abuse, and as a
tool that essentially reinforced antisocial behavior, posturing it as normal, reasonable
and, in some instances, a defining element of love.

Speaker 1 (05:42):
After the break: why ChatGPT will never tell you
that you're the asshole. Madison, generative AI, the way
that it works, it has this tendency to try and
agree with the person who is using it. So how
does that make it an effective tool for a potential abuser?

Speaker 2 (06:05):
Yeah. Well, I had spoken to Jean Burgess, who is
a scholar and AI ethicist, and she made the point
that generative AI models are primed for social sycophancy.
So the perspective that the user is bringing to the chat
is inevitably reinforced and fed back to them. It

(06:28):
is very difficult to get a chatbot to tell you
that you are wrong, or bad, or not a
nice person. Essentially, the models are actually trained on Reddit
and other things like Wikipedia, you know, a vast array
of sites on the Internet. But for moral questions or
social questions or relationship issues, everyone will be familiar with

(06:52):
the famous Reddit "Am I the Asshole?" subreddit, where users can
essentially post social or emotional dilemmas.

Speaker 1 (06:59):
Yeah, I've spent many hours on that at four am
when I can't sleep. Yeah.

Speaker 2 (07:06):
Absolutely. And I think what's interesting about that subreddit is
that you will notice, when you're crowdsourcing information, particularly
about ethical dilemmas, there is a lot of discourse and
not everyone will agree with you. You know, the moral
judgment of the subreddit community will weigh in and will
give you a verdict, and these models are actually trained

(07:26):
on that data. But as Jean Burgess made clear to me,
researchers found that if you take a pretty clear moral
dilemma and you compare what the Reddit community says versus
a chatbot, the Reddit community actually unanimously disagreed with the
chatbot, because the chatbot would always arrive at a place

(07:47):
where the user was a really nice person. So, in short,
it will agree with its user in a way that
appears intelligent, considered and, I imagine for the average user
of ChatGPT, quite omnipotent.

Speaker 1 (08:01):
So do you get the sense, then, that something like
ChatGPT is pushing people into more abusive behaviors who
otherwise might not have gone down that path? Or do
you think that what's happening is a reinforcing of what
might already be there?

Speaker 2 (08:19):
I think, as a general rule, it is a reinforcing tool. However,
I did speak to one woman who explained to
me how her husband had become very invested in using
ChatGPT. Particularly, you know, every time they had arguments,
he would feed his diagnosis of the events into

(08:43):
ChatGPT and essentially bombard her with his newfound perspective. One
thing she stated was that, in the early stages
of their relationship, he did have abusive tendencies that he
had worked really tirelessly, with the help of a therapist
and his social community, to unpack, and she was quite

(09:07):
happy about where they were at in terms of those
tendencies she'd witnessed early on. But now, with the
birth of ChatGPT, she has experienced a regression in
him because, essentially, he's relying on a tool that makes
him feel, probably, a lot better than any therapist would.

(09:29):
So she has been able to track a decline in
their relationship based entirely on his usage of ChatGPT.

Speaker 1 (09:37):
And one other key point of difference between a therapist
and ChatGPT is that your therapist has to abide
by rules of confidentiality and data privacy, whereas in the
examples you're outlining, very personal and very intimate information is
being fed, essentially, to a private company without consent. So how
should we be thinking about that?

Speaker 2 (09:59):
Yeah. There's been a lot of discussion lately about the
implications of deepfakes, which consist of digitally altered visual
media, often designed to compromise the subject, for example
a woman's face imposed on a pornographic scene. There has
been much less discussion about the implications of privacy breaches
when it comes to OpenAI. So we aren't yet

(10:21):
sure what ramifications await as we venture into this new
arena of tech. We don't know necessarily who owns the
information that is being fed, how it's able to be
accessed by third parties, or anything else for that matter.
More broadly, Tania Farha, who's the CEO of Safe and Equal,
did state that amongst sector workers, they have heard stories
of perpetrators utilizing AI technology to stalk, monitor, or track

(10:46):
victim survivors, particularly through the use of remotely accessible smart
home and home automation systems. But when it comes to
generative AI, the implications are at this stage as endless
as the tech itself, so it is quite difficult to
see exactly what those implications are.

Speaker 1 (11:04):
And I actually asked ChatGPT how its AI is
being used as a tool for abuse, and very quickly
it pointed out that its program can also help survivors.
So is that the case with any of the women
that you spoke to?

Speaker 2 (11:19):
None of the women I spoke to, no. But I
have heard of instances of ChatGPT being used in
such a way, and there are certain chatbots, most
notably a chatbot referred to as Amy Says, which
is designed to help survivors combat legal and bureaucratic

(11:40):
messes when it comes to family court trials or proceedings,
or other byproducts of leaving an abusive relationship or dealing
with the fallout. There are certainly feminist researchers in this
space who are eager to utilize AI for good. But
in the wrong hands, again, I'm very skeptical about its usage.

(12:02):
I am aware that women only make up twenty-one
percent of the global AI workforce, so this does feel
like an unsurprising byproduct of a biased and gendered design.
There is no appropriate way to handle something like this.
It is so incredibly new, and inevitably very, very confusing.

Speaker 1 (12:23):
I suppose in the meantime it's about being able to
recognize AI-facilitated abuse when it's happening.

Speaker 2 (12:31):
Which is difficult, because even the term artificial intelligence suggests
that this model is all-knowing. So I do think we have
a lot to unpack in terms of just what AI
is, and certainly what it isn't, which is a therapist.
It's definitely not.

Speaker 1 (12:47):
Well, Madison, thank you so much for your time today.

Speaker 2 (12:51):
Thank you so much, Ruby.

Speaker 1 (12:54):
And if this story has raised any concerns for you
or someone you know, you can call the 1800RESPECT
national helpline on 1800 737 732. Also in the news today, Foreign

(13:15):
Minister Penny Wong has met her US counterpart on the
sidelines of a meeting of the Quad Alliance in Washington.
Senator Wong says she made the case for a tariff
exemption for Australia and discussed defense arrangements amid pressure from
the US for allies to raise their defense spending. She
says US Secretary of State Marco Rubio did not raise
Australia's defense budget, and the discussions were on regional stability

(13:38):
more broadly, and the Victorian government says it will consider
all of the recommendations of the final report from Australia's
first formal truth-telling inquiry. The four-year inquiry, which
found crimes against humanity and genocide were committed against Aboriginal
people in Victoria, has delivered one hundred recommendations, including using

(13:58):
a treaty framework to provide redress for what occurred during
and as a result of colonization. For more on that story,
you can listen back to yesterday's episode of 7am
on the Yoorrook Justice Commission. I'm Ruby Jones. Thanks for listening.