Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Hey, this is Annie and Samantha.
Speaker 2 (00:06):
And welcome to Stuff Mom Never Told You, a production of
iHeartRadio, and today we are once again so happy
to be joined by the incredible, the irreplaceable Bridget Todd.
Welcome Bridget, thanks for having me so lovely to be back. Yes,
(00:29):
thank you so much for being back. You and I
were chatting about how stressful these times can be, so
the holidays, those times being so. Thank you for making
the time. We always love talking to you. How are you, Bridget?
How have you been?
Speaker 3 (00:46):
I'm not bad. We were talking off mic about how
it is a known issue with me.
Speaker 4 (00:51):
I'm a grinch. I don't like the holidays. When people ask,
you know, this time of year, people are like, what.
Speaker 3 (00:57):
Are you doing for the holidays? I'm like, my fur.
I can feel my fur go up. You know, I
just don't like it.
Speaker 4 (01:03):
My heart beats a little bit faster, and I just
I just find the holiday season extremely stressful. And especially
when you're someone who is self employed, it's like, hey,
you know the work that you usually do, What if
you did that work plus a week's worth of work
on top of it, and then that'll buy you a
week off what will you be doing with that time?
(01:24):
Going to see your family? Notably not a stressful activity. It's
like I just find it like a stress sandwich. Essentially,
I can't handle it. I cannot hang with the holidays.
Speaker 2 (01:35):
Well, as I said, Samantha, you and I would agree
on this. I mean I also think they're very stressful.
Yesterday we went to the office
for a Sminty team meeting. We go, like, Samantha and
I go once a year, and our producer Christina was.
Speaker 3 (01:55):
Like, how are you? And I was like, I'm so
tired just a deep sigh. Yeah.
Speaker 4 (02:04):
And then it's the end of the year.
I just always feel like it's everything catching up
with you from all of the year. It's been a heavy year.
It's something about like by the time it hits December,
I'm just so freaking done, just done, spent.
Speaker 2 (02:23):
Yeah, and then you have the new year barreling in
on you and you're like, well, it's still gonna be here,
but it has all these expectations of maybe something will change,
and not really. Yeah, the.
Speaker 4 (02:38):
New year I'm actually okay with because you get to
have those couple of days where you buy a new
journal and some new pens and if you for a
brief moment you're like, this is gonna be my year,
this is the year, I'll get it
all together. It doesn't usually last for me. I usually
get about a week out of that. But you get
that week, which is not bad. And you, Samantha,
(02:59):
did buy some,
Speaker 1 (02:59):
For last time.
Speaker 5 (03:01):
Oh yeah, I got a new set. I was gifted
a new set, Bridget, just to give you an
Speaker 3 (03:05):
update. And how are they writing?
Speaker 5 (03:08):
I had to get used to it because it was
a lot free flowing, a lot more free flowing than
my previous one. So I had to practice with them
so I didn't mess up my tiny coloring books.
Speaker 4 (03:19):
A good set of pens, a good set of markers.
It really will have you believing this is going to
turn everything around.
Speaker 5 (03:24):
For a split second I was like, I can
color within the lines, and then I was like, no,
I cannot.
Speaker 2 (03:33):
Well, the topic you've brought today for us, Bridget, is
so, so timely. I've been thinking about this because, I mean,
it's hard to escape. But also you get to the
end of the year and you have, like, Merriam-Webster
chooses their word of the year and it's slop, as
in AI slop, and then you have Time chooses their
Person of the Year
(03:56):
and it's like all dudes. And yesterday when we were
at the office, we were having kind of a team
meeting and it came up we were talking about AI
and our jobs and what does that look like?
Speaker 1 (04:10):
So what are we talking about specifically today, Bridget?
Speaker 4 (04:13):
Today we're talking about the reality that women are using
AI tools a lot less than men in the workplace.
There's a definite, well studied, well documented gender gap when
it comes to the usage of AI. And I feel
you because I think as a podcaster especially, but any
kind of creative the first conversation that comes up when
(04:36):
it's a bunch of podcasters together is, are you using
AI in your work? How are you seeing AI show up?
Or people asking are you afraid of AI? Do you
see AI taking your job?
Speaker 3 (04:46):
One day? Like you won't have a job.
Speaker 4 (04:47):
People will be listening to AI podcasts all of that, right,
Those conversations are everywhere. But I think oftentimes
those conversations can be grounded in a kind of hype.
And so I think the fact that AI is
not being adopted equally by people of all genders at
the same rates
Speaker 3 (05:05):
Is pretty interesting.
Speaker 4 (05:07):
So I wanted to talk through what we know about
the data when it comes to the gender gap in AI.
But I think I might be coming at this from
a little bit of a different place, right. You know,
in a lot of the research, they automatically assume two
points that I want to call out early that I'll
return to. One, women adopting AI less is going to
(05:28):
be bad for women. I'll talk about this later, but
I don't know about treating that like a foregone conclusion.
And then this is kind of an offshoot of something
that I hear a ton when talking about AI. AI
is inevitable. If you don't get with the program, you're
going to be left behind. I have people who tell
me that not being into AI right now is like
(05:49):
not using email. You know, it will be the same
kind of thing in a few years, and you know,
if women are adopting it less, the idea that we're
going to be less employable, less competitive, making less money overall.
And so I want to be clear that I don't
really have the answer here, and I want to
look at what the research says, but I don't like
this idea that frames widespread AI adoption as an inevitable
(06:14):
thing that everyone is going to be doing, because it
really doesn't leave a lot of room for criticism
around the way that AI is being adopted. It's just
repeating it's inevitable, it's inevitable, it's inevitable, get on board.
I do think that when it comes to adopting any
new technology in this way, we shouldn't just be
told to get on board, it's inevitable, just ram it
down people's throats, especially when people are saying, hey, maybe
(06:37):
we don't actually like this technology.
Speaker 3 (06:38):
Hey, maybe this technology isn't good.
Speaker 2 (06:41):
Right.
Speaker 4 (06:41):
I love technology. I'm an advocate for knowing the basics
of all kinds of technology. But if a technology is
showing itself to be ineffective, biased, problematic, full of lies,
actually adds work to your plate, I don't know that
we should just be telling people to get on
board with it, it's inevitable, if that's actually what they're experiencing,
you know.
Speaker 2 (06:58):
Yes, yes. And this has come up a lot at
our office, that at least right now, it seems
to be adding more work. It's not like actually helping,
but that you're told like, no, it's going to really
help you out and streamline things. I have to say,
I had not considered this gender gap in AI usage.
(07:21):
So I'm really interested in the research
Speaker 1 (07:24):
That you found.
Speaker 3 (07:25):
Yeah, let's get into it.
Speaker 4 (07:28):
So I took a look at some of the studies
and they all basically confirm that women are using AI
in the workplace a lot less than their male counterparts.
Let's look at a meta analysis called Global Evidence on
Gender Gaps in Generative AI published in the Harvard Business Review.
So they took a look at eighteen surveys and studies
covering one hundred and forty three thousand plus individuals across
many countries, sectors, and occupations. They also combined this survey
(07:51):
data with Internet traffic and mobile app download data for
major generative AI platforms like Claude and ChatGPT, platforms that
have hundreds of millions of users. They found a large,
nearly universal gender gap. Here's what they found. They found
that the gender gap in generative AI usage is global, persistent,
and not fully explained by access or occupation, pointing to
(08:14):
some deeper social, cultural, and institutional frictions.
Speaker 3 (08:18):
So overall, it just seems like women.
Speaker 4 (08:20):
Are not the ones, when people are saying it's inevitable,
get on board, you got to use it.
Speaker 3 (08:25):
It sounds like, by and large that is not coming
from women.
Speaker 4 (08:30):
I'm curious, do these findings surprise you, like as women
in media, as women in tech and podcasting, Like, is
this surprising information?
Speaker 1 (08:41):
For me, it's not.
Speaker 2 (08:42):
I don't think I'd ever clocked it until I was
looking at your outline. But once I looked at the outline,
I was like, yeah, yeah, because most people I know
who are into AI are men, and they usually have
a very, like, but it's this global technology thing about it,
and I can't say I really have anyone I can
(09:04):
think of who's like that as a woman. And
I mean, we're going to go over some of the
reasons why this is, and I thought a lot of
them were interesting, but they did resonate with me, and
I thought, like, yeah, that makes sense, like I can
get why there's a hesitance, but I guess I
(09:26):
hadn't, until I read this, I hadn't picked up on it,
but I do.
Speaker 1 (09:30):
It resonates with me now, and I'm like, yeah, that
seems right, that seems right.
Speaker 5 (09:34):
I think being on like the ex lineal community. That's
kind of one of those things that I don't think about.
But my partner, who is heavily, like, even employed in
the AI industry, the difference between him and me
is so vast that it's not surprising the way that
he has integrated that into his systems, into his life,
(09:55):
but also understands how he has to be cautious and
how it is one of those things that has to
be operated with responsibility. I think that there's a level
of understanding, but he is also very very excited by
the possibilities of what AI could do. So there's this
level, like, in my life specifically now, our
(10:16):
responses to AI are me being a little more, like,
dismissive and very much more negative about the whole
thing, whereas he is trying to play his
cards. Does that make sense?
Speaker 4 (10:28):
That makes so much sense, And it's funny because you
are basically living a microcosm of what the data suggests.
Speaker 3 (10:34):
And that's kind of what it feels like when I'm
like thinking about it.
Speaker 5 (10:37):
I'm like, honestly, I feel like I'm living as the
stereotype, at least in this conversation.
Speaker 4 (10:43):
Absolutely so in this meta analysis, they found that across
almost all the data sets they looked at, women are
about twenty to twenty five percent less likely than men
to be using generative AI. The meta analysis shows women
have twenty two percent lower odds of using AI than men.
The gap appears across regions, industries, education levels, and occupations.
(11:04):
So you might hear that and think, okay, but maybe
the people who did these surveys got it wrong,
looked at the wrong information. That gap, however, cannot be
explained by things like survey bias, because they also, as
they said, looked at who is downloading tools like Claude
and ChatGPT, and that data showed that women comprise
(11:24):
about forty two percent of ChatGPT website users globally, twenty
seven percent of ChatGPT mobile app users, and even
lower shares of women are using the tool Claude from Anthropic.
So this really suggests that the gender gap is at
a pretty massive global scale.
Speaker 3 (11:42):
A couple of notable findings. If women are more senior
or have more experience.
Speaker 4 (11:47):
It does narrow that gender gap a little bit, but
it does not completely eliminate that gap. Senior women in
technical roles sometimes match or exceed men's usage, but also
junior women and women in non technical roles show much
larger gaps. So there are a couple of interesting, notable
outliers there, But in general, just like what's happening in
(12:10):
Samantha's household, men are even cautiously sort of gung ho,
experimenting with it, thinking pretty optimistically about what AI might be.
Speaker 3 (12:19):
Able to do. Women are a little more skeptical.
Speaker 1 (12:23):
We'll say. I wonder.
Speaker 2 (12:38):
Just because we've been talking a lot on the show
about, like, the quote male loneliness epidemic, which, women are
also very lonely, but that's what it's been called. But
I see a lot of stories of men like forming
these bonds with AI and these relationships with AI, and
I have no idea, but I wonder if that's also
(13:00):
part of it, because women usually have more of a
support group than this.
Speaker 3 (13:03):
Yeah, yeah, I mean, I absolutely think that.
Speaker 4 (13:09):
Some of the way that AI is talked about by
people who make it, some of the marketing language and
decisions around it, I think reinforce some attitudes that I
think are kind of sexist.
Speaker 3 (13:23):
Right.
Speaker 4 (13:23):
I remember very clearly, it was like a grand opening
when Sam Altman, the CEO of OpenAI, was introducing
this new voice for OpenAI's chatbot, named Sky,
and he basically said that he wanted the experience to
be like the experience that people had in the movie Her,
(13:44):
which is, folks, the movie where Joaquin Phoenix is basically
having a romantic and sexual relationship with an AI chatbot
voiced by Scarlett Johansson.
Speaker 3 (13:53):
And I love that movie. It's one of my favorite movies.
But obviously, within the framing of.
Speaker 4 (13:57):
The movie, it is about a man using technology
to satisfy his emotional and sexual needs, on top of,
you know, the admin needs that you might be
using AI for already. And so when the head of
OpenAI gets on a stage and promises people that
they will be able to use technology the same way
it is used in this movie, well, I can understand
(14:17):
why that might hit women in a kind of a
weird way, right, I can understand why it's like, well,
what are you really saying about this technology that that
is your frame of reference for how you envision people
using it?
Speaker 3 (14:29):
Do you know what I mean?
Speaker 2 (14:31):
Yes, yes, absolutely, And going back to one of the
other points he made at the beginning that comes up
in these arguments of why women aren't using AI as much.
It did kind of remind me of the argument around STEM,
like a lot of STEM things, where it's like, well,
they just, that's not how their brain works, or that
(14:53):
they don't pursue those fields because of whatever X, Y, Z.
So that feels very similar to me.
Speaker 4 (15:01):
Oh yes, I'm so glad that you brought that up,
because when you look at the other conversations around
gender gaps that we know exist in workplaces more broadly,
in leadership roles more broadly, STEM roles more broadly. Often,
just as you said, the pushback is that women maybe
are picking less lucrative.
Speaker 3 (15:17):
Work or fields.
Speaker 4 (15:18):
If you're Larry Summers, maybe it's the idea that women
are just innately less good at these fields or something. Now,
never mind that that really discounts the question of why
fields that are associated with women are historically devalued and underpaid.
So even if you know there was a time where
computing was seen as women's administrative or secretarial work. When
(15:40):
women were associated with it, it was lower paid admin
work. When men became associated with it, very clearly,
it became a different kind of work. So that's clearly
not the women's fault for you know, for what roles
they happen to be choosing. But in any event, not
only is that not what's really going on and not
telling the full story of what's happening in these workplaces,
nor is it really what's going on or telling the
(16:02):
full story of AI adoption. That's not me saying this,
that's according to this meta analysis. As social scientist Katie
Jin puts it in a piece called There's a reason
why women aren't swooning over AI like men are, which
I love that title, she writes, The oft-proposed explanation
is that women understand this new technology less, largely because
(16:23):
they work in roles with lower exposure to it. Women are,
after all, still outnumbered in STEM degrees and careers, including
AI specific roles. The same is true in AI leadership,
women hold fewer than fourteen percent of senior executive positions
in the industry. But Harvard's study also found that the
usage gap remains even when women are explicitly given opportunities
(16:44):
to learn and use AI in their roles. So it's not just
that women have less training or less access to AI.
Even when the people who did this meta analysis equalized
for that, they still found that this AI gender gap persists.
They looked at a study out of Kenya where women
were explicitly offered training and access to AI, yet those
(17:05):
women still used ChatGPT thirteen percent less than men.
Speaker 3 (17:09):
So this seems to suggest.
Speaker 4 (17:11):
These barriers aren't really about access to or awareness of AI.
There's something else going on.
Speaker 5 (17:21):
I feel like there's so many things I'm trying
to work out when it comes to this conversation about
the whys, that seem like, well, obviously A equals B,
or A is because of B. And one of the things
that I'm thinking about is, like, maybe just what I'm
seeing when it comes to things like ChatGPT, which
(17:41):
feels like just an advanced version of Google, where we
made that the thing in researching. But it gives
the mediocre man, who has a lot more confidence than
women in general, even more confidence, I guess, in a
way, because they think they're getting that answer without having
to go to the people who actually found those answers,
in some weird ways. Like, there's so many thoughts, like,
maybe, what are some of these reasons that we are
(18:05):
finding that men are more likely to use it? And
for me, like, that just seems like a rationale.
I think that's true.
Speaker 4 (18:12):
I think twice now, I've gotten into a back and
forth with somebody on Reddit, and I guess I don't
know their gender, but I can tell that they were
responding to me using ChatGPT, and I'm like, oh,
you didn't.
Speaker 3 (18:22):
You couldn't. It wasn't worth it.
Speaker 4 (18:24):
You just wanted to keep the, like, argument going, but
it wasn't worth it for you to come up with
your own answer, so you're going to take the ChatGPT
route, really? Right. So the meta analysis gave some answers
to what might be going on here, what might explain
some of these gaps. I'll tell y'all what they said,
and then I'll kind of give you my thoughts on it,
because some of these I'm not so sure about.
So the first is lower familiarity with and knowledge of AI.
(18:46):
So they said women are more likely to report not
knowing how generative AI works or.
Speaker 3 (18:51):
How to use it.
Speaker 4 (18:53):
I'm not totally sold on this. You know it is
in their meta analysis, so I wanted to include it.
But this is just my take: AI is not difficult
to understand. It is not difficult to use, nor is
it terribly complex. There is nobody listening to the sound
of my voice who could not figure it out, for
whom it is outside of the realm of their ability to
(19:14):
comprehend. It is not terribly complex. I think that what
this study is actually highlighting here is I think this
tendency for the people who make technology and talk about
technology to do so in a way where it seems
like the technology they're talking about is so complex and
your puny little lady brains could never figure out what
(19:36):
we're talking about here. Think about the way that people
like Elon Musk, Mark Zuckerberg, Sam Altman talk about technology.
They talk about it in this way where it gives
it this gravitas that I think is oftentimes unearned,
because you don't need any kind of special training to
understand the basics of AI. And so I think the
fact that tech leaders, who, let's be real, often happen
(19:59):
to be not just men, but a specific kind of man,
not just white men, but a specific kind of white guy.
I think the fact that they are consistently talking about
this technology in a way that makes it seem like
your ordinary person could never understand it. I think it's
sort of at play here, and I think that is
with intention. I think they do this because they want
people to feel I couldn't possibly understand what they're talking about.
(20:20):
How could I hold these people accountable? How could I
expect things to be better for me? How could I
even, you know. Oftentimes they are telling people, oh, no, no,
you want AI.
Speaker 3 (20:31):
AI is good. People will say, no, I know what
I like, and I know.
Speaker 4 (20:35):
What my experiences are, and I know that I'm not enjoying,
you know, opening my email and not even being able to
use it because AI tools are being foisted on me
nonconsensually, and they're like, no, no, no, it's actually
great for you. Right? If people truly, innately felt
like they understood this technology, tech leaders wouldn't
be able to do that anymore.
Speaker 5 (20:55):
Right. It really does feel like they're just inflating the
importance of it so that we'll believe it, and then
that when we are forced to use it because we
have no way of pushing back and there's actually no
regulation anymore on how it's being used, that will just
accept how great it is without actually knowing how great
it is. In the midst of the conversation of knowing
how inaccurate it has been in the past. Also, how
(21:17):
hard is it when you literally do not have a
choice if you use what used to be Google and
what you thought would be something that would give you
trusted sources, and the top of it is literally AI
results that you did not ask for. So how hard
is it to use when you're not given a choice
to whether or not you have to use it?
Speaker 3 (21:35):
Right? And this is what I'm saying.
Speaker 4 (21:37):
It goes back to what I said when I started
the conversation, of how we talk about it as
this inevitable technology and we all have to get on board.
Things that are good, you don't have to talk about
them that way. Things that people want, you don't,
that's not how you have to frame them.
So Sam Wiles said on Threads, if AI is a
good thing, why are we constantly told it's inevitable? You
don't have to do that with good things. No one's
(21:58):
ever like, iced coffee is inevitable, so you may as
well get used to it. Whew, this is so correct, right?
This is, things that are good, you do not have
to continue over and over and over again to tell
people that they better get on board with this good thing, like, no,
that's not how we talk about things that people like
and are good.
Speaker 1 (22:15):
Right.
Speaker 5 (22:16):
Also, it's not forced upon us by the government, like
we are not, with iced coffee, being told you have
to buy iced coffee when it gets to this temperature,
and no state can regulate whether or not you can
use it, even though it can be harmful for you.
And we have seen harm done by this.
Speaker 3 (22:31):
Sorry exactly that.
Speaker 4 (22:45):
Another thing this meta analysis found was lower confidence with
it and lower levels of persistence. According to
the meta analysis, women report less
confidence in prompting and are less likely to persist after
poor AI outputs. Men, however, are more likely to experiment repeatedly.
So I think this is one of those things where
(23:08):
essentially what is being said there is women have tried
this technology and have identified that it can be janky
and have reasonably concluded that maybe it's not worth
the pain in the butt to keep trying with it,
men are like, I'm going to keep on trying.
Speaker 3 (23:21):
I think that it's like framing women taking like gleaning.
Speaker 4 (23:27):
From their experiences of AI being full of lies, hallucinations,
actually being more work for them, which has been my
experience when I've tried to use AI. A
lot of times, I'm like, okay, well, in the
time it took me to finally get AI to do it, I
could have done this myself and done a better job
at it.
Speaker 3 (23:43):
So why should I sign on to that to begin with?
Wasted time.
Speaker 4 (23:46):
But I don't like how this is framed as women
being risk averse, when in reality, I think it's more
like women making reasonable deductions about the time they want
to spend on something that has proved itself to be ineffective,
and like, that's not risk averse.
Speaker 3 (24:01):
That's something else.
Speaker 5 (24:02):
That's being tired of arguing with an inanimate object, which
is what I've had to do when I'm yelling back
at Google or Siri on any of those
days, because they did not understand what I was saying,
and I just stare at it, like, why am
I doing this with you? And then being told
also that they remember when you're berating them.
Speaker 4 (24:20):
Listen, I told this story on my own podcast, There
Are No Girls on the Internet. After I did the
episode with you all about Larry Summers, who was formerly
on the board of OpenAI, stepping down because of
those emails that were revealed between him and convicted child
sex criminal Jeffrey Epstein.
Speaker 3 (24:37):
I was trying to get.
Speaker 4 (24:38):
I was like, oh, maybe ChatGPT can help me come
up with, like, metadata and all of that. So I
put what I had into ChatGPT, and ChatGPT says,
this is not correct. There is no link between Larry
Summers and Jeffrey Epstein. And I was like, oh really? Wow,
we don't have emails linking the two? And then
ChatGPT is like, okay, well, yeah, maybe there's emails,
(25:01):
but you're suggesting that something illegal happened if you don't
know anything about that.
Speaker 3 (25:05):
And I was like, I said the emails were creepy.
Speaker 4 (25:08):
How would you define emails where a grown married man
is going to a convicted pedophile for advice on how
he can have sex with his mentee.
Speaker 3 (25:16):
You wouldn't define those as creepy. We got into a
huge argument to the point where I asked it. I
was like, cut the crap. Are you being so cagey
because Larry Summers used to be on the board of
open AI.
Speaker 4 (25:31):
That makes you cagey, ChatGPT? ChatGPT says, let's be clear, Larry
Summers was never on the board of OpenAI. Larry
Summers had been on the board of OpenAI since,
like, twenty twenty three, so it's
Speaker 3 (25:42):
Been kind of a while.
Speaker 4 (25:43):
So it's like, not only was that infuriating, think of
the work it took for me to, like, stop
my work day and have an argument, a real
argument, with ChatGPT. This is the technology they're saying is
the linchpin of our entire economy right now.
Speaker 3 (25:59):
I don't think so.
Speaker 5 (26:00):
Oh, you literally spent emotions on a computer that is
just trying to protect its reputation, which.
Speaker 3 (26:05):
Is weird because they're not a human.
Speaker 4 (26:08):
This is what I'm saying. Like, I'll tell you something,
I have not asked ChatGPT a single thing since then.
Speaker 3 (26:14):
That was a little experiment to see if it was
gonna work.
Speaker 4 (26:16):
It went so astronomically left and I'm like, okay, well
that's a waste of my time and I got my
blood pressure up, so it's a double bad.
Speaker 5 (26:23):
I don't I don't need this from you too.
Speaker 4 (26:26):
If I'm gonna argue with someone, it's gonna be a
flesh and blood human in my life, not freaking ChatGPT.
Speaker 2 (26:31):
Well, I think that's also a good point of, like,
we've talked about this before several times, who makes
this technology, and the fear of, like, the biases and
all that stuff in there. And I know, like, Elon
Musk and Grokipedia have had a lot of things come
up with what it is saying that are just flat
out ridiculous, and like these flat out lies. And so
I do think that that is a concern, and I
(26:57):
would posit that that influences, like, I
would guess that women are not getting the answers they
want more often than men are getting the answers that
they do want.
Speaker 3 (27:10):
Yes, And something to remember about AI.
Speaker 4 (27:13):
If there's one thing, even if you're thinking, I'm not
a techie, I'm not a computer person.
Speaker 3 (27:18):
This is not my forte. If there's one thing to
remember is that AI.
Speaker 4 (27:22):
It's easy to think of it as robot computer brains
that know everything, that are all knowing.
Speaker 3 (27:28):
It's not.
Speaker 4 (27:28):
It's trained and built by all of us humans and
so the same kind of biases that we know humans have,
AI is just trained on all of that to replicate it, right,
And so keep that in mind when you're thinking about AI.
And we have so much documentation about the ways that
AI is biased when it comes to reflecting women in workplaces.
It's more likely to reflect women as younger than they
(27:50):
actually are. And so, let's say,
for instance, if the majority of women who are engineers
are over forty, it's like a documented fact, AI will
tell people, oh, they're actually younger than they are,
because it's reflecting a bias against more mature women.
You know, there have been studies that when women ask
(28:12):
ChatGPT for negotiation advice, it regularly tells them to ask
for less money. Right, so all of the biases that
exist in society, AI is simply replicating those because it
was trained on all of us. And so really keep
that in mind, Like I can understand why women are
not keen about going to this technology to help them
understand the world around them and their workplace when we
(28:35):
know it has these biases.
Speaker 5 (28:38):
Right. I mean, it comes to the other point, that
we already understand and know that data and research
are based on men, and there's not enough based on
women or those marginalized communities at all. Like we just
got to the point of realizing, hey, we need more,
so more people have been doing it, but all of
that information is not there to provide a background for
(29:00):
ChatGPT to pick up, or whatever AI company. And that's
really concerning that we're acting like the information we know
now is all we need to know when we know
that's not true exactly.
Speaker 3 (29:09):
That is such a good way to put it.
Speaker 4 (29:12):
And another concern that women report about using AI, why
they're kind of skeptical about it, is that women have
stronger, what the meta analysis called, kind of ethical concerns.
So women are more likely to view using AI, especially
in professional settings or education settings, as unethical or cheating.
Speaker 3 (29:29):
So I don't disagree with this. This totally makes sense
to me anecdotally.
Speaker 4 (29:34):
However, I think it's more complex than that, because I
don't think it's just that women see AI is cheating.
It is that, But I also think that in workplaces
and in educational settings, women are more likely to be scrutinized.
Speaker 3 (29:47):
That our male counterparts, right, and that's just the way.
Speaker 4 (29:51):
Then it is. Like, women aren't wrong for being like, oh,
I feel like everybody's checking my work extra hard and
harder than they're checking Joe's work or whatever. The BBC
spoke to technologist Lee Chambers, who said women are more
likely to be accused of not being competent, so they
have to emphasize their credentials more or demonstrate their subject
matter expertise in a particular field. There could be this
feeling that if people know that you, as a woman,
(30:12):
use AI, it is suggesting that you might not be
as qualified as you actually are. Women are already discredited
and have their ideas taken by men and passed off
as their own, So having people knowing that you use
AI might also play into this narrative that you're not
qualified enough. It's just another thing debasing your skills, your
competence and your value. Now that makes total sense to me.
Speaker 3 (30:32):
And I am in.
Speaker 4 (30:35):
Professional spaces of all kinds, and in some of these
professional spaces, I've been around very accomplished men in tech
who will not blink an eye at submitting like an
op ed or something, or a piece of writing that
was clearly written by AI. And it's just like,
I would never. Like, I would be so
(30:57):
embarrassed to have anybody be like, oh, did you submit this?
And it was AI generated? I can tell, by whatever, whatever.
I would crawl into a hole and die. I would
be so embarrassed. And the way that I can confirm
that there are a lot of men in high up
positions who do not feel that level of scrutiny. And
it goes back to, like, marginalized people in these settings.
(31:19):
We're so used to carrying the extra eyes, you know, the
extra scrutiny of why we're here at all,
our work, our value, our competence. Why would you want
to inject another reason for somebody to continue
Speaker 3 (31:33):
To do that when you're already facing that in multiple levels.
Speaker 2 (31:37):
I have to bring this back to fan fiction, you
know I do. So there's there's a big, like concern
in the fan fiction space of like using AI, and
it's become such a thing, and as discussed, a lot
of fan fiction is mostly written by women or non
(31:58):
binary people. People will go out of their way to say,
like in the tags, I use em dashes, but it's
not AI. They go way out of their way to
be like, I swear, it's not AI. And it's just
it's so clear to me that people don't want you
to think like, oh, this is not worth it, like, I
didn't even put in the time to do this. Whereas, yeah,
(32:19):
I don't think men would necessarily see it
in that way. They would see it as like, oh, yeah,
it just saved me some time. But meanwhile, now
I know that AI uses a lot of em dashes
because of fan fiction.
Speaker 4 (32:33):
Yeah, that doesn't surprise me at all, because I think
the fan fiction community is definitely a community that values
human creativity. Like, you wouldn't be in the fan fiction
community if you didn't value human creativity. And
it doesn't surprise me that that's a community that's
also full of women.
Speaker 2 (32:51):
Yes, yes, And then people always put this thing too
at the top where they're like, please don't feed my
work to AI, and I'm like, that's just not going
to help you, but I appreciate the effort.
Speaker 5 (33:01):
Well, the other part to that is, when you
start thinking about using ChatGPT or any of these situations,
you don't know who you're taking from. And there is
this level of guilt for me, on the end of,
like, not being the creative one. But if I'm researching things
and I'm just basing it on one small bit of
information that's, you know, from ChatGPT or Gemini or whatever,
(33:22):
which is constant, and I can't stand it. It feels
unethical in that manner of like we already know artists
are really, really on edge because their work is being
constantly stolen. Creators who are writers, who are part of this,
researchers who are doing lots of work to get their
educations and to be in this field like using that
information and like oftentimes it's not cited. It just becomes
(33:45):
one big blob of information. It feels so gross to
see that play out.
Speaker 3 (33:50):
Yeah, that's such a good point.
Speaker 4 (33:52):
There was this like micro controversy online a couple of
weeks ago about this literary festival, the Black Romance Literary Festival,
where the question was, do the people who run this festival,
the organizers, will they accept people who use AI in
Speaker 3 (34:06):
their work at this organization? Will those people be allowed
to be included in panels?
Speaker 4 (34:10):
And so it sounded like they were really talking about
illustrations, like cover art. And so the organization gave what,
to a lot of people in the community, felt like
not a great answer. Just sort of, what they wrote,
the organizers said: we don't use AI in
our work, but we're not turning away people who do
use AI. We want to provide resources and education to
(34:31):
those folks, but we don't want to exclude them. One
of my listeners made a very good point that it
seemed like that was an attempt at a kind of
harm reduction position. But maybe they didn't say it
like that, and perhaps they should have clarified that.
But you know, I think at a time when artists,
particularly marginalized artists, are really having a hard time, like anybody
(34:54):
who tries to make money from making a thing
is having a hard time of it right now. And
I think, as you said, Sam, we know
when you ask AI to generate an image, it's not generating
a new piece, it's just taking
Speaker 3 (35:10):
From what's out there, right.
Speaker 4 (35:11):
I remember there was a time where people were using
an AI tool to make those futuristic selfies. Half of
them had watermarks or logos in them somewhere. That's how
much they were just taking from other artists' copyrighted work.
Speaker 3 (35:25):
So I don't think that.
Speaker 4 (35:26):
The creatives who are saying, hey, we need this to
be clarified, hey, we need to draw a line in
the sand on what we're going to do when it
comes to taking other artists' work in this way.
I don't think they're wrong for being anxious about that
and expecting and deserving some answers around it.
Speaker 3 (35:43):
Frankly so, when it comes to kind of the.
Speaker 4 (35:56):
Reasons why women might not be adopting AI like their
male counterparts, I did find this meta-analysis pretty useful
in, like, giving us the scope of the issue,
giving us the scope of the gender gap, and some
reasons as to why that might be. However, I just
don't think that some of the answers really spoke to
the nuance that I see in that experience as a
woman that's a little bit skeptical of AI. So I
(36:18):
found this piece in the Stanford Social Innovation Review by
Mara Bolis, who is the founder of First Prompt and
a fellow at the Harvard Kennedy School, called the AI
gender Gap Paradox, and I think that this piece does
a really good job of getting at some of the
nuances, that it's not just women aren't smart enough to get
AI or whatever, right. And so one bit that she
(36:38):
pointed out is that women are already doing a lot
of work. She writes, more than half of professionals say
that learning AI feels like a second job, which for
most women is actually a third job when you consider
the continued disparities in time spent on childcare and housework.
And so that I thought was such a good point,
(36:58):
right that doing your current job or your current educational
workload or whatever is already a lot of work. If
you're a woman, you're probably taking on more of the
labor at home, or of the emotional labor, or of
the care work, all of that. And on top of it,
now people are telling you that if you don't take
the time to learn how to integrate AI into that work,
you don't really care about your career and you're holding
(37:20):
yourself back. I just, yeah, it's like that, Like that
has to be part of the conversation if we're going
to actually address the gender gap in AI usage. Yes. Yeah,
And this piece I think also just does a great
job of reframing fear or knowledge gaps into what I
suspect they actually are, which is the ability to competently
(37:41):
analyze risk. Women aren't looking at AI and saying I'm
afraid of it or like I can never learn it.
I think what they're actually saying is I have analyzed
the risk that using it might bring into my work,
and I've made a decision based on that that I'm
not gonna be using it like that. That piece in
the Stanford Social Innovation Review looked at studies from Deloitte and Pew.
(38:04):
Both of them showed that women predict AI will bring
less benefit and do more harm across personal, professional and
public life. Men, however, tend to be more optimistic, confident,
and self assured in their competence.
Speaker 3 (38:18):
Doesn't that really say it all?
Speaker 5 (38:20):
Well, there's so much. Like, when I'm thinking about the
amount that we research and the amount that we try
to make sure that we are noting who is, you know,
referencing and giving credit to whomever, so we're not
doing any violations, copyright law, any of those things.
And again, we know that ChatGPT, or any of
the AI, don't really care.
Speaker 3 (38:42):
They don't care where the sources are coming from.
Speaker 5 (38:44):
And so as people who know, A, that it is
not one hundred percent foolproof, and we are trying to
do our due diligence and give correct information. Yeah,
that is more work, to go back to whatever we
thought we could use with AI, to actually verify
where this information is coming from. Like it's kind of
(39:05):
funny when we talk about having research help and having
someone else do the research, it feels like more work
because I'm like, I don't know if this is true.
Speaker 3 (39:13):
I gotta go read the whole article again, you know.
Speaker 5 (39:14):
What, I'm like doing more work. And it feels the
same way in this type of conversation with AI, because
I don't completely trust once again and like analyzing the
risk that this is actually true.
Speaker 3 (39:26):
This might sound way off base, but when I was
reading some of the research about why women don't use
AI as much, I couldn't help thinking about what a
lot of heterosexual women who are married to men or
in relationships with men report, where if you ask a
man in your life to do something, what you're actually
creating is like work for you later when you have
(39:48):
to answer the follow up or correct what he's done wrong.
I hate that for us, women who find themselves
in close relationships with men. I hate that for us.
But like, that is what we got, that's what we're
served all the time. We would be lying otherwise. Like I
Speaker 4 (40:06):
Think every woman listening has had that experience, and it's
so frustrating. But I think it really mirrors to
me what women are talking about when they talk about
how AI is used in their work. When you have
to go and correct the mistake, ask again, give the
ball, whatever it's gonna be. That's just adding more and
more to your emotional load. It would have been easier for you
(40:27):
to do it yourself in the first place. Right.
Speaker 5 (40:29):
Also, it does feel like if we get something wrong,
as women or marginalized people, we're going to be penalized
a lot more than when men do. Men are given
excuses and are like, eh, you're fine, as a one
time thing, or they feel like they can brush it off,
or maybe they just feel like they can do it
that way, whereas women have the anxiety of like, no,
we're going to get the worst punishment or backlash as marginalized people.
(40:50):
So maybe that's also that level of confidence that we
don't have, which is delusional.
Speaker 4 (40:55):
Yes. And I think this speaks to something else that
I saw in the Stanford piece and I just thought was
so interesting, which is that women self-report not feeling
confident with how AI works. We might be offered AI
skills training, but what we're not really offered is, like,
actual knowledge of this technology, all the
issues that go into it, like an actual fundamental understanding
(41:18):
and crash course on how it works. The piece cites
a Federal Reserve Bank of New York survey that shows
that when women say they want generative AI training, they
are not just looking for skills. They're signaling awareness of
this technology's opacity and their unwillingness to trust a system
that they don't fully understand. A systematic review of gender
and AI adoption found that women consistently cite things like
lack of transparency and the opaque nature of AI tools
(41:40):
as barriers to trust.
Speaker 3 (41:42):
And so it's exactly what you just laid out, Sam,
And again, that's.
Speaker 4 (41:46):
Not the same thing as not feeling smart enough or
competent enough to understand something. It's reasonable to say,
the people that make AI tools have intentionally designed them,
and they intentionally talk about them, with such a lack
of transparency that I don't trust it.
That is a really reasonable position. That's not unreasonable. I
don't like the framing that this like totally reasonable way
(42:10):
to be is you know, bad in some way, or
that women are shooting themselves in the foot economically by
showing this like reasonable lack of trust.
Speaker 5 (42:20):
Right. And you know, another thing to that is, again,
I keep coming back to how we've seen the mistakes,
and maybe that's what it is, this has been so rushed.
But the constant times that, like, we see things that
are not good are usually really bad for women. Like
I keep thinking about the AI videos, the deepfakes,
that happened with that Twitch streamer that we
talked about. Was it two years ago? I don't
even remember. Because we were really talking
(42:42):
about the fact that things like this are super concerning, because
it does go after women for, like, just being there.
Speaker 4 (42:50):
Yeah, I mean that speaks to what I was talking
about earlier, when we were talking about the way that
Sam Altman was presenting and talking
about the chatbot they were creating, Sky. There are
so many examples of the vibes just being suss for
women in these spaces, right. You know, I make a
(43:12):
podcast about the intersection of gender and technology, and oftentimes
when we talk about AI, we're talking about things like
non consensual sexualized deep fakes. We're talking about things like
Sam Altman, you know, saying people should be able to
have the kind of relationship that Joaquin Phoenix has with
the chatbot in the movie Her. We're talking about things
(43:34):
like Elon Musk creating an anime teenager kind of sex
fantasy chatbot. We're talking about men who build technology, who
fill their teams with other men. They lack gender diversity
and racial diversity or any kind of real inclusion, and
then they talk about women in these like misogynistic ways.
(43:57):
It's not surprising to me that women are being slower
to adopt this technology that is made in such a
misogynistic soup. And as Mara Bolis put it in
that Stanford Review piece, as in financial systems, women are
attuned to the weaknesses in generative AI systems that designers
didn't notice or prioritize things like bias, privacy risks, unreliable
(44:18):
outputs before putting their products out into the world, which
speaks to that rushed quality that you were just
clocking earlier. Some of the industry's more misogynistic offerings,
see Grok's Ani fantasy chatbot, or disturbing policies, see Facebook's
leaked policies on children and allowed content, are enough
to send most users into a catatonic depressive spiral. But
(44:39):
for women, beyond being offensive, such outputs are evidence of
what gets built when developmental teams lack gender diversity. When
women engage with systems that they've been largely left out
of creating, the products can feel foreign, awkward, or even hostile.
And so I think that is absolutely what's going on here.
The vibes around how this technology is made, who makes it,
(45:00):
who is in the room, who has the power
over it, and how that power is being used
and wielded to shape the world. All of that sucks
for marginalized people. So yeah, surprise, surprise, those same marginalized
people are saying no, thank you, y'all can keep it, yep.
Speaker 2 (45:15):
And it's not like we don't have historical, societal, systemic
evidence of this going wrong for us.
Speaker 1 (45:22):
I could.
Speaker 2 (45:22):
I love how men are, like, confident, optimistic. Oh, it's
gonna be great. I'm like, because it's historically worked out for
Speaker 3 (45:29):
You. Yeah, guess what. Not so much over here, not
so much at all. And I wanted to end on this.
Speaker 4 (45:39):
When I was researching this, I found this post on
Reddit that I thought really did a great job of
summarizing my thoughts. So there was a post in the
AskFeminists subreddit, and the post asks, why are women
using generative AI less than men? And the redditor TrooperSJP
answers.
Speaker 3 (45:53):
I'm a university professor.
Speaker 4 (45:55):
If female students are using ChatGPT less than male students,
it is the male students we should be worried about. There
have been studies showing that there is a negative cognitive
impact of using generative AI on learners. This includes loss
in ability to retain information, loss of critical thinking skills,
loss of empathy, degradation of writing and math skills, degradation
of problem solving skills, and on and on. Furthermore, generative
(46:18):
AI is bad for the environment, contributes to environmental racism,
is built on stolen data by corporations who will zealously
guard their own intellectual property while willingly violating yours. And
capitalist excitement in replacing entry level workers with AI is
having an immediate negative impact on young job seekers aged
twenty two to twenty seven, but will have a much
more profound impact on all of us when we lose
(46:40):
a generation of entry level workers who won't get the
experience needed to become experienced workers when the experienced workers retire.
ChatGPT is also inaccurate, bland, and produces terrible work
full of false information. Furthermore, in classrooms, including mine, the
use of generative AI is considered an academic integrity violation
and will result in a zero for that assignment and
(47:01):
forwarding the case to the Student Conduct Board. My students
would not appreciate it if I used ChatGPT to
create lesson plans, to grade their work, and to write
their letters of recommendation. I do not appreciate my students
passing off work they didn't write that has a direct
negative impact on themselves and the world as their own.
So I don't think it is bad that women are
less likely to be cheating by using ChatGPT in
their work than men. I think it is bad that
(47:22):
so many men are wasting their educations by cheating. And
I felt like that really summed up what I was
trying to say about this reframing. I don't like that
this conversation is simply framed as women aren't using AI
and thus sort of almost blaming women for all of
these systemic reasons and reasonable reasons as to why that
(47:44):
might be, and also sort of saying, well, if women
weren't so afraid and also stupid about AI, maybe they'd
be making more money.
Speaker 3 (47:52):
It's not society, it's stupid women.
Speaker 4 (47:54):
I really think that we need to reframe that and say, well,
what are women telling us by not using this technology? Like,
should we really just be saying these women are making
a bad choice. Shouldn't we really be zeroing in on
what women are saying as that pertains to why we
are not so gung ho to be adopting this technology
the way that men are.
Speaker 3 (48:12):
That's sort of my, that's sort of my, my
"so what" in all this?
Speaker 4 (48:15):
You know?
Speaker 5 (48:16):
And I think in the future we're going to
come back and have a conversation about the aligning
of this type of usage with
the growing population of the red pill community. Yes, because
we're talking more and more, like it has become even
more so. We have come to the point
of incels saying being straight and having a relationship with
(48:37):
a woman is gay. Like literally, that's their, like, end goal?
Speaker 3 (48:41):
Have you seen this? So really, the gayest thing a man
can do is sleep with a woman, have sex with a woman.
Speaker 5 (48:46):
Right. Submitting to and pleasing a woman, essentially, was kind of
that kind of take. But I really do wonder
if this conversation of using ChatGPT to replace that
interaction, again with the male loneliness epidemic and how it
becomes violent, if this conversation and this push, and
things like what Sam Altman was trying to do when
he first created that chatbot, as well as what Elon
(49:08):
Musk is trying to do creating his chatbot, is just
opening up a bigger community, a bigger doorway, for more
incel activity and red pill activity. And we're gonna
have to come back and deal with the
consequences of things like AI being a part of this narrative.
Speaker 4 (49:25):
Absolutely, I completely agree. I think those things are totally linked.
And I'm in the middle of doing a lot of
research about the kinds of sexual and romantic attachments that
people create with AI, and I think it's complicated because
I want people to feel like they have nourishing relationships
in their lives. But healthy nourishing relationships come with friction,
(49:48):
like that's what you get from being in relationship with
other humans. And I think the availability, for folks,
to kind of engage in frictionless relationships, where they can
just take and take and take, and extract and extract
and extract, and always get what they want out of
them, in some ways I can see why that feels therapeutic,
but that is not what builds healthy people.
(50:09):
Like, healthy people have to experience relationships with friction. And
that's just the bottom line.
Speaker 1 (50:16):
Yes, yes, well topic for a future day.
Speaker 3 (50:20):
Yeah, I love it.
Speaker 2 (50:23):
I also have to say a dude told me the
other day. He was like, well, I love AI because
it gives me more leisure time. And I was like, well,
wait till you don't have a job, and then you're
not gonna have a way.
Speaker 3 (50:35):
That time, it'll be all the time, and we
won't be able to do all those things because you
have no money. More leisure time.
Speaker 2 (50:48):
Oh all right, well, thank you so much, Bridget for
coming during the stressful time. We always love talking to you.
Can't wait to see you in the new year twenty
twenty six.
Speaker 3 (50:58):
Yes, we're almost.
Speaker 1 (50:59):
There, almost there. In the meantime, where can the good
listeners find you?
Speaker 4 (51:04):
You can find me at my podcast, There Are No
Girls on the Internet, where we explore all kinds of
issues at the intersection of gender and identity and technology
and social media. You can check me out on Instagram
at Bridget Marie in DC, or on YouTube at There
Are No Girls on the Internet.
Speaker 1 (51:17):
Yes, and go do that.
Speaker 2 (51:19):
If you have not already, listeners. If you would like
to contact us, you can. You can email us at
hello at Stuff Mom Never Told You dot com. You can
find us on Bluesky at Mom Stuff Podcast, or on
Instagram and TikTok at Stuff Mom Never Told You.
Speaker 1 (51:29):
We're also on YouTube.
Speaker 2 (51:30):
We have some new merchandise at Cotton Bureau, and we
have a book you can get wherever you get your books.
Thanks as always to our super producer Christina, our executive
producer Maya, and our contributor Joey. Thank you, and thanks to you
for listening. Stuff Mom Never Told You is a production of iHeartRadio.
For more podcasts from iHeartRadio, you can check
out the iHeartRadio app, Apple Podcasts, or wherever you listen
to your favorite shows.