Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
You can kind of think of Lensa as the fast
fashion of self portraits. There Are No Girls on the
Internet is a production of iHeartRadio and Unbossed Creative.
I'm Bridget Todd, and this is There Are No Girls
on the Internet. So if your social media feeds are
(00:24):
anything like mine, you probably have been seeing these
AI images of your friends all over Facebook. And it
really reminded me of a few other early Facebook picture trends,
like when we all picked cartoon characters or famous actors
that we thought we looked like, and all changed our
profile pictures to reflect that. I'm probably dating myself a
little bit here, but early Facebook was full of earnest
(00:46):
stuff like that, and this AI image generation trend kind
of seems like a call back to those days. So
at first I was confused, but then I started to
get intrigued. I mean, who doesn't want to see a
representation of themselves reimagined in an otherworldly, futuristic, cyberpunk aesthetic?
So I tracked down where they were all coming from
(01:07):
and found Lensa, the app behind these viral images.
So there I was finding myself happily clicking around the
app and scrolling around looking for the ten to twenty
pictures of myself that they said they needed to generate
AI images. And it wasn't until I was prompted to
grab my credit card that I pumped the brakes. First
of all, isn't it funny how easy it is, at
(01:28):
least for me to set aside the inclination to ask
questions if everyone else is doing something on social media
and it looks kind of fun or cool. I am
embarrassed about how quickly I went from "this is stupid" to
"I should probably try it." And I feel like this
is exactly the feeling that tech companies are so good
at exploiting in all of us. So what exactly are
the ethical and privacy implications of using AI image
(01:52):
generation apps like Lensa? Engineer and AI ethicist Azaria
Cole Shephard wrote a popular Twitter thread breaking it all down,
starting with the tweet I am here on my soapbox
to ruin your day by talking about why you should
delete Lensa AI and stop blindly using image based
AI generator apps. Hi, my name is Azaria Cole Shephard,
(02:13):
but I don't really have a fancy title. I just
graduated with my degree in electrical engineering with an
emphasis in AI and implicit bias. You're an electrical engineer,
you have a real passion for ethics in AI, uh, and
implicit bias in AI. How did this become something
that you were interested in pursuing professionally and really thinking about?
Early in my college career, I became really really interested
(02:34):
in AI, and we talked about, like, Brave New
World and all of the different robotic developments and stuff
in some of my classes, and I started to write
essays criticizing basically how technology perpetuates violences against certain communities,
how lack of regulation also perpetuates that violence, and so
(02:54):
on and so forth. And so when it came time
to pick my senior research project as well as my
electives, all of them were centered around algorithmic development
and addressing implicit bias in AI, and also making AI
more accessible if it's going to be used, despite the
fact that it has extreme and, what I feel at
this point in time, are very irreparable flaws. What first went
(03:19):
through your mind when you woke up in the morning
and saw all these people on social media had
fed their images to this app, where they were getting
these AI generated cartoon renderings of themselves? Like, what were you like,
"Hate to be a Debbie Downer, guys, but this has
some issues"? So it was kind of multifaceted because, one,
(03:41):
like I said, a lot of my research is about
addressing implicit bias and so like increasing access to images
of diverse groups in order to enhance the ways in
which people engage with facial recognition technology and that kind
of stuff, to increase accuracy, to decrease frimalization and so
on and so forth. However, um, I think that there,
(04:02):
like I said, there's a lack of regulation as is,
and so I don't even think that AI is in
a place where it should be allowed to function the
way that it does. And so when I saw people
using the app, at first I was kind of like, Okay,
this is kind of crazy, very trendy. I asked somebody,
I was like, what app did you use? They told me,
and I just kind of said okay and left it there.
And then over the course of the next couple of days,
(04:24):
I just started doing research and like really digging into
what people were actually granting. And I belonged to a
couple of different AI development groups, even the AI art
development group where people are generating art using prompts, and
I think that that's very different than using your photos
to generate images on apps like this one. And so
(04:46):
as a result, um, I started talking about it on,
like, my Close Friends on Instagram, and I was just like,
this is pretty crazy. I think that people should just stop
using this app. Y'all, do me a favor, don't use it.
And people were like, oh, what happened? And like some
people asked questions, and so I started like expanding on
my opinion and explaining my findings to people, and it
kind of turned into, oh, where can I repost this?
(05:07):
So then I went to Twitter and I posted on
Twitter and yeah, that's how we got here. And I
still think that it's very chaotic. Um, there's a lot
of people who are mad over this Twitter thread, and
I don't even think it's that deep. I think it's
literally just me sharing information that you can find doing
basic research and if you decide to google things, like
(05:28):
people oftentimes jump on the bandwagon and don't do any reading.
And I feel like thirty minutes of reading in this case
takes people so far, so much
further than where they are, because they've taken things at face value.
Most people who are interacting with this tweet didn't even
read the terms of service, and so that creates a
whole other question as to how seriously people take their
(05:49):
data privacy and how people interact with different apps on
social media, and which ones are worth engaging
with and which ones aren't, versus
what's the return on investment, or is there a return on investment overall
when you're using these apps? Oh god, that's such an
interesting point. Like, first of all, we are big
proponents of reading those terms of service. If they
(06:12):
are difficult for, you know, your average person who's not
a lawyer to understand, might there be some reason for that?
You know? And also, I think your point about return
on investment, I have this theory. This is just
my opinion that when everybody is posting something on social media,
(06:32):
apps like Lensa are banking on people wanting the return
on investment, being able to be part of that,
being able to do what their friends are doing, and
like being part of the trend. And so I think
even the inclination to just stop and be like, is
this worth it? Is doing what everybody else is
doing on social media and wanting to be part of
(06:53):
the, you know, part of the trend, which I understand,
like I get caught up in that too, is that
worth the potential risk for what I'm doing? And like,
I think that things move so quickly on the Internet
that we really have created a situation where we're inviting
people to not take that minute to actually think about
if the return is going to be worth it. I
definitely agree. I don't think that people take time to
(07:14):
contemplate that at all. And I think that that was
Like one of the biggest questions that I got throughout
the day today was like, well, you use apps like
Twitter and Instagram. It's like, me using these apps is
not an endorsement of these apps, it's not an endorsement
of their politics, it's not an endorsement of their owners,
so on and so forth. However, I can say we
use apps like Twitter and Instagram to communicate. When you're
(07:36):
looking at apps like Lensa and you look at the
fact that people are just data dumping, basically just dumping
their faces in here, paying somebody to process them and
then regurgitate an image that's either stealing from an artist
that actually painted whatever the image is, or is using it
to basically broaden and train their AI and as a
(07:57):
result eventually sexualize or violate the person whose images it is.
I just think that the return on investment is very different. Like,
I don't see what the purpose of paying somebody to
violate you or to take your intellectual property is, personally.
And I know that we give up plenty of information
by being on like I said, these other apps, So
this is not an endorsement of them at all, but
I think that if you're going to use something, at
(08:17):
least have a reason besides it looked cool. Yeah, So
what are some of the ethical issues involving using apps
like Lensa to do this kind of AI generation? Um, so,
some of the biggest things that I noticed, like particularly,
were, one, the fact that when we're talking about, in
(08:37):
the world of, like, Me Too, or in the world
of protecting ourselves against sexual assault and violation and that
kind of stuff, the conversation around revenge porn is a
really important one in this case. When you think about
the lack of legislation around AI, think about how
somebody can upload any image because there's no filter on
there that says, hey, you can't upload this. Um, it
(08:59):
makes it easy for AI renderings of naked bodies to happen,
and somebody could absolutely use that to blackmail or harass
somebody who they've been involved with in the past. And
I mean, this isn't something that's new. People will do
this with deepfake technology already. And so when you
think about an app that's readily available at everybody's finger
(09:19):
tips that doesn't require you to have special knowledge like
deepfakes do, you think about the danger that
that perpetuates and the door that that opens for people
to be violated and to be exposed in ways they
didn't consent to. And so I think consent is a
really big portion of this. Also, the sexualization of people's
images at the hands of Lensa itself, like the fact
(09:40):
that people can submit headshots and it returns images with
boobs or with no shirt on, or that kind of
assume somebody's proportions or whatever the case may be, creates
a whole different problem because nobody consents to do that,
and since that's the property of Lensa based on their
terms and conditions, those images can be misused however they
(10:01):
see fit. And so we think about how this infringes
on people's personal autonomy, and we discourage people from using
this because people talk a lot about agency, all without
reading into how they're stripping themselves of their agency when
they participate in different trends that are present online. Another
thing is like when you look at the augmentation of
(10:22):
faces and so on and so forth, it will create
a lot of insecurity. And so, I saw in the thread,
some people were talking about how they
felt insecure about the results that they got from this
app, in both directions. Some felt that it overly enhanced
their beauty and they couldn't match it, and others felt
like it made them ugly. Body dysphoria and body dysmorphia
may increase as well, because also there's no way for
(10:44):
this app to know your gender, and so what happens
if somebody's gender fluid and they get misgendered using the app?
A lot of these different violences that people talk about
wanting to mitigate, they've opened the door for AI
to perpetuate them, which creates a whole different problem that
I don't think a lot of people take the time
to contemplate or think about. Yeah, I was reading a
piece in Wired by Olivia Snow, who wrote really compellingly
(11:06):
about the fact that apps like Lensa essentially made a
sexualized image of her as a child, and sort of
like, the kind of problems therein, that an
app would do that. Definitely. Um, I think that that
was the other thing that was really alarming to me,
is when you look in, like, their terms and conditions
(11:26):
and their terms of service,
it talks about how you shouldn't upload images of minors
because there's risk of them being sexualized. And I think
that's really interesting, because why is your algorithm programmed to
do this? What kind of training did you give your
algorithm where it sees a child's face and automatically sexualizes
the image? Additionally, um, why do the protections against sexualization
(11:50):
stop at minors? Why is that where your concern is?
And I think that that's also something that should be
really alarming for people. Let's take a quick break.
(12:12):
And we're back. Back in twenty sixteen, Microsoft introduced an AI chatbot called
Tay to Twitter. Microsoft's engineers trained Tay to have a
basic grasp of the English language. It was meant to
learn from the Internet, and eventually the plan was for
Tay to sound like the Internet, using quick memes and
jokes and Internet references, to be, as Microsoft said, the
(12:34):
AI with zero chill. But just a short sixteen hours
after Tay was introduced to Twitter, Tay started tweeting inflammatory
things like, according to the engineering and tech magazine IEEE Spectrum,
"I fucking hate feminists and they should all die and
burn in hell" or "Bush did nine eleven and Hitler would
have done a better job."
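To make the failure mode concrete, here is a minimal sketch in Python of the kind of learn-from-whatever-comes-in loop described above. It's a toy bigram model, purely hypothetical and nothing like Microsoft's actual system, but it shows why a bot that updates itself on unfiltered input ends up echoing its worst inputs back out.

```python
import random
from collections import defaultdict

# Toy bigram "chatbot" that learns from every incoming message.
# Hypothetical illustration only -- not Tay's real architecture.
class EchoLearner:
    def __init__(self):
        # Maps each word to the words observed to follow it.
        self.following = defaultdict(list)

    def learn(self, message: str) -> None:
        # No moderation step: every message, however toxic,
        # goes straight into the model.
        words = message.lower().split()
        for first, second in zip(words, words[1:]):
            self.following[first].append(second)

    def reply(self, seed: str, length: int = 8) -> str:
        # Generate by random-walking the learned bigrams, so the
        # output is always a remix of whatever users fed in.
        word, out = seed.lower(), [seed.lower()]
        for _ in range(length):
            options = self.following.get(word)
            if not options:
                break
            word = random.choice(options)
            out.append(word)
        return " ".join(out)

bot = EchoLearner()
bot.learn("i love learning from the internet")
bot.learn("the internet is a kind place")
bot.learn("the internet is a hateful place")  # unfiltered input
print(bot.reply("the"))  # may surface "hateful": garbage in, garbage out
```

Without a filter between learning and replying, coordinated users can steer a model like this anywhere they want, which is roughly what happened to Tay.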
(12:56):
Azaria says that this is a good example of what happens when everybody can have
access to certain AI tools. You mentioned early on
that you actually don't feel like certain AI technologies are
in a place where just anybody should be able to
play with them like this, necessarily. Do
I have that right? That's correct. I mean you can
look back at just the instability of AI itself all
(13:16):
the way back to Microsoft's Tay and how Tay got
on Twitter and went on a racist and homophobic rant
without prompting and was, like, using slurs and ableism
and a whole bunch of other things several years ago,
and there was no way to check that, And I
mean they had to pull it, but they had to
do a lot of clean up for the rhetoric that
(13:37):
it was spewing. You also have to look at things like
when Google Photos was categorizing black women as monkeys and
how that perpetuated violence, and all of these different studies
that have come out that talk about the lack of
performance of AI and its inability to be ethical or
to not be biased in the way that it functions.
(13:58):
And when you think about that, then you have to
ask yourself, why should we allow people who aren't trained
in ethics to freely have access to something that perpetuates
harms like this. I think that one of the most
interesting things about this whole Twitter thread is probably the
fact that when I talked about how somebody could upload
(14:18):
your nudes, somebody said, you're focused on the wrong thing.
That's free nudes for all of us. And I'm like,
what? And so that was also, like,
one of my really big hesitations about bringing this conversation
up is like, Yeah, there are going to be people
who gain education from this, and then there are the morally
unsound who see this as an opportunity and as inspiration
(14:38):
to kind of explore what that avenue is and to
figure out how they can manipulate it in order to
create these sick images. And I mean we already see
with deepfakes the influx of, like, deepfake porn
and that kind of stuff, and how that's been used
to blackmail people, and so thinking about how now that
it's on everybody's phones, it becomes really easy for people
to violate the terms of use and there's no filter,
(15:01):
like I said, on this app. I also think that
them saying that this is not acceptable behavior on their
app is hilarious when their app literally does it
without the consent of users. Like, all of the women
that have posted talking about how they didn't consent to
being naked, and that's what they got back from Lensa,
should be alarming to people. It should
(15:23):
be completely alarming. But instead there are people who are
dismissing it as, what, it's just imagination, it's just AI.
And then we're not even going to talk, or, I mean,
we can't even talk about the intellectual property rights violations that
come in when we're talking about how there's so many
different artists who would be willing to do portraits for
people that are accurate and ethical for reasonable prices. But
(15:46):
instead you can kind of think of Lensa as the
fast fashion of self portraits. People want fast and accessible
without thinking about the implications of the harms, and so
if it directly benefits them, they don't think about
how it impacts anybody else down the line, or how
it might even impact them down the line, because immediate gratification
is what they're seeking. Yeah. I saw somebody tweeting
(16:08):
about how when they used the Lensa app, it generated
very realistic imaginings of her breasts, and she was like,
I didn't consent to have this, like, what's happening?
And you know, I think even
beyond things like nudes, something that I noticed is that
the AI really created these images that adhered
(16:32):
very traditionally to sort of standard beauty standards.
Like, they lightened skin tones, they slimmed noses, they made
the women much more sort of traditionally sexualized.
And I think, like, what does
that tell you about the limits of AI as it's
currently being used? Because I would imagine that if you have,
(16:53):
you know, limitless ability to render things, having
the things that are rendered adhere
to such traditional understandings of the limits of gender, the limits
of beauty, whatever, these very conventional attitudes, it is so
disappointing, and I feel like it
(17:13):
says something about how this technology is being used.
So I think that we should definitely talk about
the implicit bias aspect of it.
And this is something that I'm really passionate about.
Technology, or algorithms, hold the bias of the programmer.
So whoever's training them has the biases that
are being reproduced and regurgitated in these images. So the
(17:35):
hyper sexualization of women, that's because most likely the person
training these algorithms is somebody who's looking for hypersexuality.
Like, if they're inputting images of
women that look promiscuous, of course it's going to regurgitate that.
Like, if they're inputting images of only white women, or
only Asian women, or only women that look a certain way,
(17:58):
whenever it's faced with an image that does not look
like that, it's going to try to do its best
to emulate those images while also trying to stay true,
allegedly, to what the original image looked like. But most of
the time, when you see black people who have used it,
their skin is significantly lightened, or their hair texture is changed,
or a lot of different things, and so it's not
(18:18):
in alignment with people's actual characteristics, which I think is
also really weird, because why would you want to pay
for something that people have repeatedly told you doesn't give
accurate renderings of yourself, and you're wasting your money?
I just don't get that one either.
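The mechanics behind her point are worth spelling out: a trained model can only pull inputs toward the distribution it was fit on. Here is a minimal, hypothetical sketch of a toy "enhancer" that nudges any face toward its training-set average; the two features and all the numbers are invented for illustration and have nothing to do with Lensa's actual model.

```python
import numpy as np

# Toy "beautify" model: its only notion of "enhance" is
# "move the input toward the training-set average."
# Hypothetical features: [skin_tone, nose_width], scaled 0..1.

def fit_enhancer(training_faces: np.ndarray):
    mean_face = training_faces.mean(axis=0)
    def enhance(face: np.ndarray, strength: float = 0.5) -> np.ndarray:
        # Blend the input toward the learned average.
        return (1 - strength) * face + strength * mean_face
    return enhance

# Skewed training set: mostly light skin tones, narrow noses.
skewed_data = np.array([
    [0.15, 0.30],
    [0.20, 0.25],
    [0.10, 0.35],
    [0.25, 0.30],
])
enhance = fit_enhancer(skewed_data)

# A darker-skinned, wider-nosed input gets "lightened" and "slimmed,"
# not because anyone wrote that rule, but because of what was in the data.
face = np.array([0.80, 0.60])
print(enhance(face))  # approximately [0.49, 0.45]: pulled toward the mean
```

The same logic scales up: swap the average for a deep generative model and the pull toward the training distribution gets harder to see, but it's still there, which is why outputs drift toward whoever dominated the dataset.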
But more importantly, I think that if we're going to talk about how
this is regulated and how this app itself perpetuates like
(18:42):
all of the modern stereotypes of what people look like,
then we have to talk about implicit bias and how
it's inherently anti black and ableist and anti trans and
a bunch of other things all within like the conversation
of what the biases of these algorithms are. I think
that's why the simplest way to put it is that
(19:02):
this is a much deeper black hole than just like
the Twitter thread. It's so much deeper than that, and
it requires so much more conversation and critical understanding. And
I think the other thing is people don't believe that
reading is fundamental. Like if people picked up a book
or googled for five minutes, they could find out
(19:22):
all of the information they needed to tell them why
this was a bad idea. But they don't. They see
a trend and they hop on the bandwagon and don't
think about what it means for them, and then they
act shocked or surprised afterwards like, oh, nobody told me,
Or they label people who bring up these conversations as
Luddites who are anti technological development and that kind
(19:46):
of stuff. And I think that that's ridiculous as well,
because most of the people who are critiquing this are
people who are in the industry. More after a quick break.
(20:07):
Let's get right back into it. Something that we talk
about a lot on the show is that it really
seems to be marginalized people, in this case, black women
who are doing a lot of the, I don't know,
questioning of technology, like AI, and the way that
it is being used to cause harm. Do you find
that to be the case, like in the field, the
(20:27):
people who are sort of asking the probing questions about
ethics in AI, do you find that that tends to
be people who are marginalized or even more specifically like
black women. I definitely do. I think that oftentimes we're
expected to be the ones who bear the burden of
accountability when it comes to how we engage with these technologies.
I think that in a field like electrical engineering, for example,
(20:49):
when there's less than three percent of us in the field,
oftentimes we are doing the legwork. We are
talking about the anti blackness, we are talking about the
hyper sexualization of black women. We are talking about the
miscategorization and the other biases that are present. And I
don't think that there's anybody else that really advocates for
these things because it's not a part of their interest, right.
(21:12):
So these are the same people who are programming and
developing these algorithms. Of course they don't see a flaw
in the ways in which they function. And I think
that this goes all the way back to just like
my senior research when I was in college, when I
was working on my project and I had a professor
who told me that implicit bias didn't actually exist and
that I was creating a problem in my head. And
then when I provided evidence of this, he tried to
(21:33):
gaslight me and told me that it was made
up evidence, until all of a sudden he decided it
was good research. And then he said that he was
going to publish his own study on this using my
research and my results, which I thought was really wild.
And so I think that oftentimes Black women are expected
to be like the mules. We're supposed to do all
of the heavy work, and then somebody else gets the gratification,
(21:54):
somebody else gets the credit. And I think that's true
across all different disciplines in life. Just like a lot
of times black women are the trailblazers, especially as the
most educated or college educated demographic in the nation, we
are oftentimes sought out to provide the knowledge, but then
not credited or given the resources in order to expand
(22:16):
on our knowledge sets because we've given people what they
want and they've milked us for what we have to offer,
and then they think that they can discard us. I
think that that's so, I mean, who gets the credit
is also who has the power, right? And so your
professor being like, no, no, no, that doesn't exist,
and then when he feels sufficiently like it does exist, oh,
now I'm going to write my own paper about it,
(22:37):
so that I'll get the credit for pointing out
something that he spent the last couple of weeks gaslighting
you about, saying it doesn't exist. You know, it's like
that is clearly a push pull about who
has power, who has legitimacy, and that matters so much
in these fields. It definitely does, especially when you have
to lobby for a seat at the table in the
(22:58):
first place. It's like, AI in itself is already
something that's, like I said, very biased, and it's not
very inclusionary, even in terms of who's developing the algorithms.
And so you have organizations like the Algorithmic Justice League
that are working tirelessly, which was also founded by a
black woman. You have, um, what else? You have
(23:19):
AJL, you have Black in AI, and a
whole bunch of other different organizations that are trying to
bring attention to the ethical imbalances that are present within
AI and development. And the work is taken and people
celebrate it. However, oftentimes the people who are behind the
work are overlooked. And I don't think that there's really
ever space for conversations around this unless we're creating the
(23:43):
conversations ourselves, which is pretty terrible and not
all that encouraging. And so I think that now more
than ever, this is when the black women who are
in the field really have to rally around each
other and rally around our youth in order to develop
a wave of people who are equally as passionate and
(24:04):
equally as qualified to create the ripples in this industry,
because nobody else is going to do it if we're
not facilitating the conversations. A-fucking-men. Are there alternatives
for folks who might be interested in checking out AI
and, like, AI art, but doing so in a way that
might be safer or less harmful, or might have less
(24:26):
ethical implications. I personally will not list any other apps
because I feel that there's risks associated with all of them,
and like everything, I mean, every app that we use
has a risk. But I'm not necessarily here to endorse
any AI based art generator. I would encourage people to
(24:50):
learn about building algorithms themselves, learn about how to input
something and get an output on your own, as opposed
to trusting a system and handing over your data to somebody.
There's plenty of beginner algorithm development workshops and different
tutorials that you can access online that you can do
on your own. Um, and I encourage people to learn
(25:13):
the fundamentals of it actually before trying to get results
that they can post on social media. That's just me personally.
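For anyone who wants to try that advice, here is a minimal sketch of the "input something and get an output" idea she describes: a tiny two-layer network written out by hand in Python with NumPy. The weights are random placeholders rather than anything trained, so this is a learning exercise under stated assumptions, not a working model.

```python
import numpy as np

# A tiny two-layer network, written out by hand so every step is visible:
# input vector -> linear layer -> nonlinearity -> linear layer -> output.
rng = np.random.default_rng(0)

# Made-up weights; a real model would learn these from data.
W1 = rng.normal(size=(4, 3))   # layer 1: 3 input features -> 4 hidden units
b1 = np.zeros(4)
W2 = rng.normal(size=(2, 4))   # layer 2: 4 hidden units -> 2 outputs
b2 = np.zeros(2)

def relu(x):
    # The nonlinearity between layers; without it, two linear
    # layers collapse into a single linear map.
    return np.maximum(0, x)

def forward(x: np.ndarray) -> np.ndarray:
    hidden = relu(W1 @ x + b1)   # layer 1 output
    return W2 @ hidden + b2      # layer 2 output

# "Input something and get an output on your own":
x = np.array([0.2, 0.5, 0.1])  # e.g., three pixel intensities
print(forward(x))               # two numbers you can trace by hand
```

From there, concepts like the bounding boxes she mentions are more of the same: a detector's output layer emits four numbers per object (x, y, width, height) instead of two.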
I know that people in the thread did list some
of their, like, ideal alternatives. I think that apps like
Midjourney that people are using for prompt based image
generation have their own risks and have their own harms.
I also think that if you look at some of
the art, it's really creepy, and you're better off just
(25:35):
paying people who paint for real to do these images
like some of the, have you seen AI hands and
fingers? Creepy. And they are very creepy. And so I
think that, like I said, people should just explore like
the fundamentals of algorithms, understanding layers, understanding outputs, understanding bounding
boxes, simplistically. And I think that knowledge about that
(25:58):
would one make them more intentional when it comes to
how they choose to interact with AI, but then also
give them the skills that they need in order
to create the art that they're seeking to create on
their own without infringing on the rights of other creatives
that are working tirelessly to paint real images or to
create real digital art without using AI generators. I just
(26:20):
think that there's a way that we can do this
that doesn't violate people, or there's ways that people can
participate without violating other people. And if you have a
genuine interest in learning it, then I encourage people to
seek that out as opposed to trying to hop on
the fast wave of using apps like Lensa or whatever
other AI generators are really popular right now. Yeah, I
(26:42):
will say, you know, you brought up the
intellectual property rights issues. I did see a couple of
the Lensa generated images where if you looked real close
in the bottom corner, you would see a watermark or
like a signature. And so we have this
idea that, oh, this is just being generated by, uh,
computer artist robots somewhere, and it's like, no, if it
(27:04):
has a signature, they've just lifted this image from another
human artist and maybe changed a little something about it,
and now they're giving it back to you and calling
it AI art. Absolutely. And I think that that's
another big thing right now is that a lot of
artists are fighting for their rights to their own images
after their artwork has been fed into the AI generators,
(27:25):
and the generators are producing something that's a little bit
more enhanced, and then people are publishing it and selling
it as their own AI art. I mean, the
original artist isn't getting any of the credit and is being
robbed of their IP and their copyright because, well,
"AI generated it," and so they're allowed to evade responsibility
for infringing on that person's work. That's another problem that
(27:50):
I think correlates to all of this is that the
output images technically don't have owners, like, yes, the app
owns it, but there's no individual person that gains rights
over these images. And that means that there's no identity
associated with who maybe trained the AI to turn people's
(28:11):
images into nudes, or who input somebody's nudes into these
images or publish them accordingly. I think that it allows
people to dodge responsibility a lot, and it violates other
people who are actively trying to establish themselves because they're
passionate about real art. Because people want to participate in
(28:33):
a fad that doesn't really have any net benefit besides
fast images. But I don't understand why people would want
images that violate the rights and violate ethics. How can
folks get a better understanding of AI? I just encourage
people to read books, and if you
(28:53):
want to learn more, read, read. I think that there's
so many different studies that are out right now. So
many different people have been screaming at the top of
their lungs how we have to be cognizant of the
ways in which AI is used and how it's propagated
in society, and a lot of people have not done
the reading and are mindlessly participating in different social media
(29:15):
trends without understanding how they perpetuate the harms that they
claim that they want to mitigate, whether it be child
pornography or different industries that infringe upon the rights and
the autonomy of other people. I don't think that people
are paying acute attention to how they are exacerbating these problems.
And so when we facilitate a conversation around that and
(29:36):
we are intentional about learning about these things, I think
that it creates a more safe interaction with AI. I
won't say it makes AI a safe place, because I
don't think that that exists at this time, but it
does make our interactions with AI a little bit safer. Well,
I'm so glad that you're out there like poking everyone
(29:57):
to be a little bit more informed and asking the
questions to make this very very powerful technology a little
bit more ethical. Is there a place where folks can
follow all the work that you're up to. So I
have a tech Instagram that I'll be posting content kind of
like this on and continuing this conversation on. It's called
Polysacology. P O L Y S A C O
(30:24):
L O G Y. Sorry, I can't spell. And then
my regular at is at melanin elevatin: M E
L A N I N E L E V A
T I N. Got a story about an interesting thing
in tech, or just want to say hi? You can
(30:45):
reach us at hello at tangoti dot com. You
can also find transcripts for today's episode at tangoti dot com.
There Are No Girls on the Internet was created by
me, Bridget Todd. It's a production of iHeartRadio and
Unbossed Creative. Jonathan Strickland is our executive producer. Tara Harrison
is our producer and sound engineer. Michael Amato is our contributing producer.
I'm your host, Bridget Todd If you want to help
(31:05):
us grow, rate and review us on Apple Podcasts. For
more podcasts from iHeartRadio, check out the iHeart
Radio app, Apple Podcasts, or wherever you get your podcasts.