
January 23, 2023 46 mins

Today is a deep dive episode about racism and A.I.! Artificial intelligence is amazing technology, but it is not immune from prejudice influencing how it operates, much like other parts of our lives. We'll explore some of the ways racism pops up in AI and look at some of the people and organizations that are working hard to make AI more equitable and inclusive.

PLUS we are excited to welcome back to the podcast the wonderful Jayme Alilaw! Jayme Alilaw is a creator of many passions with a grounded mission of empowerment, edification, and education. Jayme engages the world as an opera singer, leadership and life coach, entrepreneur, educator, and public speaker. Based in Atlanta, Jayme is an Army veteran and mother. Follow her on IG @sangmsjayme

 

Check out our comedy videos @markkendallcomedy

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Ridiculous News is a production of iHeartRadio and Cool Cool Cool Audio. Yeah yeah, yeah, we've got amazing and crazy topics to dig into, y'all, so now tune in to Ridiculous News. We give you the news, breaking the rules of broadcasting and all sorts, all while, of course, serving up that brand of up

(00:20):
beat journalism, the strange and unusual stories, and well, we got 'em. It's all about ridiculous news. Everywhere we go, it's all about Ridiculous News over here. Hey, everyone, welcome to Ridiculous News, not your average news show. We cover stuff you didn't realize was news, from the wild and funny, to the deep and hidden, to the absolutely ridiculous. I'm Bill Worley, an Atlanta-based filmmaker and comedian, and I'm

(00:42):
super excited about the topic today because, as we all know from movies and film, AI never has any issues. It always works perfectly. Everyone, um, my name is Mark Kendall. I'm an Atlanta-based comedian, and Bill, I gotta say, you know, I love your take on what sci-fi movies taught us about AI. But real talk, while I'm cautiously optimistic about, you know, technology and AI, and

(01:05):
where it can go, you know, at the same time, you know, those sci-fi movies, they do have me skeptical. And y'all, today is a deep dive episode about racism in AI, as well as the people and organizations that are working hard to fight against that. Uh, so artificial intelligence, uh, you know, it's this, it's this amazing technology, but it's not immune from prejudice influencing how

(01:26):
it operates, much like many other parts of our lives. So we're gonna look at ways in which racism pops up in AI through different news headlines, as well as, like I said earlier, looking at the people, uh, trying to make things better and more inclusive and equitable. In addition to that, we are joined, uh, by our special guest, who has joined us on the podcast before. We're

(01:47):
so happy to have her back. It's Jayme Alilaw. And so, Jayme is a creator of many passions with a grounded mission of empowerment, edification, and education. Jayme engages the world as an opera singer, leadership and life coach, entrepreneur, educator, and public speaker. Based in Atlanta, Jayme is an Army veteran and mother. Welcome, Jayme! Yay! I'm so happy to

(02:12):
be back. We're happy to have you back, Jayme. And we'll kick things off the way we do every time we have a guest, which is a segment called Giving Them Their Flowers. And so this is a moment where we give you a quick compliment that you cannot return. So Jayme, I'll start it off. I want to thank you for joining me on stage at City Winery in Atlanta. We got to do some improv, you sang some music. You were phenomenal as always, and

(02:33):
I'm already looking forward to the next one. I'll ask you about what you're doing very soon to see if you want to do another show. So thank you. Okay, thank you. Yeah, it's always so much fun to be on the stage with you, Mark. Yeah. And to piggyback off of that, I was at that show in the front row, and uh, Jayme, I've seen you perform before, you know, we've worked together before,

(02:53):
for sketch stuff, but I have never seen you do that bit that you did there, and it blew my mind. It was so amazing, and I'm sure you get that feedback from anybody who sees it, but it's just so awesome to watch. And also, it's just fun to catch up with you, Jayme, like even before we start recording. It's always fun to chat with you. Excited to have you on the podcast. Always good to see you.

(03:16):
Absolutely, good to see you too. I like you, Bill. So next up, y'all, we have some Ridiculous News Nibbles. So these are headlines that are centered around the theme of racism and AI, our topic for today. And so this first headline comes from BBC News, uh, and the headline is Facebook Apology as AI Labels Black Men as Primates.

(03:37):
And you've probably seen headlines like this before a couple
of times throughout the years. But Facebook users who watched
a newspaper video featuring black men were asked if they
wanted to quote keep seeing videos about primates end quote,
And that was from an artificial intelligence recommendation system. So
Facebook told the BBC it was clearly an unacceptable error.

(03:58):
It disabled the system and launched an investigation. They went on to say, we apologize to anyone who may have seen these offensive recommendations. And so this has been a pattern where you have errors popping up, and it's because of racial bias embedded in the AI technology. Facebook previously announced a new quote inclusive product council and a new

(04:21):
equity team for Instagram, and that would hopefully examine, among other things, whether its algorithms exhibited racial bias. So the primates recommendation was an algorithmic error on Facebook and did not reflect the content of the video, and that was what a representative told BBC News. Uh, so the article goes on to say, uh, that we disabled the

(04:43):
entire topic recommendation feature as soon as we realized this
was happening, so we could investigate the cause and prevent
this from happening again. As we have said, while we
have made improvements to our AI, we know it's not
perfect and we have more progress to make. And the
reason I like this as a first headline, to kind of like start our conversation about, you know, racial bias and

(05:04):
AI, race in AI, is that these are kind of typical stories that you will see pop up. You know, a social media platform, a website, will introduce a form of technology, a form of AI, and then there's, like, racial bias that comes with it, and then they have to quickly backtrack, like, oh, we didn't know it was gonna do that, you know. And then they're like, we're trying to figure out how to

(05:25):
solve it. And so I think that that sets up
a lot of these other stories we're going to go into. Yeah,
I mean, to me, this just speaks to the lack of diversity in the folks that maybe designed this program, right? And I think that's probably something that's going to come up a bit, is, you know, certainly, I think if there were more people of color involved in the programming and the testing, et cetera, of things like this,

(05:50):
then those biases could be realized ahead of time. Um. So it's just one of these things where, you know, the obvious thing is to make sure this works, that any kind of AI works regardless of what you look like, regardless of your skin tone. Um. And one way to do that is, I think, looking at a lot of the AI

(06:10):
that's designed right now, and when you look at these tech companies, that's a lot of white male dudes, right. And so I think that's one of the big problems as well.
I'm sure we'll discover as we continue to talk about
this stuff. So Bill, yeah, thanks so much for sharing that point. I think that's really great. Jayme, any thoughts on the article? Yeah, I mean, um, one, like, wow,

(06:32):
like, do you want to continue? Because what you've been viewing is primates? So do you want to keep it up? It's like... that one is like, okay, wow. But, um, you know, actually it reminds me of a post from someone who is a friend and somewhat of a mentor, um, in a lot of the work that

(06:54):
I do around um equity and racial justice. And she
commented on something and she said, love is not the
solution to racism because hate is not the problem. She said,
the problem is white supremacy. And when I think about this,

(07:17):
and and you know, to your point, Bill, it's like
white supremacy can be so blinding that you don't even
like consider the fact that you are literally typically in
a room with just a bunch of older white guys
making these decisions because white supremacy says, of course, this
is the way that it happens. And white supremacy says,

(07:38):
of course, white is the standard, and everything else is
a deviation from that. So it's such a huge, gaping
blind spot that doesn't get questioned because white supremacy says
that white is supreme. So I don't hate you. I'm
not a bad person. I'm not you know, like the

(08:00):
biases that are written into these systems. It's not because
people are bad or mean, or hateful or ignorant. It's
because of the myth of white supremacy. Yeah, yeah. And I think, you know, people trying to solve that issue leads us into this next article, because when you have a room full of just white

(08:21):
faces in a white town in the middle of wherever you are, you know, then you become blinded, or you're not thinking about other people, which is terrible. And so a lot of people try to bring in people of color, and they do that by hiring, uh, for diversity. And this next article, from Forbes by Madeline Halpert, is titled AI-Powered Job Recruitment Tools May Not

(08:44):
Improve Hiring Diversity, Experts Argue. It says that job recruitment tools that claim to use artificial intelligence to avoid gender and racial biases may not improve diversity in hiring, and could actually perpetuate those prejudices, researchers within the University of Cambridge argued Sunday, casting the programs, which have drawn criticism in the past, as a way of using

(09:06):
technology to offer a quick fix for a deeper problem.
The researchers, two professors at Cambridge, um, argued that these tools may actually promote uniformity in hiring because they reproduce cultural biases of the quote ideal candidate, which has historically been white or European males. This is a quote from

(09:28):
Eleanor Drage, one of the co-authors, and she said, by claiming that racism, sexism, and other forms of discrimination can be stripped away from the hiring process using AI, these companies reduce race and gender down to insignificant data points rather than systems of power that shape how we move through the world. And that's... yeah, anytime you

(09:48):
take the humanity out of something and just rely on
these statistical data points, this comes up over and over again. And you know, these articles that Mark, that you pulled, and our wonderful researcher Casey, uh, pulled information from as well, it's like, this is a problem I'm not sure how we can overcome, because when you break us

(10:09):
down into data points and things like that, it becomes faceless.
There's so much nuance to the world that that doesn't see.
And one of the interesting things about this article, too, is it goes on to say that Amazon, um, announced a while back it would stop using AI recruiting tools to review

(10:29):
applications after it found it, and this is from the article's quote, strongly discriminating against females. And that's because the computer models it relied on were developed based on resumes submitted to the company over the past ten years, which came from majority male applicants. And so it's not, you know, it's
like these computers are just using the data that we're

(10:52):
giving them, and we're not realizing, or these particular people aren't realizing, the biases in that data already, and just, I feel like, the biases in data in general. Yeah, to me, it's just like a demonstration of just how much we don't get it, like what is actually going on.
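To make the mechanism the hosts are describing concrete, here is a minimal, hypothetical Python sketch, with synthetic made-up data rather than anything from Amazon's actual system, of how a model trained on biased historical hiring decisions learns and reapplies that bias:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)            # the signal hiring should rely on
group = rng.integers(0, 2, size=n)    # 0 = majority group, 1 = minority group (hypothetical labels)

# Historical decisions: skill mattered, but group 1 was penalized.
hired = (skill - 1.2 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

print("weight on skill:", round(model.coef_[0][0], 2))  # positive, as expected
print("weight on group:", round(model.coef_[0][1], 2))  # strongly negative: the learned bias

# Two equally skilled candidates from different groups get different odds of "hire".
print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])

Nothing in the code says discriminate; the skew comes entirely from the labels the model was given, which is the point the hosts and the Cambridge researchers are making.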

(11:14):
Because to your point, Bill, that was exactly what came to mind immediately. It's like, this is a human thing,
and so getting away from humanity is not going to
bring the solution that is going to be sustainable. And
there's no way around the deep, challenging, messy work of

(11:41):
diving in, getting curious, questioning, challenging, facing the shame, the fear, all of that stuff, in order to get to the solutions on the other side. Like, you can't come up with a mathematical equation that erases it all. And it can provide insights, which it definitely has,

(12:03):
And like, I'm sure they didn't... I think they thought this was the way that it was going to benefit the work, right. But you know, if they look at this, they can see all of the things that they've been trying to avoid by, you know, uh, passing it off to

(12:25):
computers so that they don't have to face, you know, that I'm a good person and I'm racist. Yeah, yeah. And I think this next headline kind of continues, uh, a similar conversation. So this is from
CBC News, by Jorge Barrera and Albert Leung, and the headline is AI Has a Racism Problem, but Fixing It

(12:45):
Is Complicated, Say Experts. So online retail giant Amazon recently deleted the N word from a product description of a black-colored action figure and admitted to CBC News its safeguards failed to screen out the racist term. The multibillion-dollar firm's gatekeeping also failed to stop the same word from appearing in the product descriptions for a du-rag and a shower curtain. Uh, so the

(13:07):
China-based company selling the merchandise likely had no idea what the English description said, and that's what experts tell CBC News, as an artificial intelligence language program produced the content. So Mutale Nkonde said that AI has a race problem. And this is a former journalist and technology policy expert, and she runs the US-based nonprofit organization AI

(13:29):
for the People, and that organization aims to end the underrepresentation of black people in the US technology sector. And she goes on to say that what it tells us is AI research, development, and production is really driven by people that are blind to the impact that race and racism has on shaping not just technological processes, but our

(13:50):
lives in general. Uh, so the article goes on to say that, like, simply, and we were kind of touching on some of this earlier, but simply filtering data for racist words and stereotypes would also lead to censoring historical texts, songs, and other cultural references. And so a search for the N word on Amazon turns up more than a thousand book titles by black artists and authors.
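For a sense of why simply filtering the data fails, here is a minimal Python sketch; it uses the placeholder token SLUR instead of the actual word, and both example listings are invented:

BLOCKLIST = {"slur"}  # placeholder token standing in for the actual word

def naive_filter(text):
    # Return True if a simple keyword filter would block this text.
    words = (w.strip(".,!?:;\"'").lower() for w in text.split())
    return any(w in BLOCKLIST for w in words)

listings = [
    "SLUR brand shower curtain",            # the abusive product description
    "Reclaiming SLUR: Essays on Language",  # a legitimate scholarly title
]
for item in listings:
    print(naive_filter(item), "-", item)
# Prints True for both: the filter cannot tell abuse apart from scholarship,
# music, or history, which is exactly the nuance the experts describe.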

(14:11):
Um, so this is something that you all were getting at, which is that it's more nuanced than simply just, like, having a program that, uh, you know, erases these things. And Bill, you had highlighted a quote from Nkonde that I thought, uh, summed it up really well, which is her saying, we need to normalize the idea that

(14:33):
technology itself is not neutral. I mean, and can we just take a moment, right, because you said on Amazon that it also turned up the use of a racist term when referring to shower curtains, some, like,

(14:53):
ad copy like that? Like, I mean, I don't know, they could be great, but what an experience to be searching for, you know, just luscious, luxurious black shower curtains. Yeah,

(15:17):
it's like, what... it's so bizarre. And, you know, to speak to that nuance of it: you know, you have a thousand books with that word that might be in the title or around it, and so, you know, I think part of it is, it feels like it's laziness wanting to put this on AI a bit.

(15:39):
You know, I think there's such nuance to it, and
you have these companies with you know, billions of dollars.
And to Nkonde's point, it's like, if we know technology itself is not neutral, then it might not be ready to handle some of this yet, y'all. So some of these things need to be handled by human

(15:59):
beings, and not just older white men. We need
a diversity of human beings there that can actually look
at this stuff and make an informed decision. And it's
so sensitive. I don't understand why that's not happening. I
know that it's probably a decent amount of data, but
I don't know. A thousand is not too much in

(16:22):
my mind for someone to have that job or that position. Well, and the thing is, is that all of it is the responsibility of people, because the people go and program and create the AI. So it gets back to the input. And so the programmers, the creators, have

(16:44):
to themselves possess the nuance, the willingness, the ability to
then teach the computer and the technology to carry on
what they've put into it. So there's no, there's no way around it. Yeah. Yeah, we're gonna take a quick break to hear a word from our sponsor. Yeah,

(17:11):
amazing and crazy topics, Ridiculous News. All right, y'all,
So we're back and we are talking about racism and
AI as well as people that are working to make
things more equitable. And we are joined by an amazing
special guest, Jayme Alilaw. And so we're hopping into another story. This one was from The New York Times, by Cade Metz, and the headline is Who Is Making

(17:32):
Sure the AI Machines Aren't Racist? So Dr. Timnit Gebru was pushed out of Google without a clear explanation, the article says. And she said she had been fired after criticizing Google's approach to minority hiring and with a research paper highlighting the harmful biases in the AI systems that underpin Google's search engine and other services. Uh, they had

(17:52):
a quote from Dr. Gebru saying, uh, your life starts getting worse when you start advocating for underrepresented people. You start making the other leaders upset. So then the article goes on to say that Dr. Margaret Mitchell, who was building a group inside Google dedicated to ethical AI, defended Dr. Gebru, and the company removed her too. Uh,

(18:17):
she had searched through her own Google email account for material that would support their position and forwarded emails to another account, which somehow got her into trouble. And yeah, her firing, or her move from the company, was weird as well, you know. So their departure became a point
of contention for AI researchers and other tech workers. And

(18:37):
some saw a giant company that was no longer willing to listen, too eager to get technology out the door without considering the broader implications. And so they were removed, trying to fight for better, you know, equity and inclusion with this technology. Now they're gone. Um, and so, uh,

(19:00):
we have other articles that kind of talk about what
happened after that, but I just wanted to introduce that
in there, because you know, we were just talking about like, oh,
you know, you need to have a more diverse workplace,
you need other voices in there. And what's interesting about
that is that, like, those people do speak up, those people that are in the room speak up, and they're not

(19:21):
always really listened to, you know. So there's a cost to that. And it's not like speaking up is easy, you know what I mean; that's not really their job a lot of the time. So, uh, yeah, I thought that was an interesting part of the article. I mean, this is what we mean by systemic and structural, right? Like,

(19:44):
it's not check the box, we fixed the problem type
of a thing. It's a culture, it's a way of being.
And so I've seen time and time again, especially over
these two years, where people are like, yeah, we need diversity,
let's get diversity in here, and it's like, well, let's

(20:05):
pause and consider the fact that we've not been diverse
this whole time, and we've gotten comfortable and we've seen
what we view as successes. So it's not going to
be so easy to bring in somebody new, something new
and just be like, all right, we're gonna adapt to it.

(20:25):
And what crap are you bringing these people into? I'm like, wait, y'all ain't never had black people here before? Um, do y'all got lotion in the bathroom? You know? Like,
I have questions. So there are so many things to
be considered that I think to that point, I think

(20:46):
the rush and hurry to produce, and to not get canceled, or to be right, or to prove that you're not negative, causes us to try to sidestep and circumvent these human factors. You know, you're creating new relationships.

(21:06):
You gotta kind of there's some work that has to
be done there to make it safe, right, Yeah, I
think it's so tough. I mean, I'm thinking about my
jobs in the past, too. You know, I worked for Big Brothers Big Sisters for five years, um, which had a black CEO, majority black workforce here in Atlanta,

(21:27):
which is not that uncommon in Atlanta for certain businesses; for some businesses it is, of course. But it's so interesting, like everything you were saying, Jayme, because then when I switched to a company that was not that way, um, I moved over to an agency that was incredibly white supremacist. And, you know, what's interesting, and you know, trying

(21:49):
to get into the nuance of this is like even
some of those other nonprofits I felt like kind of
had a white supremacist culture in terms of the way
feedback was provided or the way that people were treated,
even if that particular branch was majority black, if the
overall organization and the rules that were coming down from

(22:12):
quote unquote corporate, you know, weren't taking into consideration the exact things that you were just talking about. And so, you know, and I hate not being solution-oriented about it, I'm just making the observation that,
you know, even if you're at a branch where there's

(22:32):
a majority of diversity in a company, sometimes that not going all the way to the top or the headquarters can cause that white supremacy, or that culture where people of other, um, colors and attitudes and ideas are not considered well. And, too, to add another layer to that, it

(22:53):
is possible to have an all-black organization that perpetuates white supremacist culture. It's in the structures, it's in the things that we consider to be professional, to be right, to be successful, to be... We've all been impacted by

(23:15):
these beliefs and ways of being, and so it is
possible to be black and white supremacist, or to be
black and anti-black, right. So it's work we all have to do, and we're working. Yeah, we're solving the problem right now, this podcast. Here we go, you tune in, the solution to racism is right here. We got it. Well,

(23:36):
this next article is from NBC News, by Julianne McShane, and Timnit Gebru Is Part of a Wave of Black Women Working to Change AI is the article. And it goes on to talk about Gebru as a known advocate for diversity in AI who announced the launch of the Distributed Artificial Intelligence Research Institute, or DAIR. Its website describes it as a quote space for

(24:00):
independent, community-rooted AI research free from big tech's pervasive influence, end quote. Um. And, you know, talking about her resigning, as you just mentioned, Mark, in this previous article, Gebru said she learned from an email from her manager's manager that she had apparently resigned from her high-profile position as a

(24:21):
co-lead of Google's ethical AI team, but she never resigned. Uh, she was fired after requesting that executives explain why they demanded that she retract a paper she co-authored. And that paper was about how large language models, AI trained on large amounts of text data, a version of which underpins Google's own search engine, could reinforce racism, sexism,

(24:46):
and other systems of oppression. And like we talked about earlier, y'all, I think, you know, think about the text we're feeding in, the decades and decades of text from white- and European-centric ideology. And if you think, for some reason, putting in all of that is not going to reinforce racism, sexism, and other systems of oppression... that's interesting.
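Bill's point about the training text can be seen in miniature. Here is a toy Python sketch, with an invented five-sentence corpus rather than anything from the actual paper: even simple co-occurrence counts over skewed text encode a biased association, and large language models absorb the same statistics at web scale.

from collections import Counter

corpus = [
    "the doctor said he would review the chart",
    "the doctor said he was running late",
    "the doctor said he ordered the test",
    "the nurse said she would check in",
    "the nurse said she finished her shift",
]

pairs = Counter()
for sentence in corpus:
    words = sentence.split()
    for role in ("doctor", "nurse"):
        for pronoun in ("he", "she"):
            if role in words and pronoun in words:
                pairs[(role, pronoun)] += 1

print(pairs)
# Counter({('doctor', 'he'): 3, ('nurse', 'she'): 2})
# A model fit to this corpus would predict that doctors are "he" and nurses
# are "she" -- not out of hate, but because that is what the data says.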

(25:07):
I mean, I hope that we're at a point now where we realize that's obvious. It seems obvious saying it out loud. It seems very obvious that if you're putting in all this stuff from the past, and if you look at the history and, you know, who's writing these papers that you're putting in, you know, it seems obvious. But it's not for some of these folks. Um. And you know,

(25:27):
Google's head of research Jeff Dean said in a company email that the paper quote didn't meet our bar for publication, although others within the company have cast doubt on his claim. Um, yeah, which is funny to hear, because, you know, this new organization that she started has financial support from

(25:47):
folks like the MacArthur Foundation, the Ford Foundation, you know, the Rockefeller Foundation, all these places. That's not someone that can't write a paper that meets standards. Like, yeah, I don't know, yeah,
oh god, I need... I just...

(26:10):
I don't need this, but I need racism to, like, get, like, new material, right? Like, we fired her because she wasn't qualified. We fired her because she wasn't a good fit for our culture. We fired her because... it's like, y'all, whatever way you package it, whatever

(26:30):
synonyms you use, we hear you. But the difference is now more people hear, and it's not just the black folks who are looking around like, did y'all hear this? But it's shared. Well, you know, this is a very different topic, so, like, switching, you know, uh, places.

(26:51):
But it reminds me of Nicole Hannah-Jones with her tenure situation, uh, like a couple of years ago. Um, Hannah-Jones, of the 1619 Project. And honestly, I don't remember the story very well, but I just remember seeing headlines being like, she was denied tenure wherever she was at, and I'm like, how is that even possible?

(27:11):
Like, um, and again, I don't know the details of the story, I just saw something along those lines. And yeah, it comes up sometimes when, uh, someone does something the organization doesn't like, or they feel like it ruffles feathers, calls people out in a certain way. And Jayme, to your point, rather than saying, like, oh, this made us uncomfortable, so we got rid of them, instead

(27:33):
it's just like, oh no, this is about you and your qualifications, or your ability to be professional, or something like that. Uh, so yeah. So to your point about getting a new script, yeah, you see it in different ways in different places a lot of the time. Oh yeah,
they pass the book around. And to the point in the previous story, um, where you had, I believe,

(27:54):
if I'm remembering the story correctly, the second person who spoke up in support of the other person was a non-black person. Or maybe I'm just, you know... well, let's just pretend, let me just make up my own news, and let's just pretend, because the point that I was going to

(28:18):
make is still relevant, in that what was mentioned in that article was, when you speak up around something that is viewed as, uh, controversial, you become at risk.
And so again, when we look at these structures, it's

(28:39):
not just as simple to be like, the black person
will get fired. It is the person who positions themselves
against white supremacist standards is at risk, regardless of the
race of that person. And so you know, when we

(29:00):
have these conversations, when people talk about allyship or whatever have you, and, you know, black folks are like, don't come and support me in quiet, you know, at the water cooler; say it with your chest in the meeting. And we know the reason people don't do that is because, dang, I'm gonna be fired too. I don't want to be fired, you know. Right. But you know, some

(29:21):
of us don't have that option. But if we just imagine that that was the scenario that happened in the previous story, you know, I can see how, regardless of your racial makeup, when you disturb white supremacist structures, then it's inconvenient. We will be right back with more

(29:41):
ridiculous news after this short break. Yeah, it's all about Ridiculous News. Hey, we're back. We're gonna continue talking about some organizations, some places that are working to make AI a better, more inclusive place. So this first story comes out of

(30:04):
Northwestern's news website, news dot northwestern dot edu. Uh, I went to NU. Nice place. They also have a lot to work on, so I'm glad that they're working on trying to make AI more safe and more equitable. Um, hopefully they can do that with their security in certain buildings as well. That would be nice, but

(30:27):
we can save that for a different podcast. But so, this particular article talks about how, to help examine artificial intelligence (AI) systems, uh, and evaluate their impact, uh, there's an organization, Underwriters Laboratories, Inc., and Northwestern University, and they're joining forces to create a research lab that seeks to

(30:48):
better incorporate safety and equity into this technology that is growing really fast. So the Digital Intelligence Safety Research Institute, or DISRI, at Underwriters Laboratories, uh, the goal is to work on this very thing. So the article goes on to say that artificial intelligence, uh, informed by machine learning, is increasingly ubiquitous in our everyday lives.

(31:11):
And that's a quote from Christopher J. Cramer, who is the Underwriters Laboratories Chief Research Officer and acting as the DISRI Executive Director. He went on to say that it's imperative we get it right. We must develop approaches and tests that will incorporate equity into machine learning and hold it to standards guided by both safety and ethical considerations. I'm terrifically excited about this partnership,

(31:36):
which will foster research aimed at integrating safety into machine
learning and artificial intelligence design, development, and testing processes. So
hopefully, you know, that works out in a great way. Certainly sounds positive. I mean, you know, I think obviously we've got to keep working on it, and we realize from all these other articles that we've talked

(31:59):
about, and just life, that it's an issue that we need to continue to work on. I don't think it's ever going to stop; the work will never stop. But, you know, it's interesting. I don't know, y'all, I hate to be like a Debbie Downer, but it's a lot of, like, interesting language. But I

(32:23):
don't know, I'm just curious to see how it actually works. Well,
you know, Mark, to your point, I know we kind of chuckled and you were like, I hope they apply some of this to the security, um. That brings it back to, you know, the conversations that we were having in the beginning. It's like, who's doing the work, who's leading the work, and on top of that, what changes have taken place?

(32:48):
Because if they've not done the reparative work to make sure that their students feel safe on campus with their security, I don't trust them to do this, right? Because we're talking about blind spots, and it could be blind spots, and that is the best-case scenario, right.

(33:12):
And there's also complacency, and just an unwillingness... all these things can contribute to the reasons why things have not changed thus far. Um. And a lot of that also has to do with people not willing to look within, and people in structures. So if the structure,

(33:33):
the institution, of Northwestern has not looked within to consider, you know, or to even be aware of, the issues that the black male students on campus might be facing, um, then they're not equipped. They ain't ready to do this work.
I'll say that. And also, you know, let's give it,

(33:57):
let's give it a go, you know, do what we gotta do. But also, yeah, as we come up with these solutions, we must make sure that the people who are equipped to do them are doing them. You know, one of the things that jumps out to me in all these things that we've talked about is, okay, we have this technology, you know, people are working on improving it, making it less racist. It seems like, you know, I've said, oh, well,

(34:21):
maybe we should just leave this up to people. I mean, is there a solution where we can do both? Like, can we test this technology and then have someone there to make sure it works, until we get to a point where it's working well enough? That seems like a solution. You have the AI doing its thing, and I'm sure they're, like,

(34:43):
throwing stuff at the speaker, like, that's what we're doing. But, you know, it's like, what a novel idea, like this guy clearly doesn't know anything about our process. But that just seems like, you know, until computers do a better job, if we're implementing this stuff, we have people working alongside it. And I do, of

(35:05):
course advocate for fixing these systems. And this next article goes on to talk about how AI can help combat systemic racism, which is, you know, taking it even further than what we talked about, just trying to make it not racist; now we're trying to use it to actually fight racism, which sounds great. So this is from MIT's Institute for Data, Systems, and Society, and this article is by

(35:28):
Scott Murray, and it says, in Detroit, police arrested a black man for shoplifting almost four thousand dollars' worth of watches from an upscale boutique. He was handcuffed in front of his family and spent a night in lockup. After some questioning, however, it became clear they had the wrong man. So why did they arrest him in the first place? The reason: a facial recognition algorithm had matched

(35:50):
the photo on his driver's license to grainy security camera footage. And facial recognition algorithms, which have repeatedly been demonstrated to be less accurate for people with darker skin, are just one example of how racial bias gets replicated within and perpetuated by emerging technologies. Um, so Professor S. Craig Watkins,

(36:14):
a professor at the University of Texas at Austin, love Austin, um, said that one of the fundamental questions of the work is how do we build AI models that deal with systemic inequality more effectively. Systemic change requires a collaborative model and different expertise, says Watkins. We are

(36:34):
trying to maximize influence and potential on the computational side, but we won't get there with computation alone. Yes! See, okay, there we go. We got it, it's a little bit, it's like a collaborative model. Yeah, there it is. Watkins goes on to say that models, in my view, can inform policy and strategy that we as humans have to create.

(36:55):
Computational models can inform and generate knowledge, but that doesn't
equate with change. It takes additional work and additional expertise
and policy advocacy to use knowledge and insights to strive
towards progress. He says, I was inspired by the response of younger people to the murders of George Floyd and Breonna Taylor. Their tragic deaths shone a bright light on

(37:17):
the real-world implications of structural racism and have forced the broader society to pay more attention to this issue, which creates more opportunities for change. Yeah. I mean, it
just sounds like, to your point, Bill, um, and you know, that Professor Watkins, such a, such a quick guy he is, um, that it is

(37:41):
all about, uh, a conversation and a feedback approach, so that when we get things wrong, we look at it and are like, okay, that was a bad and unacceptable error, as Facebook said, and now we're going to go back and review. Um.

(38:03):
So being willing to try something, um, I think it was my mom or my grandma or somebody used to say, get caught trying to do right, you know. And you know, you fall short, you mess up, you know, okay, try again. But making those efforts, and then being willing to go back and say, I don't know what I'm doing,

(38:25):
but this person with this set of skills and knowledge might have a different perspective that I don't hold. So that it goes beyond just racial diversity in, uh, creating these solutions, but different approaches, different, um, discipline diversities. Yeah,

(38:47):
I think, you know, and going back to the beginning of this article, I can't imagine if I was a kid and I saw my dad get arrested in front of me and then taken to jail for something he didn't do. You know, like, that's just something that stays with you for the rest of your life. And you know, Mark, to your point about the security thing at Northwestern,

(39:10):
that's something that you probably will never forget. I mean, well, hopefully Alzheimer's... I don't know why, I don't know why I want that, but, you know, that's why,
you know, it's just to highlight why that stuff is so important and why, you know, you think differently

(39:32):
about these systems of security and the police and all of these things when stuff like that happens to you or your dad or your uncle or your brother or your neighbor. And it just highlights why this stuff is so important. And I know this is such an important topic to talk about. And I think, you know,

(39:54):
for our listeners, I'm lucky enough to be around amazing people like Jayme and Mark that are open and talking about this stuff. I know not everybody is able to have these conversations or is around a diverse, you know, group of people. And I don't know what the solution to that is. For some people it's like, move, I mean,

(40:18):
we've moved for lesser reasons, right? Yeah, let me go and get myself up out of here, over here, and the food will probably be better where you move to, I'm just saying. Maybe a less crazy way for me to say that is, like, travel, you know,

(40:41):
like, expand your horizons. I do think, I just wish that everyone on the planet could travel and experience other things, especially if that means that you're in a very uniform place, even in America. I think that's one good way to try to change your perspective, you know, and start to see how these kinds of discriminatory practices

(41:06):
in machines affect people in a very human way, and in very practical ways. Right, like,
I'm sorry, Bill, I just jumped in, but I was just reminded about, uh, I was watching a panel interview with people of color who are in this work,

(41:26):
and they were discussing the racism and just talking about things like how the automatic faucets in bathrooms, you know, wouldn't turn on for black people because they didn't recognize black hands, or, you know, the problem with, you know, AI voice-activated, you know, um, computer thingies, and it's like,

(41:51):
you know, they won't recognize certain, um, tones or whatever. Because it's like, do you want to continue listening to primates? Because, you know what I mean, it's like it doesn't recognize this as a human voice, you know.
I mean, and I know black folks are well versed in the code-switching that we do when we

(42:14):
put on a certain voice when we're on the phone, so that you don't know that we are melanated until we walk into the interview, right. But you know, if I'm talking to Alexa, I don't want to have to put on my white lady voice just to get her to play what I want. I mean, hello!

(42:34):
Yeah, I'm a robot when I talk to Alexa. And I don't know, maybe at the end of the day all of this robot stuff is just gonna make us all robotic.
I'm like... it's terrible. I mean, I think that's their goal, actually. I think, uh, what was that Will Smith movie, I, Robot? I think

(42:57):
that that was the warning, like, listen, they're trying to make you all robots, and here you are, Bill. It's a slippery... on a slope of slipperiness. Yes, yes. Well, I think, I mean, I feel like, hopefully, listening to this for folks, too, if you're not in a diverse situation, hopefully this is just a reminder to everybody to think outside the box. Think about, if

(43:18):
you're designing a faucet system, maybe not everybody's hands look like yours, you know. If you're designing a security matching system, maybe the way that the camera records an image, it's not made to record darker skin tones in an accurate way, and you need to consider that. So just another aspect of considering everything and everyone.
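One concrete habit that follows from Bill's advice is disaggregated evaluation: report a system's accuracy per group instead of one overall number. Here is a minimal Python sketch, with made-up records rather than data from any real system:

from collections import defaultdict

def accuracy_by_group(records):
    # records: (group, predicted, actual) tuples; returns accuracy per group.
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

records = [
    ("lighter", "match", "match"), ("lighter", "no_match", "no_match"),
    ("lighter", "match", "match"), ("lighter", "match", "match"),
    ("darker", "match", "no_match"), ("darker", "match", "match"),
    ("darker", "no_match", "match"), ("darker", "match", "match"),
]
print(accuracy_by_group(records))
# {'lighter': 1.0, 'darker': 0.5} -- a gap the single overall accuracy
# of 0.75 would have hidden entirely.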

(43:42):
And that brings us to our final segment here, which is our spring of inspiring inspirations, and Mark, you pulled a really great quote for this one. Yeah, so this quote is from Octavia Butler, uh, and the quote is, all that you touch, you change; all that you change changes you. The only lasting truth is change. So y'all, let's change. I mean, really,

(44:06):
that's it. That's it. Just change really quick, and then the world will be a better place. Yeah. I think the one constant in life is impermanence and change, and so you can fight it and die, or you can move with it and make the world a better place. And as always, listeners, I hope you're making the world a better place. You are by listening to our podcast, uh, and we love you for it. So thank you so

(44:28):
much for tuning in. Yeah, thank you so much. And Jayme, thank you so much for being here. Um, what's the best way for people to follow you, keep in touch with you, and support you? Oh, thank you for asking, Mark. Um, well, you know, they can follow me on the socials. Um, if you just enter sang, s-a-n-g, Ms. Jayme: s-a-n-g-m-s-j-a

(44:51):
-y-m-e. Um, you can find me on the Facebooks and on the Instagrams. You know, I just went ahead and left Twitter alone. You know, they're on their own little journey. Imma let them be. Um, and, uh, you can visit my website, Jayme dash Alilaw dot com. Um. And if you are interested in

(45:14):
seeing my coaching or my speaking, those kinds of offerings, you can visit From the Core, core like an apple core, From the Core Coaching dot com. Awesome. Y'all, please follow Jayme, stay in touch with her, and you can stay in touch with us by emailing us at Ridiculous News at iHeartMedia dot com, and on Facebook and IG follow Ridiculous News, and you can check out

(45:35):
our comedy videos at Mark Kendall Comedy. Bye, y'all! Ridiculous News is hosted by Mark Kendall and Bill Worley. Executive

(45:58):
producers are Ben Bowlin and Noel Brown, produced and edited by Tari Harrison. Research provided by Casey Willis, and theme music by Four Eyes and Doctor Delight. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.