
September 27, 2024 | 7 mins


Ever wondered if your phone is eavesdropping on your every word? Discover the unsettling truth behind AI's seemingly magical ability to predict your needs and how it might be shaping your life in ways you never imagined. In this episode, we dissect an impactful article from AI4SP.org on how biased data can lead to AI systems that reinforce inequalities, from the infamous Amazon recruiting tool to healthcare algorithms missing critical diagnoses.

Join us for an engaging discussion that will challenge your understanding of biases in AI and inspire you to think about the steps we can take to ensure AI works for everyone's benefit. Tune in and rethink the role of AI in your daily life as we navigate these complex ethical challenges.

🎙️ All our past episodes | 📊 All published insights

This podcast features AI-generated voices. All content is proprietary to AI4SP, based on over 250 million data points collected from 25 countries.

AI4SP: Create, use, and support AI that works for all.

© 2023-25 AI4SP and LLY Group - All rights reserved


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Elizabeth (00:00):
Ever get that feeling like your phone is listening to you? Like one minute you're talking about something and the next minute, bam, there's an ad for it. It's like, whoa, creepy. But really, that's just AI doing its thing and, believe it or not, that's just scratching the surface. Today, we're diving deep, deep into how AI learns and, get this, how we might be shaping it without even realizing it.

(00:22):
We're using this really interesting article from AI4SP.org. It's an organization all about making AI, you know, actually ethical, and they break down how the data used to train AI, like the stuff it learns from, can actually have some pretty wild and unexpected consequences. So, whether you're an AI whiz or you're just starting to wrap your head around algorithms, this deep dive is going to be eye-opening, trust me.

Winston (00:42):
You know, it's funny. People often think of AI as this all-knowing, all-seeing thing.

Elizabeth (00:47):
Right.

Winston (00:51):
Like some kind of oracle, right? But it's not like that at all. It's actually way more like training a dog. If you only ever showed a dog pictures of a cat, you wouldn't expect it to learn any cool tricks, would you?

Elizabeth (00:57):
So you're saying AI is kind of clueless when it starts out. It only knows what we teach it.

Winston (01:01):
Exactly. AI learns by spotting patterns in data. So if the data is messed up or biased, then the AI's whole view of the world is going to be a little, well, a little off.
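
To make that concrete, here is a minimal, purely illustrative sketch of a "pattern spotter": a toy classifier that only counts which words co-occur with which labels. The data and model are invented for this example, not taken from any real system, but they show why a model trained only on cats has nothing to say about dogs.

```python
# A toy "pattern spotter": it learns by counting which words co-occur
# with each label in its training data. All data here is made up.
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (text, label) pairs."""
    counts = defaultdict(Counter)
    for text, label in examples:
        for word in text.lower().split():
            counts[word][label] += 1
    return counts

def predict(counts, text):
    """Vote with the label counts of every known word in the text."""
    score = Counter()
    for word in text.lower().split():
        score.update(counts.get(word, Counter()))
    return score.most_common(1)[0][0] if score else None

# Deliberately narrow training set: the model only ever sees cats.
biased_data = [("fluffy cat photo", "pet"),
               ("cat on a sofa", "pet"),
               ("angry cat meme", "pet")]
model = train(biased_data)

print(predict(model, "cat in a box"))          # -> "pet"
print(predict(model, "golden retriever dog"))  # -> None: a blind spot
```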

Elizabeth (01:12):
Makes sense.
So let's get into some real-world examples. The AI4SP article mentioned this Amazon recruiting tool that caused quite a stir a while back.

Winston (01:21):
Ah, yes, that one.
That's a perfect example of how using historical data can really backfire. They were trying to train this AI to pick out the best job candidates, so they fed it a decade's worth of resumes.

Elizabeth (01:33):
Sounds like a good plan in theory.
What went wrong?

Winston (01:36):
Well, the tech industry, as we know, has had this, how should I put it? A bit of a reputation for being male-dominated, right? And that bias, unfortunately, was baked right into the data, so the AI picked up on that pattern and started to, well, penalize resumes from women.
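
Winston's description maps onto a very simple mechanism. The sketch below uses invented resumes and outcomes, not Amazon's actual data or model, to show how historical decisions, once treated as ground truth, turn a past bias into what the model reads as signal.

```python
# Hypothetical illustration of learning from biased hiring history.
from collections import Counter

def hire_rate_by_word(history):
    """Estimate P(hired | resume mentions word) from past decisions."""
    hired, total = Counter(), Counter()
    for resume, was_hired in history:
        for word in set(resume.lower().split()):
            total[word] += 1
            hired[word] += was_hired
    return {w: hired[w] / total[w] for w in total}

# Fictional decade of decisions from a male-dominated pipeline:
# resumes mentioning "women's" were rarely accepted.
history = [
    ("captain women's chess club", 0),
    ("women's coding society lead", 0),
    ("chess club captain", 1),
    ("coding society lead", 1),
]

rates = hire_rate_by_word(history)
print(rates["women's"])  # 0.0  <- old bias, relearned as "signal"
print(rates["chess"])    # 0.5
```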

Elizabeth (01:53):
Wait, seriously? It was actually downgrading resumes simply because they were from women?

Winston (01:57):
In a way, yes. The AI wasn't programmed to be sexist, of course, but it learned from biased data, and that's the real danger here. If we aren't really, really careful, AI can actually take existing inequalities and make them even worse, and the scary part is we might not even see it coming.

Elizabeth (02:13):
And it's not just about, you know, who gets hired, right? This could be happening in all sorts of areas where we're using data to make big decisions.

Winston (02:20):
Exactly. Think about loan applications, or insurance assessments, or even, get this, health care. The article actually brings up this really striking example of an AI system that was supposed to predict mortality risk from chest X-rays.

Elizabeth (02:34):
Oh yeah, yeah, I remember reading about that one. Wasn't it trained on data that was mostly from one racial group?

Winston (02:39):
That's the one, yeah, and because of that, the AI was way less accurate when it came to predicting the risk for patients from other backgrounds. Wow, when your training data isn't diverse enough, you end up with these blind spots, and those blind spots, they can have some really serious consequences in the real world.
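
One practical safeguard this example points to: never report a single headline accuracy, always break it out by group. The mock predictions below are hypothetical, but they show how a respectable overall score can hide exactly this kind of blind spot.

```python
# Per-group evaluation with mock (prediction, truth, group) results
# from a hypothetical risk model.
def accuracy(rows):
    return sum(pred == truth for pred, truth, _ in rows) / len(rows)

results = [
    (1, 1, "group_a"), (0, 0, "group_a"), (1, 1, "group_a"), (0, 0, "group_a"),
    (1, 0, "group_b"), (0, 1, "group_b"), (1, 1, "group_b"), (0, 1, "group_b"),
]

print("overall:", accuracy(results))  # 0.625 -- looks tolerable
for group in ("group_a", "group_b"):
    subset = [r for r in results if r[2] == group]
    print(group, accuracy(subset))    # group_a 1.0, group_b 0.25
```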

Elizabeth (02:55):
Yeah, that is a pretty disturbing thought, you know? Like, an algorithm potentially misdiagnosing someone because of their race. It's like we're getting into some pretty ethically complex territory here.

Winston (03:05):
It definitely makes you think, right? It's a prime example of why we really, really need to be careful about the data we're using to train these AI systems. But hey, there's good news too: the AI4SP article also talks about how we can actually fix this, or at least make it a lot better.

Elizabeth (03:22):
OK, good, because I was starting to get a little freaked out there for a second. So what's the magic solution? How do we de-bias AI?

Winston (03:30):
You know, it might sound counterintuitive, but the key is actually more data.

Elizabeth (03:34):
More data.

Winston (03:34):
More diverse data.

Elizabeth (03:35):
Ah, I see what you mean. So instead of just showing the AI a tiny sliver of information, we need to give it the whole picture, right?

Winston (03:42):
Exactly. Imagine trying to understand, like, the whole spectrum of human experience, but you can only see one color. You need all the colors to really get it, you know?
Elizabeth (03:51):
Yeah, that makes sense. And the AI4SP article found that having a more diverse data set actually makes AI more accurate for everyone, right?

Winston (03:59):
100%. They talked about this study that looked at an AI system used for breast cancer screening. At first it wasn't as accurate for women of color, but then they added in data from a much more diverse group of patients and, guess what? The accuracy for all women shot way up.
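
The "more diverse data" fix can be sketched with synthetic numbers (these are not the study's data, and real screening models are far more complex). A toy threshold model trained on one group alone misreads a second group whose marker levels sit in a different range; retraining on both groups closes most of the gap.

```python
# Synthetic before/after: training data diversity vs. per-group accuracy.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, healthy_mean, sick_mean):
    """One biomarker value per patient; second half are sick (label 1)."""
    x = np.concatenate([rng.normal(healthy_mean, 1.0, n),
                        rng.normal(sick_mean, 1.0, n)])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return x, y

def fit_threshold(x, y):
    # Simplest possible "model": midpoint between the class means.
    return (x[y == 0].mean() + x[y == 1].mean()) / 2

def acc(x, y, t):
    return ((x > t) == y).mean()

# Group A's marker runs 0 (healthy) to 3 (sick); group B's runs 2 to 5.
xa, ya = make_group(200, 0.0, 3.0)
xb, yb = make_group(200, 2.0, 5.0)

t_narrow = fit_threshold(xa, ya)  # trained on group A only
t_diverse = fit_threshold(np.concatenate([xa, xb]),
                          np.concatenate([ya, yb]))

print("group B, narrow model: ", acc(xb, yb, t_narrow))   # ~0.65
print("group B, diverse model:", acc(xb, yb, t_diverse))  # ~0.84
```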

Elizabeth (04:15):
So it's not just about making things fair. It's about making AI work better for everyone.

Winston (04:20):
Exactly. By training AI on data that actually reflects the real world, with all its diversity and complexity, that's how we unlock its true potential.

Elizabeth (04:29):
That's actually a really powerful thought, but it makes you wonder: what about us? Like, what role do we play in all of this as individuals? I mean, I'm not building AI. I'm just posting pictures of my cat on the internet.

Winston (04:39):
And that's where things get really interesting. Every time you do something online, every share, every review you write, you're actually contributing to this massive pool of data, and that data, that's what's being used to train AI.

Elizabeth (04:53):
Whoa, okay, so we're all, like, part of this giant AI experiment without even knowing it?

Winston (04:59):
Kind of, yeah. And here's the really wild part: all those little things you do online, you know, they all add up. They're like building blocks for how AI sees the world.

Elizabeth (05:08):
Okay, now I'm thinking twice about all those questionable online purchases I've made. But really, though, how can we be more responsible with the data we're putting out there? It's not like we can just, poof, disappear from the internet, right?

Winston (05:19):
No, you're right, it's not about going off the grid. It's more about being aware, I guess, of the information we're feeding these algorithms. Like, think about what you decide to like or share on social media. Are you, I don't know, are you reinforcing old stereotypes, or are you trying to, like, promote different voices and perspectives?

Elizabeth (05:36):
Wow, I never thought about it like that before. It's like my social media feed is basically a training ground for AI.

Winston (05:43):
Exactly, and it goes beyond social media too. Think about the reviews you write, the things you click on, even what you search for. It's all data points for AI to build its picture of the world.

Elizabeth (05:53):
That is kind of mind-blowing when you really think about it. So where does this all go? I mean, if we're all kind of shaping AI in a way, what does that mean for the future?

Winston (06:02):
That's the big question, isn't it? The future of AI? It's not set in stone. It's being written right now, by all of us together. Every click, every post, every search, it all adds up.

Elizabeth (06:13):
So it's not just on the tech companies or, like, the engineers to get this right. We all have a role to play, huh?

Winston (06:19):
Absolutely.
We can't just be passive. We need to start thinking about the data we're creating and the impact it might have.

Elizabeth (06:25):
That's a pretty powerful message. It's not just about making AI smarter. It's about creating a better future for everyone, a future that's fair and, well, actually represents all of us.

(06:47):
And if you want to learn more about making AI ethical, the AI4SP article is a great place to start. We treat AI like it's this big, mysterious thing of the future, but really it's a reflection of us, you know. It's reflecting our values, our biases, our choices. So the question isn't just what will AI become, but what kind of future do we want it to reflect back to us? That's all the time we have for today. Thanks for joining us and remember: the future of AI? It's in our hands.