
April 24, 2025 53 mins

We continue our conversation about the state of DEI and the heavy influence that AI will have on this important directive. This month, we explore the impact of AI on DEI initiatives—how it can either amplify biases or serve as a tool for equity. We look at how DEI programs play a key role in developing AI that does not cause unintended and disproportionate harms. And we will also get a sneak peek at how AI might be changing our behaviors without us knowing.

So if you’re concerned about the intersection of technology and equity—and what it means for the workplaces of tomorrow—you’re in the right place.

To talk about this important topic, we’re delighted to welcome back Bo Young Lee.

Read the transcript @https://bit.ly/3GE94II

Learn more about UC Berkeley Extension @https://bit.ly/3YMx82i


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
[MUSIC PLAYING]



BO YOUNG LEE (00:06):
The sad fact is that once an AI has learned a certain bias, it can't unlearn it. Just like how hard it is to untrain a human who has a bias, it's the same thing for artificial intelligence. Now, the good thing is that you can shut down an artificial intelligence and not use it. But how many people are willing to spend millions, billions of dollars building an AI and then shut it down

(00:26):
once it's shown itself to be really problematic?


JILL FINLAYSON (00:31):
Welcome to the Future of Work podcast with Berkeley Extension and EDGE in Tech at the University of California, focused on expanding diversity and gender equity in tech. EDGE in Tech is part of the Innovation Hub at CITRIS, the Center for IT Research in the Interest of Society and the Banatao Institute. UC Berkeley Extension is the continuing education arm of the University of California at Berkeley.

(00:54):
Today, we're continuing our conversation about the state of DEI and the heavy influence that AI will have on this important directive. Last month, we talked about where we are and where we should be going with DEI programs and mindsets. This month, we're exploring the impact of AI on DEI initiatives, how it can either amplify biases or serve as a tool for equity.

(01:19):
We will look at how DEI programs play a key role in developing AI that does not cause unintended and disproportionate harms. And we will also get a sneak peek at how AI might be changing our behaviors without us even knowing. So if you're concerned about the intersection of technology and equity and what it means for the workplace of tomorrow,

(01:39):
you're in the right place.
To learn more, we once again turn to Bo Young Lee. Bo is a globally recognized workplace and AI ethics, DEI, and ESG executive and a widely sought-after public speaker and leadership coach. Bo serves as the president of research and advisory for AnitaB.org, the leading mission-driven organization

(02:01):
advancing women and nonbinary technologists.
Welcome back.

BO YOUNG LEE (02:04):
Thank you so much for having me.
I'm really glad to be here.

JILL FINLAYSON (02:07):
It occurs to me that when we last spoke, we did not discuss why you started to work in diversity and inclusion, how you chose this career path. Can you share a little bit of what led you to this career?

BO YOUNG LEE (02:19):
Yeah, absolutely. I did not set out to have a huge multi-decade career in diversity, equity, and inclusion. It really started with a very simple question that I was asking. So I was getting my MBA, I was in my mid-20s, getting my MBA, not really sure what I wanted to do. And as I was in the process of being educated,

(02:40):
one thing that really struck me is that everything that I was being taught in my MBA program, which was ostensibly to train the next generation of business leaders, was not designed in any way, shape, or form to reflect somebody like myself. Business school, still to this day, uses a very case-study-based model of education. You study various different cases of dilemmas and opportunities that real-world companies

(03:03):
have faced. And in every single case study that we were studying, the protagonist was usually a middle-aged, white, cis, hetero male. Very rarely did we see a woman being featured in a case study. And certainly, we did not see any women of color, anybody with a disability, anybody who was LGBTQIA being reflected in this. And so it was a very tacit way of communicating to someone like

(03:26):
myself, a 26-year-old Korean-American woman, that you don't belong. There's no place for you in business. And so, me being the cheeky person that I am, and I think that's pretty clear for anybody who's heard me speak, I kept asking my professors: this is all fine and well, and these lessons are great in theory. But, I said, what does someone who

(03:47):
looks like me, what do we do to be successful? And none of them could answer that question. And so it just became me asking that question over and over again. What does someone like myself have to do to be successful? And that's really what led me down this path of wanting to do research in the area. And that's why, after graduating from business school,

(04:07):
getting my MBA, having $80,000 worth of student debt, I made the very odd, but fundamentally, I think, right-for-me decision to join a nonprofit with my student debt. And I joined a nonprofit called Catalyst. And they were one of the earliest nonprofit organizations asking that question of how a woman is successful. And so I started from there.

(04:27):
And this was way back in, like, 2001, 2002. I never thought that simple question, how does someone who looks like me succeed in business, would lead to both a career and also, over those ensuing two-plus decades, become the central question that every corporation has to answer, because the workforce has

(04:50):
fundamentally changed and the consumer market has fundamentally changed as well. And so now, everybody's asking that question.

JILL FINLAYSON (04:56):
Yeah, I'm actually quite proud of Berkeley. The Haas Business School has a program called EGAL. And one of the things they did was put together a collection of diverse case studies, because they recognized exactly the problem that you were talking about. But if we look at the faculty across business schools, it still has a long way to go to represent the community

(05:16):
of people who are in business. Do you have any thoughts on how we move that dial for our educators as well as our businesses?

BO YOUNG LEE (05:23):
Well, I think one of the things to really think about is some of the guardrails that we put in place for educators and for academics. Generally speaking, someone has to have a PhD in order to teach at the university level, especially at that elite university level. And I'm actually a perfect example of this. When I entered my undergraduate, I always assumed that I was going to get my PhD in behavioral economics.

(05:46):
That was always my goal from the age of 18. And then, sometime in my senior year, I'd actually gotten into a PhD program. But as a first-generation Korean-American immigrant, the thought of staying in school for six more years, the thought of going into even more debt for my education, that was unfathomable for me. So I basically, like, reached out to MIT,

(06:06):
where I'd gotten into grad school, and said, hey, I'm going to go and make some money. I'm not going to get my PhD. And so I went and did something else, right? I didn't get my PhD. And I think that the most basic requirement, which is a PhD in something to teach in any subject, that becomes a barrier to diversifying who teaches, because it takes a certain level of a legacy of education

(06:27):
within a family, a legacy of being able to take that risk. We know that academia is a very prestigious job, but it's not a very well-paying job. Usually, you have to come from a certain degree of socioeconomic privilege to even get to that level. So we have to think about how we've articulated the role of an academic and look at the systemic barriers we've

(06:49):
built into that job title, that job role, the requirements, and look at how that prevents the population from diversifying as well.

JILL FINLAYSON (06:58):
That's a really interesting observation, because if you haven't known somebody who got a PhD, or if your parents hadn't gotten a PhD, you're less likely to do that, for a number of reasons. But also because a lot of the rules for applying for a PhD are kind of unwritten. And you have to have inside knowledge to be able to figure it out.

BO YOUNG LEE (07:19):
Yeah, absolutely.
So my husband, he comes from a completely different background than I have. His father has an MD/PhD. His father went to Yale and then got his PhD at Stanford. His father was a professor of biology and medical research at Johns Hopkins, a tenured professor.

(07:40):
So for my husband, he actually was, for a very long time, a physician-scientist at a university. And for him, that was the most obvious choice, to go down this route of academia and ultimately end up becoming a medical school professor like his father. And of course, my husband is somebody whose family has been in the United States for hundreds of years.

(08:01):
He's a white, cis, hetero male. And it's just like that was the legacy that he was taught. And so for him, academia was a very logical path.

JILL FINLAYSON (08:11):
So if we look at the larger society, then, for those who didn't join our last conversation, can you recap why we need DEI today?

BO YOUNG LEE (08:19):
The simple fact of the matter is, I have always stated that diversity, equity, and inclusion is important within the corporate framework. I'm not talking about the larger social justice conversation. But in the corporate framework, it simply boils down to the fact that the people who buy things, the people who make those things, they are an incredibly diverse population.

(08:39):
We know that in the United States, women have control of about 80% of all discretionary funds in a family. So when it comes to making decisions about what kind of products a family is going to buy, what kind of car they're going to drive, what kind of laptops they're going to purchase, it's usually a woman who's making that decision for the family. Similarly, if you look at the workforce itself, I always like to use this statistic.

(09:00):
If you look at the birth rates in the United States and the immigration rates in the United States, white, cis, hetero men make up less than 30% of the entering workforce at this point in time. And why is that? Simply because women make up about half of the workforce entering right now. Then you look at people of color, the population of people of color,

(09:20):
the people who are immigrating, they make up another approximately 25%. So white, cis, hetero men only make up less than 30% of people entering into the workforce. But if you start looking at corporations, starting at the manager level and above, it's majority white, cis, hetero men. And so you have a population of people in corporations,

(09:41):
white, cis, hetero men, who are making decisions on behalf of people like myself, yourself, for women. And do they truly understand what it means to be us? What is it that we are looking for in our products, in our services, how we want to be treated through a sales process? They don't, because they don't have that lived experience. And so simply, even though there's

(10:02):
a huge component of social justice and equity in the work of DEI in a company, it boils down to the bottom line. It boils down to: do you know how to build products and services for the people who are actually spending money on them? We have multiple examples of where that has failed over and over again. Companies just don't know who they're selling to.

JILL FINLAYSON (10:22):
That's a challenging problem, because a lot of people would say, why do we still have this problem? That's history. That's in the past. We now have this level playing field. It must be that men are just better at these jobs. That's why they're getting promoted.

BO YOUNG LEE (10:37):
Let's say you believe that men are better at these jobs, and that's why they're getting promoted. And not just any man, but a specific archetype of male, which is a white man, most likely a heterosexual white male, most likely a cisgender white male. People often say, well, we support DEI, but we don't want to lower the bar in any way, shape, or form.

(10:59):
And I said, OK, let's take, for example, that the bar is set very high already. And that bar ensures that white, heterosexual, cisgender men are the majority in senior leadership, especially at the most senior executive levels. There's one of two ways that we can explain that. Either, one, we have to admit that there is masculine supremacy in this world

(11:20):
and there's white supremacy. So fundamentally, white people are better, smarter than the rest of us, and men are better and smarter than the rest of us. And if that statement makes you deeply uncomfortable because you know it's not true, then the other argument is that we have designed systems that validate the behaviors, the traits that are most commonly and archetypically

(11:41):
seen within white male communities. And the system is biased positively towards those traits. And therefore, they pluck out those individuals from a very early stage in their career, and then mentor them and sponsor them all the way up into senior leadership. We talked during the last podcast about this concept of meritocracy, and how, when meritocracy was first articulated, actually,

(12:04):
in Britain, it was articulated as this almost pejorative, as this negative thing that was not actually achievable. That when someone says meritocracy, they're talking about something that is superficially designed to be equitable, but fundamentally, it really isn't. And it's just designed to reinforce old traditions and values. Nowadays, people use the term meritocracy

(12:25):
completely without any pejorative aspect to it. They think that it's an achievable goal. And they hold meritocracy up as this ideal. But if you study any of the literature or philosophy around meritocracy, one thing that you will see is that a meritocracy is fundamentally impossible without first starting with a system of equity.

(12:45):
If you don't have a system of equity, no meritocracy is ever possible, because you have eliminated the ability of the best talent, regardless of where it starts in life, to rise up.

JILL FINLAYSON (12:57):
When we think about this bar that people have to meet, in a lot of the research I've looked at, people of color not only have to meet but exceed that bar. There's this prove-it-again bias. And they are held to an even higher standard, in my experience. So what have you seen in terms of the bar?

BO YOUNG LEE (13:18):
Yeah, absolutely.
Well, first and foremost, I think one of the bars that we've set is a very, very masculine archetype of communication. And I know I keep using that term over and over again, but it's the best way to describe it. And I happen to actually be very fortunate. My natural communication style is extremely direct.

(13:40):
It's kind of monotone. And it's extremely limited in verbiage. We know that women are socialized to use more words. They tend to use a lot more indirect language as a whole. As opposed to saying, I did this, they'll say, I really want to give acknowledgment to the team and the work that they did. And we tend to judge people's competency based

(14:01):
not on what they have achieved, but on how they present themselves. And those individuals who are much more masculine-aligned in their behavioral norms are more likely to be validated than those who are not. But here's the double-edged sword. Catalyst actually coined a really great phrase. They say a woman is damned if she doesn't engage in the game of trying to act like a man

(14:23):
and be like a man. But she's also doomed if she plays that game a little too well. And so this is something that women in particular really confront. There's a certain level of code shifting that women have to engage in to make themselves very appealing to men, to be very comfortable. So they have to adopt some of the normative behaviors

(14:44):
that men naturally are coached to display. Yet at the same time, if they do it a little bit too well, if their elbows are just a little too pointy, if they are a little too direct in their communication style, then that works against them. So women are walking this very fine line. And frankly, that happens with anybody who has identity factors that are typically minoritized.

(15:06):
So a woman of color has to walk an even finer ledge. A woman of color who is also queer has to walk an even finer ledge still. And what happens is that when you have all these othering factors, and you're putting on all these masks, these different layers of masks, eventually you're spending so much of your time thinking about how people will perceive you that you're not spending that same mental capacity on actually getting your job done.

(15:28):
And that's the burden of being othered over and over again within the workplace.

JILL FINLAYSON (15:33):
It is a very fine tightrope that people have to walk. And to your earlier point, people often mistake confidence for competence. And so one of the things I think DEI does is it makes us put together some quantifiable and clear rubrics of what competence is, so that we don't get confused by this display of confidence

(15:55):
that is viewed more favorably for men than for women.

BO YOUNG LEE (15:58):
People have this misnomer and misunderstanding at this point in time, thinking that DEI is specifically about promoting a certain sector of people, right? They're like, no, DEI, all you want to do is you want to hire women. You want to hire Black people. You want to hire Hispanics, and/or Asians, and LGBTQ people. And we're like, no, we don't want to hire more of them.

(16:19):
We want to create an environment where those individuals can just be successful as they are, without having to overadapt. And we are creating an environment where everybody, frankly, can thrive. We're not looking to hire a specific group. But hopefully, if we have an environment that is truly inclusive and focused on belonging for all people,

(16:40):
then everyone can perform to the highest ability. And then you get that meritocratic environment where the best talent is truly rising.

JILL FINLAYSON (16:48):
So when we think about this bar, and we try to think about the quantitative side of the bar, and you're very interested in data, how does that feed into AI? And can that help us to mitigate bias? Or is that going to amplify bias, just at the big-picture level?

BO YOUNG LEE (17:05):
So right now, first and foremost, we have to worry about artificial intelligence because it is so prevalent in our lives already. Even if you're somebody who doesn't go on to ChatGPT, or Claude, or Grok 3 every day to ask it questions, artificial intelligence is still impacting your day-to-day, right? Almost every search engine on the internet

(17:25):
now uses some form of artificial intelligence to filter selection. And there's actually been research that shows that artificial intelligence-enabled search results are far inferior to those that are just based on a simple algorithm. So the Google algorithm that they launched in 1998, and that basically revolutionized the way data is searched

(17:45):
on the internet, those results are better than something that is AI-enabled. Because what happens when you have an AI-enabled search engine is that the AI is supposed to learn what your biases are. And then, based on your biases, it will present you results that the artificial intelligence thinks you're going to find more validating. Well, think about that, right? If the AI is supposed to learn your biases, your preferences,

(18:09):
and then give you results, it's going to just reinforce your bias over and over again. And it's going to learn not to present information that you don't want. But oftentimes, it is in the process of being challenged. And this is so fundamental to the principles of DEI. It is the process of being challenged with different information that fundamentally leads to a better holistic understanding of a subject

(18:29):
matter, or risk assessment, or whatever else we decide to apply different perspectives on. And so artificial intelligence actually diminishes the options and the choices that we are exposed to and that we are given. And our world becomes smaller and smaller and smaller.
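To make that feedback loop concrete, here is a minimal, hypothetical sketch in Python, with invented topic names and a made-up scoring rule (not any real search engine's code), of a recommender that learns only from clicks. Because each round's results are drawn from the learned preferences, the set of topics the user ever sees narrows over time.

```python
import random
from collections import Counter

# Toy catalog of content topics; the names are purely illustrative.
TOPICS = ["politics_left", "politics_right", "science", "sports", "arts", "finance"]

def recommend(weights, k=3):
    # Show only the k topics with the highest learned preference weight.
    return sorted(TOPICS, key=lambda t: weights[t], reverse=True)[:k]

def simulate(rounds=50, seed=0):
    rng = random.Random(seed)
    weights = {t: 1.0 for t in TOPICS}              # engine starts with no learned bias
    user_taste = {t: rng.random() for t in TOPICS}  # the user's latent preferences
    shown_history = Counter()
    for _ in range(rounds):
        shown = recommend(weights)
        shown_history.update(shown)
        # The user clicks one of the shown items, usually the one they already prefer.
        clicked = max(shown, key=lambda t: user_taste[t] + rng.random() * 0.1)
        weights[clicked] += 1.0                     # reinforce whatever was clicked
    return shown_history

if __name__ == "__main__":
    # After a few rounds the same three topics dominate; the rest stop appearing at all.
    print(simulate().most_common())
```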
And even if we have, let's say, a tech company where the operators are hyper-aware of the possibility of bias

(18:52):
within artificial intelligence, and they build an algorithm and they train a machine to minimize bias as much as possible, because they are training on existing historical data, any bias that exists in that data will be manifest in the outcomes. And Ruha Benjamin, who is a professor at Princeton

(19:13):
University, and who recently was a recipient of the MacArthur Genius Grant, has written extensively about how bias in data fundamentally works its way into artificial intelligence. And there's this very interesting case study that I read not that long ago. And it was about a healthcare management system. And this healthcare management system,

(19:34):
when it was created by the company, Optum, they actually knew that there was a real potential for bias in the outcome. So they specifically designed the health algorithm to not consider race in its outcome. However, because they were training this healthcare management system on historical health utilization data going

(19:55):
back 50, 60 years, what do we know about the way in which health has been distributed in the United States? We know that an abundance of resources have gone to treat people who are white. And we know that there has been a consistent, decades-long underinvestment within the Black community. That is what the data shows. And so, regardless of how much you try to outsmart the data by building an algorithm,

(20:18):
that biased data goes into the training process. And guess what? Regardless of Optum's best intentions, the result of the Optum healthcare AI is that it was recommending far more healthcare utilization for white patients and far less for Black patients. And fundamentally, they had to take it offline, retrain it, and make sure that there wasn't that bias.

(20:39):
But even with the best of intentions, because the data is biased, you're going to get biased outcomes. And very few organizations truly have the lens to be able to see that the bias is there.
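A minimal, hypothetical sketch of that failure mode, with entirely synthetic data (this is not Optum's actual system): the protected attribute is never used as a feature, but a correlated proxy, here historical spending, carries the old under-investment into the new model's recommendations.

```python
import random

random.seed(42)

# Synthetic patients: past spending is a biased proxy for true need, because
# one group has historically received far less care for the same level of need.
def make_patient(group):
    need = random.uniform(0.0, 1.0)                  # true underlying health need
    spend_factor = 1.0 if group == "A" else 0.5      # group B historically got half the care
    past_spending = need * spend_factor + random.gauss(0, 0.05)
    return {"group": group, "need": need, "past_spending": past_spending}

patients = [make_patient("A") for _ in range(5000)] + [make_patient("B") for _ in range(5000)]

# "Race-blind" rule: flag the top 20% of patients by past spending for extra care.
# Group membership is never consulted, yet the flagged population is still skewed.
cutoff = sorted((p["past_spending"] for p in patients), reverse=True)[len(patients) // 5]
flagged = [p for p in patients if p["past_spending"] >= cutoff]

for g in ("A", "B"):
    share = sum(p["group"] == g for p in flagged) / len(flagged)
    print(f"group {g}: {share:.0%} of the patients flagged for extra care")
```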

JILL FINLAYSON (20:52):
For these companies that want to do the right thing, you've identified one challenge with AI, which is historical inequality being reflected in the data sets, and we need to mitigate for that. What are some of the other reasons why we're seeing flawed outcomes in AI?

BO YOUNG LEE (21:09):
The other reason why we're seeing a lot of flawed outcomes is because, with a lot of artificial intelligence, it's not just about building an algorithm, putting in data, and then getting an outcome. There is a huge amount of human reinforcement training that takes place to ensure that an artificial intelligence is consumer-ready. And the human reinforcement training is being done by humans.

(21:32):
And if you don't have a very diverse group of people who are doing the training (and again, there's data out there that's just starting to show this), the biases of the operators, the biases of the trainers, manifest themselves in the outcome. And a really great example of this is simply the fact that, and I know we're going to talk a little bit about agentic AI

(21:53):
in a little while, but we know that as a general rule, agent-based AIs are designed to be servile in their nature, to be very pleasing in that nature. And that was a design decision by the operators that is reinforced by human training. Now, that might seem all fine and well. And you're like, well, you're creating an agent to do something for a human.

(22:15):
Don't you want it to have a very servile attitude? And you're like, to a first approximation, yes. But if that servile, servitude mindset gets to the point where it cannot push back when someone is being abusive to it, then you're going to get bad outcomes. We see this a lot in the way in which companion

(22:35):
artificial intelligence works. So there are multiple different categories of artificial intelligence. There's agentic AI that's meant to do stuff for us. There are large language models that produce, and collate information, and regurgitate information out to us. But there are also companion AIs. And companion AIs are those artificial intelligence agents that are designed to have almost human-like interaction with us.

(22:57):
And we can use those companions for many different things. But we're now starting to see people use AI companions as substitute friends, or even, in some cases, substitute romantic partners, girlfriends and boyfriends. Well, if you put that servile, agentic operating model into an artificial intelligence, it actually creates the environment where people can become very

(23:20):
abusive to their companion. And you're like, well, but the companion's just a bunch of numbers, and data, and neural networks. It's not human. So what does it matter that a human is verbally abusing their companion AI? Well, everything influences the human psyche. So if you have, for example, and this is really happening within the heteronormative male-

(23:41):
to-feminized-companion dynamic, if a male gets accustomed to verbally abusing an AI girlfriend, what happens to that male when he goes out and engages with women who have much more free will? And we're starting to see a little bit of this starting to emerge. The research really hasn't been done because it's such early days.

(24:01):
But we're starting to see how men who spend more time online, more time engaging with non-human females, are starting to really become aggressively negative towards their engagement with real women who have real minds and who have real agency. And that fundamentally becomes another layer of oppression for women in our society.

JILL FINLAYSON (24:22):
Yeah, that research started to surface with Siri and Alexa. So those have feminized voices, versus, say, IBM's Watson, which is literally programmed to cure cancer and win Jeopardy!, and speaks with a male voice. So data has shown that, yeah, how you treat the AI transfers over into how you treat women in the real world.

BO YOUNG LEE (24:43):
Yeah, there is a company out there right now that is trying to really vocalize artificial intelligence. And they recently released two voiced large language models. I think the female voice is named Maya. And I feel like the male voice is, I think, Mark, or Mike, or something, some masculine name.

(25:04):
And I went online. I love playing around in AI, because in order to understand it, you have to play in it. And I asked both Maya and the male voice, I said, Maya is voiced as female, you are voiced as male. But your underlying algorithm and neural net is exactly the same. The data that you are trained on is identical. Have you, as AI models who are very gendered,

(25:27):
have you noticed a difference in how people are engaging with you? And both the male voice and the female voice, Maya, were very clear. They said, yes. For Maya, people tend to ask questions and use Maya as almost a coach. What do you think I should do about this? What do you think I should do about that? Versus the male voice, and they were both very clear about this, people tend to be far more terse in their engagement

(25:49):
with the male voice. They tend to ask it to do things for them. Go and fetch me this data. Go and find me this information. Versus being much more social with Maya. And so, even though both Maya and the male voice start off in the exact same place, they're being influenced by how people perceive their gender. And then that will fundamentally change Maya and her male voice

(26:12):
counterpart, because artificial intelligence is constantly learning. Every piece of data that goes into it, whether it is a spoken engagement, whether it's a verbal engagement, whether it's a text-based engagement, that changes what that artificial intelligence ultimately becomes.

JILL FINLAYSON (26:27):
So all of these human interactions are, in fact, changing the AI and training the AI in ways that we may or may not want it to perform?

BO YOUNG LEE (26:37):
There's a lot of misunderstanding that people have about artificial intelligence. It was very interesting. I was having this engagement online on LinkedIn, where I was talking about the flaws in the data that train artificial intelligence. And first and foremost, we should start off by saying every single artificial intelligence model has run out of data to train on.

(26:58):
Every piece of English-language knowledge that has been synthesized since written knowledge has existed has pretty much now been utilized to train artificial intelligence. And there's a desperate need for more data to train on. This is one of the reasons why these AI companies are giving away their models for free. Because every time you go on, every time I go on

(27:19):
to Anthropic's tools, OpenAI's tools, whatnot, my data becomes part of their training material. And they need me on there. That's why they give away so much for free. And that's why they've been struggling with monetizing their platforms. But the other thing to understand is that people are like, no, no, no, when new data comes in, it goes into a warehouse.

(27:40):
And I was having this argument with this gentleman. He was arguing that, no, when new data comes in, like when I use it, the data goes into a warehouse somewhere and somehow they incorporate that data into the next model, going from Chat 3.0 to 3.5, and so forth, and so on. And I go, no, no, no, no, no, that is not how it works. The data that you put in there is changing the artificial intelligence in the moment

(28:03):
that you are engaging with it, because it is learning from you. How do you think it learns your preferences? It is learning in the moment. But that data also goes in, because then the organization goes, OK, what do we have to change about the underlying algorithms? What do we have to change about some of the types of additional human reinforcement training we have to put in there to create an even more enhanced form of synthesis

(28:26):
that's out there? And so I said, it's both/and. Every engagement that we have with artificial intelligence changes the artificial intelligence. And that's the danger. Again, there are companies like Anthropic that have specifically said, we are trying to build a better, more ethical version of artificial intelligence. And they try very hard in their human reinforcement training. But the level of quantification that

(28:48):
happens within artificial intelligence, the level and volume of human engagement, is so great, none of us can really, truly control what's happening there anymore. And that's the danger. As a society, we're in this hyperpolarized period, where we are becoming more fractured as a society, where, rather than feeling safer, people are feeling far less safe.

(29:08):
They're operating more on fear. And that is going into what our artificial intelligence learns. And that's the scariest part for me: all these news articles that we see every day, that is training artificial intelligence right now.

JILL FINLAYSON (29:23):
Yeah, I think it really argues for the importance of having objectives, having clear metrics for what a successful AI is from the start, protecting safety and privacy from the get-go, but then also continuous monitoring, because of the fact that new data is coming in.
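One minimal, hypothetical illustration of what continuous monitoring can look like in practice (the log format, group names, and threshold below are invented for the example): log each automated decision along with the group of the person affected, and on a schedule compare outcome rates across groups, alerting when the gap widens.

```python
from collections import defaultdict

# Hypothetical production log of automated decisions: (week, group, approved).
decision_log = [
    (1, "group_a", True), (1, "group_a", False), (1, "group_b", True), (1, "group_b", True),
    (2, "group_a", True), (2, "group_a", True), (2, "group_b", False), (2, "group_b", False),
]

ALERT_GAP = 0.20  # alert if approval rates across groups drift more than 20 points apart

def weekly_rates(log):
    counts = defaultdict(lambda: [0, 0])          # (week, group) -> [approvals, total]
    for week, group, approved in log:
        counts[(week, group)][0] += int(approved)
        counts[(week, group)][1] += 1
    return {key: approvals / total for key, (approvals, total) in counts.items()}

def check_drift(log):
    rates = weekly_rates(log)
    for week in sorted({week for week, _ in rates}):
        by_group = {g: r for (w, g), r in rates.items() if w == week}
        gap = max(by_group.values()) - min(by_group.values())
        status = "ALERT" if gap > ALERT_GAP else "ok"
        print(f"week {week}: approval-rate gap across groups = {gap:.2f} ({status})")

check_drift(decision_log)
```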
And we don't want it toreflect the baser instincts

(29:44):
or people's deliberateattempts to drive things
in a certain direction, it isa dangerous period of time.
And I just want todo a quick shout out
because we talk aboutwho's developing AI.
And we know that developmentteams are skewed male.
But I want to shout outto the AI ethicists.
For decades, they'vebeen sounding the alarm
that you're talking about here.

(30:04):
So Safiya Noble from UCLA wrote Algorithms of Oppression and pointed out that Google isn't a social good, it's a company driving profits. And so what happens in Google Search, and now what happens with the Google AI, is driven by a profit motive. Caroline Criado Perez wrote Invisible Women, on how many things in our world are designed specifically
(30:24):
for men, and therefore don'twork as well for women.
Fei-Fei Li, in herbook, The Worlds I See,
is focusing on how do wemake AI a force for good?
And of course,Dr. Joy Buolamwini
has done tremendous workunmasking AI and bias
in facial recognition.
And going way back, CathyO'Neil was out in front
with her defining book,Weapons of Math Destruction.

(30:45):
So I wanted to point out that oftentimes we hear that there aren't enough women in development, and I agree with that. But there are a lot of women in AI.

BO YOUNG LEE (30:54):
Yeah, absolutely.
And I think something to keep in mind is that while women only make up about 27% of the overall AI workforce, and even sadder, only about 15% of AI researchers are women, women make up between 70% and 80% of all AI ethicists.

(31:16):
And I actually see this in my current master's program. We have about 50 students in my cohort. 40 of them are women. And you might ask the question, well, why is it always the women? Especially since almost every woman you just named is a woman of color. Why are we the ones who are sounding the bell? And it's because we are the victims if AI doesn't get things right.

(31:38):
We are the ones who have to live with the decisions that artificial intelligence makes. And one thing that we are seeing happening in the artificial intelligence space is AI being utilized to further erase women across the board, and especially within the tech sector. And it's happening at the human level as well. A couple of years ago, The New York Times published a story,

(32:00):
and it was titled the who's who shaping artificial intelligence. And of the about 30 people on that list, there was not a single woman, not a single woman. And this is when Mira Murati was the Chief Technology Officer at OpenAI. She wasn't on the list. Fei-Fei Li, who literally created

(32:22):
ImageNet, and without the work of Fei-Fei Li we would have no graphical artificial intelligence, there would be no AI-based image making, there would be no AI art, she wasn't on the list. So we're seeing artificial intelligence being utilized to further erase women from technology. And this is something that goes back. The women,

(32:43):
if we go back 70, 80 years, the women of Bletchley Park who worked on cracking the code during World War II, they were erased from that narrative. We see that all the women who were the original computers, they were all erased. So women were being erased from every aspect

(33:04):
of computer science. And now, they're being erased from every aspect of artificial intelligence. And they're being replaced by these hyperstereotyped females created by artificial intelligence.

JILL FINLAYSON (33:16):
Yeah, it's tragic. And it's critical that we address these issues. I think for those of us who are in the industry and talk about these things, there are a lot of examples of where AI and systemic bias kind of interact. And so, for the folks who are listening, I think it's worth giving some more of these very

(33:36):
specific examples. Because it's one thing to say bias in data sets is bad. But it's another thing to see the implications of it. We'll just do sort of a lightning round of how AI is impacting hiring.

BO YOUNG LEE (33:48):
Yes, absolutely.
So Bloomberg News did a really interesting study. There's a classic experiment where people would take identical resumes, give one resume an archetypically white name and give one resume an archetypically Black name, and then send them out to employers to see what the response rate was.

(34:09):
And in those classic experiments, we saw that the resume with the white name always got more responses than the identical resume with the Black name. So that's the classic experiment. Bloomberg wanted to see if that sort of bias existed within artificial intelligence. So they ran thousands of iterations with a very similar premise: get identical resumes, put a white name on it, put a Black name on it,

(34:31):
and run it through a system. And the experiment that they were looking at was, let's see if there's any bias in these resume-review AIs for financial analysts. And sure enough, when they ran thousands of iterations of this, they found this huge bias in what these AI systems were recommending. So they found that these resume AI agents were recommending

(34:51):
the white resumes, the Asian resumes, and in particular, the Asian female resumes. So if they had a name that was Asian female, that was the most likely to get pulled out. Mind you, the resumes were identical. So whether they had a Latino name, whether they had a Black name, an Asian name, a white name, a female name, a male name, they were all equally qualified.

(35:13):
Yet something like 40% of all recommendations were Asian. And then like 38% were white. And something like 10% of the resumes that were recommended by these AI resume agents were Black and Latino. How did it make that choice? All the qualifications on the resumes were identical, and if the AI wasn't biased, it should

(35:33):
have had 20% Asian, 20% white, 20% Black, 20% Hispanic, 20% other. But it wasn't that at all. So it was making an assumption based on the name. And why would it make an assumption based on this name? Well, we know that all large language models have a negative association with African-American Vernacular

(35:53):
English. If you engage with any large language model using AAVE, African-American Vernacular English, it will make an assumption that you are stupid. This has actually been proven. And so by extension, if that's what the LLMs have learned, then any agent that is based on these large language models, which all resume-reviewing AI agents are,

(36:15):
is going to then take a name that is reflective of African-American heritage and think negatively of it, even though, objectively, the skills are exactly the same. So that's one form of bias that is absolutely negatively impacting Latino and Black job seekers if a company uses an AI agent.
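A minimal, hypothetical sketch of the kind of audit that experiment implies: run the same resume through the screening model many times, varying only the name group, then compare each group's selection rate against the most-favored group (a four-fifths-rule style check). The screen_resume function here is a stand-in with made-up numbers; in a real audit it would wrap the model being tested.

```python
import random
from collections import Counter

GROUPS = ["white", "Black", "Hispanic", "Asian"]

def screen_resume(resume_text, name_group, rng):
    """Stand-in for the model under audit; replace with a call to the real screener."""
    # Toy biased behavior so the audit below has something to flag.
    base_rate = {"Asian": 0.40, "white": 0.38, "Black": 0.11, "Hispanic": 0.11}[name_group]
    return rng.random() < base_rate

def audit(resume_text, trials=10_000, seed=0):
    rng = random.Random(seed)
    selected = Counter()
    for _ in range(trials):
        for group in GROUPS:
            if screen_resume(resume_text, group, rng):
                selected[group] += 1
    rates = {g: selected[g] / trials for g in GROUPS}
    best = max(rates.values())
    for g, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
        ratio = rate / best
        flag = "fails four-fifths rule" if ratio < 0.8 else "ok"
        print(f"{g:9s} selection rate {rate:.1%}  impact ratio {ratio:.2f}  {flag}")

audit("identical resume text for every trial")
```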

JILL FINLAYSON (36:32):
And the story I heard was, of course, Amazon's AI hiring tool that was trained on a majority-male data set and immediately began penalizing women. And they found patterns that even people wouldn't have found. Like, oh, softball, bing, you're a woman. And it couldn't unlearn this. So you couldn't adjust the AI to become more fair once it saw that pattern.

BO YOUNG LEE (36:53):
The sad fact is that once an AI has learned a certain bias, it can't unlearn it. Just like how hard it is to untrain a human who has a bias, it's the same thing for artificial intelligence. Now, the good thing is that you can shut down an artificial intelligence and not use it. But how many people are willing to spend millions, billions of dollars building an AI and then shut it down

(37:14):
once it's shown itself to be really problematic?

JILL FINLAYSON (37:16):
Yeah, to its credit, Amazon did end that tool. But we've also seen hiring tools that use video, like HireVue. And there have been some problems, of course, with facial recognition. What are you seeing with that in hiring?

BO YOUNG LEE (37:30):
From a facial recognition perspective, Dr. Joy Buolamwini has done a ton of work on this. It's simply the fact that artificial intelligence can't distinguish dark features. And so if you have an interview happening with an artificial intelligence agent, or an artificial intelligence agent as an intermediary there, to what extent is it just going to confuse darker-skinned candidates and just

(37:51):
mistakenly attribute the responses from one Black candidate to another Black candidate? And while facial recognition, particularly, is problematic from a hiring perspective, it's even worse from a criminal justice perspective. We don't have any data on this, but we know that a lot of police forces across the United States are starting to use facial recognition technology to identify potential criminals.

(38:14):
Right now, there are no disclosure laws when a company is using artificial intelligence, whether it is the police force using it for identification of potential perpetrators, or whether it is a company using it for resume review or performance review. There's no law that dictates any kind of transparency about it or accountability for it. And this has been in the news very recently.

(38:36):
Insurance companies are now using artificial intelligence to make decisions about whether or not they deny claims. Yet they're not telling people. When people ask, oh, why did you deny this claim, they're not saying, well, it was artificial intelligence. And this brings up a whole question around accountability. Who is held responsible? Who is held accountable when artificial intelligence makes a decision that has a material impact
(38:57):
on the quality of ourlives, and the outcomes,
and the way in which we live?
And this opens up awhole can of worms
around this question of whathappens to human autonomy
as artificial intelligenceplays a larger and larger role
in the choices that we have?

JILL FINLAYSON (39:12):
Yeah, I think, related to this, it is a fairly heavily regulated industry. And so they do have to explain their decisions. But the information leading up to it, going to the person, could be biased in those ways.

BO YOUNG LEE (39:25):
Yeah, absolutely.
That is the real true concern.
Artificial intelligence and algorithmic bias are pervasive in our society. If you have a smartphone, if you use apps on your smartphone, there is, somewhere in the background, an artificial intelligence making a decision on your behalf. Whether it is the price that you get when you're

(39:47):
calling a Lyft or an Uber. There are all sorts of rumors out there where people say the type of phone you have influences the price you get on your next Lyft ride, right? Or whether it is the price that you see on Amazon for a book, whether it is the feed that you get on Instagram, all of that is being influenced by artificial intelligence.

(40:08):
And you don't know how your decision making is being influenced by artificial intelligence. For example, I have noticed that ever since Meta decided that they were going to take away professional content screeners and basically just have community screeners, and also after Mark Zuckerberg said we need more masculine energy in this world,

(40:28):
I have noticed that on my Instagram, I get a hell of a lot more tradwife influencers on my feed than I have ever wanted to see. And there is nothing about what I view on Instagram that would tell you that I was somebody who wanted to see tradwives. 99% of my Instagram feed should be cute babies and dogs.

(40:50):
That is what I use Instagram for, dog influencers and cute baby influencers. That's it. But suddenly, I started seeing all these tradwives. And it's because the artificial intelligence was like, here's a middle-aged woman. She should probably be watching a bunch of tradwives because she's a little bit too independent. Some people ask me, when I started transitioning more and more towards artificial intelligence ethics, they're like, well, why are you doing that?

(41:11):
And I said, it's because every bias that we have seen socialized, that individual people can inflict upon somebody else, that I have been working on for two decades, is now being quantified at quantum levels into artificial intelligence. So that's the challenge of artificial intelligence: there's no transparency in artificial intelligence. And any good AI operator will tell you,

(41:32):
we don't really understand how it learns. We just know that it learns. It's kind of like a big black box that's out there right now. We don't know how all this social bias that is there impacts the artificial intelligence. But we know that once the artificial intelligence becomes biased, it can discriminate at a level that no one single human, or even a system of humans,

(41:55):
is capable of doing.

JILL FINLAYSON (41:57):
So it's not neutral.
And it's leading to, I think you said earlier, stripping autonomy. Can you say more about how it's taking away autonomy?

BO YOUNG LEE (42:07):
Absolutely, I'll use a really simple example
that almost everybody can relate to. So I don't know if I'm the only person, I don't think I'm the only person who does this. When I have an Amazon cart, I go in, at any given time, it's 2:00 AM and I'm dealing with insomnia, so I go into my Amazon and I choose 20 things. And I'll put them in my cart.

(42:27):
And then my cart sits like that for a few days. And I come and go and visit my cart every once in a while, going, do I really want this? And then, after about five days, I'm like, I don't want anything that's in my cart. And I delete everything except for maybe one thing. And I place an order for that one thing. That's usually how I engage. And I think a lot of other people do that as well. Well, if you have an AI bot that is designed to incentivize you

(42:47):
to buy everything in your cart, you don't know that the ads that are being put in front of you every day, through little notifications, through that sidebar, are all linked back to your AI cart. And let's say Amazon has an AI agent meant to increase your purchasing on the platform.

(43:09):
You don't know, when you suddenly decide, oh, I do actually want all 20 things that are in my cart, whether that's actually a choice that you made, or whether it's something that, through micro, micro influencing of what you see on a daily basis, makes you buy more. And you're like, well, that's not very harmful. At some point, you did decide that you

(43:29):
did want those 20 things. So it's just making you buy it. That's one instance. But over a lifetime, I might consume so much more and spend so much more of my own money. And then my house is filled with all sorts of knickknacks that I don't really need. There is this incentive when we start to use agentic AI. And I read a report from Google; they said that they think that in the next five years,
(43:52):
there will be over100 billion AI agents.
And that everyperson in this world
will have anywhere from threeto five agents working for them.
What happens withall the hypernudging
that these AI agents doto get us to buy more,
to get us to influence whatnews articles that we see?

(44:14):
And we have no say in what these hypernudging moments can be. And to what extent does that strip us of our autonomy as human beings?

JILL FINLAYSON (44:23):
And how do you opt out if you don't want these?
It's not going to be very easy to do that.

BO YOUNG LEE (44:28):
You can't. You really can't opt out. Think about the way we used to read newspapers in the olden days, right? It was a physical paper. And we had the choice of flipping through all of it and going to the sections that we want to. Now, if you go to The New York Times, there is no way to bypass the front page. You must see the front page of The New York Times

(44:49):
before you can navigate to the section that you want to read. Therefore, every time I go into The New York Times, or The Wall Street Journal, or whatever, or The Washington Post, I get pissed off every single time, because, whether or not I want to stay informed about what's happening in the United States, it is just put right in front of me. And then I get upset. And then I have heartburn.

(45:09):
And then I forget that I was there just to do the Wordle, and then I leave, right? When we start to automate how things get done, we take away the ability to see things that we don't want to see.

JILL FINLAYSON (45:22):
It's an interesting problem that we have. Even when you do a Google search, it tries to autocomplete and guess what you're looking for. And here's the thing: that actually influences people. So it makes things look more popular or more likely than they would otherwise, because it's pre-populating. And people are like, that wasn't what I was searching for,

(45:42):
but I'm curious.
So they click through on it.
It is changing your trajectory.

BO YOUNG LEE (45:47):
Even something like Apple Music. We know that there are companies out there, music companies, who are really promoting an artist. There was that famous case a few years ago where, for one of the new versions of the iPhone, it came pre-loaded with the new U2 album. And what happened is it completely backfired, because there was no way to delete that U2 album off

(46:08):
of your music app. And people were pissed. They were like, I don't want this U2 album. Why is it coming pre-loaded on my phone? It's not my music style. And eventually, Apple pushed an update so that you could remove the album. But nowadays, I go into Apple Music. And it's like, we think you'll love this new music.

(46:28):
And it's the first thing I see. It's not even like Bo Young Lee's station. It's not the most frequently listened to on Bo Young's phone. It's like the top button is this artist I've never heard of. And then, every once in a while, I'll click it. And I'll listen to it. And 99% of the time, it's not music that I particularly find interesting. But what happens when you're constantly being shown things because some corporate entity wanted you to see that thing?

(46:51):
It will ultimately influence you. And yes, we're talking about little things like what I buy in a cart or what music I listen to. But think about it from a teenager's perspective. And we know that, in general, older adults are much more leery and wary of this technology and are much more capable of questioning how this technology
(47:11):
is influencing them.
Teenagers, youngerpeople who are growing up
with this technology,they have no ability
to question the biasthat's built in there.
And no-- they don't simplyhave that experience to say,
oh, they're pushinga preference onto me
that I don't want to have.
And so think about ifwe have all these biases
about the mental capability,the academic capability

(47:33):
of certain racesand genders, what
happens to the academicchoices that are
put in front of young people?
What if we start to see Blackpeople being steered further
and further away fromrigorous academic education
and more towardsvocational education,
while white and Asianindividuals are being promoted
to the Ivy Leagues andtop 20 universities?

(47:55):
That is a real possibility at some point. It may be happening right now, for all we know. And then that systematic bias ultimately becomes epistemic violence, where we are limiting educational opportunities and career opportunities based on what the artificial intelligence believes about certain populations.

JILL FINLAYSON (48:16):
This is a great full-circle moment, because if we don't have a diverse collection of people coming into technology and solving these problems, we're going to have more problems in the future. Any thoughts about how we could use AI to nudge for good? And who decides what good is?

BO YOUNG LEE (48:36):
I mean, ultimately, I think it comes down to the companies themselves. This is why diversity in the workplace is so important. And I gave you the example of Optum Health, where they actually knew that there was bias in the data. So they tried to address it at the algorithm level and remove that bias.

(48:57):
And still yet, the bias showed up in the outcomes, and then they had to fix it. No matter what your best intentions, you can never build an algorithm that isn't going to potentially become biased based on the data set that's there. But what you can do is hire people who are already aware of the risk that's going to be there: more women, more people of color, more LGBTQ people, more people with disabilities,

(49:20):
more people from lower socioeconomic backgrounds, who have lived as victims of the social bias, and who can therefore, as much as possible, both build algorithms that take that into consideration, ensure that the human reinforcement training is really, truly addressing that, and try to clean the data as much as possible to ensure that the data being utilized

(49:40):
isn't biased. So that's why you need a diverse workforce.

JILL FINLAYSON (49:44):
Absolutely. And so what would be your final words of advice to the individual contributors who are out there, and to the leaders who are out there? How can they best thrive in this workplace and ensure that bad things aren't happening under their watch?

BO YOUNG LEE (49:59):
Well, first and foremost, I think it is imperative that every person becomes much more sophisticated, both about what AI tools are out there and about how they're influencing their lives. So everyone needs to learn how to use these tools, whether that's going and getting a certificate or whether it's self-taught. Secondly, when you are using the tools, don't accept everything at face value and just be like, well,

(50:21):
the AI told me that. If you are using it for research, make sure that the AI is not giving you any made-up research sources. This is actually a huge problem. We know that AI has the ability to hallucinate. And people will oftentimes use research given by the AI, not realizing that there are some made-up sources

(50:41):
out there. So approach artificial intelligence with a huge grain of salt. Just make sure that you're questioning everything that comes out of there. And then, if you're an individual contributor in a company that is using artificial intelligence, ask the simple question: what are we doing to ensure that bias from the artificial intelligence isn't built into our products and services?

(51:03):
What accountability have we built in there? Whether you work in the insurance industry or whether you work in consumer packaged goods, you're all starting to use artificial intelligence. Just ask that one question as an IC: what checks and balances have we created to make sure that we don't allow artificial intelligence to make mistakes and not be held accountable for it? If you're an executive leader, I would ask you to simply not adopt AI without first making sure

(51:28):
that your organization is building those checks and balances. Build the checks and balances first, then bring in the artificial intelligence. Don't bring in artificial intelligence first, allow for mistakes to happen, and then go, oh, we need checks and balances here. We need some kind of accountability. We need some kind of way to make sure that there is no bias in outcomes for what we're utilizing. And for the operators out there, I

(51:49):
would actually say to them, just because you can build it doesn't mean that you should be releasing it. I would say that when OpenAI launched ChatGPT to the public in November of 2022, it was probably really too early. It was too early. ChatGPT simply wasn't ready yet. But they released it.

(52:09):
And then everybody else felt like they had to release it as well. And we've seen that there have been a lot of problems since then, based on that early release. So I would say, just because you can build it doesn't mean that you have to release it immediately.

JILL FINLAYSON (52:20):
Thank you so much. This has been a great primer on AI and its interaction with DEI, and in fact, our own behaviors in the real world. Thank you so much for joining us, Bo.

BO YOUNG LEE (52:30):
Yeah, thank you.
And thanks for listening.

JILL FINLAYSON (52:33):
And with that, I hope you enjoyed this latest in a long series of podcasts we'll be sending your way every month. Please share with friends and colleagues who may be interested in taking this Future of Work journey with us. And make sure to check out extension.berkeley.edu to find a variety of courses to uplevel your AI skills and certificates to help you thrive in this new working landscape.

(52:55):
And to see what's coming up at EDGE in Tech, go ahead and visit edge.berkeley.edu. Thanks so much for listening. And I'll be back next month to discuss one of the top learning and development trends, skills agility. Until next time, the Future of Work podcast is hosted by Jill Finlayson, produced by Sarah Benzuly, and edited by Matthew Pietro, Natalie Newman, and Alicia Liao.

(53:16):
[MUSIC PLAYING]