
October 20, 2022 25 mins

Artificial intelligence – it’s not the easiest thing to trust when it comes to our healthcare. Will AI replace our doctors in the future? There’s a lot of uncertainty about algorithms deciding our medical fate, so Liberty and Scott are getting the truth about AI’s role in healthcare. They go to the experts to explore the problems behind biased algorithms and faulty diagnoses, and to find out whether AI will cause harm to patients or advance the medical field further than it could otherwise go.


Liberty and Scott talk with Caroline Uhler, co-Director of the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard; and Niels Olson, Chief Medical Officer at the United States Defense Innovation Unit.


Data Nation is a production of the MIT Institute for Data, Systems, and Society and Voxtopica.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.


(00:02):
[THEME MUSIC]
Welcome to Data Nation. I'm Munther Dahleh, and I'm the director of the MIT Institute for Data, Systems, and Society. Today on Data Nation, Liberty and Scott are looking into the role of artificial intelligence in health care.
[THEME MUSIC]


(00:24):
So, Liberty, you and I have talked about this a lot. There are a lot of mixed opinions on artificial intelligence. Some people think it's the future; some people are worried AI is going to take over and replace humans' jobs, the world. Other people think it might be a little overblown. I know. You think it's going to be Ready Player One, or maybe you hope it's going to be Ready Player One. But, I mean, the other place we're seeing this conversation really explode

(00:46):
is in health care, where AI is being used more and more. And some people are saying that AI will make health care better, but others are pretty scared that AI could cause harm when it comes to their own health. So do you think we're going to end up with AI doctors? I mean, I don't know, I'd probably let one operate on me, but I'm not sure society is. Do we think this is a misconception that there's

(01:07):
going to be an AI doctor? What do we think the future of health care AI is going to really look like? I think before we define the future of AI, we all have to be on the same page about what it means when we say, quote, "AI in health care." When we talk about AI in health care, we're talking about machine learning. So let's say we want to create a program that detects lung

(01:30):
cancer in patients' CT scans. So this is how AI would learn to detect something like that. Basically, a large data set is collected of patients who were previously diagnosed, and a human hand-labels which scans have lung cancer and which don't. Then this data with the human labels is fed into an algorithm, where the program would analyze

(01:52):
the difference between the two groups, lung cancer and non-lung cancer, to determine its own meaning of what lung cancer on a CT scan looks like, so it can identify it on new, unlabeled cases. So it's interesting because in this case and in future cases, the AI learns based on the data sets that humans collect and label, but the machine really

(02:14):
decides for itself what to look for and how to identify something peculiar in a scan. So that's just an interesting example of AI in health care.
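
Here is a minimal sketch of that supervised-learning loop, in Python with scikit-learn. The data is synthetic and the feature columns are hypothetical stand-ins for measurements extracted from CT scans; a real system would train a deep network on the images themselves.

```python
# A minimal sketch of the train-then-apply loop described above.
# Synthetic data stands in for features extracted from labeled CT scans.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Each row is one scan; each column is one extracted image feature.
X = rng.normal(size=(1000, 32))
# Human-provided labels: 1 = lung cancer visible, 0 = not visible.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 1000) > 0).astype(int)

# Hold out scans the model never sees while it "determines its own
# meaning" of the label from the two human-labeled groups.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Apply the learned rule to new, unlabeled scans and score it.
probs = model.predict_proba(X_test)[:, 1]
print("AUC on held-out scans:", round(roc_auc_score(y_test, probs), 3))
```
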
We want to explore a little bit more realistically how much AI can really influence decisions made by practitioners going forward. So for that, we're going to ask Niels Olson. Niels is a board-certified pathologist

(02:36):
and chief medical officer at the United States Defense Innovation Unit. Niels, I want to understand what AI in health care actually looks like. I mean, are we going to get to the point where a doctor doesn't need to go read the X-ray because there's a machine that's read it and it says the patient has cancer? Is this where we're headed? We want to know what data's actual role in health care is.

(03:00):
So for pathology, one of the systems evaluates prostate cancer, but it only does it after the pathologist has already made a decision, but before they sign it out. So it doesn't influence their decision. If they find cancer, it doesn't adversely affect them finding the cancer. But in the rare case that they

(03:20):
might have missed something, it'll backstop them and say, hey, you should look over here before you sign this thing out. And then for radiology, there's another, somewhat more assertive, effort to organize the worklist. So as soon as the X-rays come in, the radiologist is getting a list of these things, and they're just coming in order, time order.

(03:42):
So these AI systems can evaluate them while the radiologist is working their way down from the top. And it can look at them very quickly and say, hey, this thing looks bad, and it will quietly move that to the top. So you've got fresher eyes earlier in the day looking at this stuff and getting an answer

(04:03):
to the bad things faster. Now, implicitly, the normals should float to the bottom, and so you'll get to the normals later in the day. What you want to be careful of in that case is making sure that there aren't any bad things in the normals. But generally, it's like Christmas lights. If you're looking for houses with no Christmas lights,
(04:26):
the one that has Christmas lights stands out.
[THEME MUSIC]
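
The worklist triage Niels describes is, at bottom, a priority queue keyed on the model's suspicion score. A minimal sketch, assuming a hypothetical score already attached to each study; ties keep their original time order, so likely normals drift to the bottom:

```python
# Sketch of AI worklist triage: studies arrive in time order, each gets
# a model suspicion score, and the worst-looking cases float to the top.
import heapq

worklist = []   # min-heap; we negate scores so the highest pops first
arrival = 0     # tie-breaker that preserves the original time order

def add_study(study_id, suspicion):
    """Called as each X-ray arrives, with the model's score in [0, 1]."""
    global arrival
    heapq.heappush(worklist, (-suspicion, arrival, study_id))
    arrival += 1

def next_study():
    """What the radiologist reads next: most suspicious case first."""
    return heapq.heappop(worklist)[2]

# Three studies arrive in time order; the middle one looks bad.
for sid, score in [("A", 0.05), ("B", 0.97), ("C", 0.12)]:
    add_study(sid, score)

print([next_study() for _ in range(3)])  # ['B', 'C', 'A']
```
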

Based on what Niels told us, algorithms can assist doctors in making decisions. But the real question is, are we actually better off using these algorithms? I mean, one could argue we're better off with the example of the COVID-19 vaccine.

(04:46):
When developing any kind of vaccine, clinical trials are the golden ticket to FDA approval. Every vaccine needs to undergo thousands of trials to help protect patient safety and increase its ability to fight the virus variants. Before COVID-19, the quickest vaccine to be developed was the mumps vaccine, with a record-breaking four-year development period. However, the COVID vaccine was created in just

(05:08):
under a year, a fact that caused a lot of people to question its validity and safety. But the thing is, it was developed so quickly because of help from artificial intelligence. Researchers used AI to run trials against different variants. AI was able to change components, run tests, and record results for trials faster than any human could. AI also created useful data sets for scientists

(05:28):
to figure out the underlying causes of successes and failures. So with the help of AI, we were really able to rapidly create a vaccine that saved millions of lives. Sure, but this is vaccine development. We have to look at all of the other aspects of health care, like detecting and treating cancer and chronic illnesses. And so for these questions, we decided

(05:50):
to bring in Caroline Uhler. Caroline is the Doherty Professor in the Department of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society at MIT. She's an elected member of the International Statistical Institute, and she is the recipient of an NSF CAREER Award. Caroline, it seems like AI can play a really useful role

(06:10):
in health care. We saw it with the COVID vaccine development, and we've seen how it can, for example, detect cancer in X-rays. But sometimes more technology, more data doesn't necessarily mean better care, better outcomes. So is using an algorithm actually better than the people that we have on the ground?

(06:31):
Of course, the question is, what does better mean? So in this case, what you would like is that you might be able to detect it earlier. This is really important here. And I think this is something that has been observed with current methods, some developed exactly by Regina Barzilay at MIT, where they were really able to show that they could detect,

(06:51):
in particular, in breast cancer, they could actually detect it much earlier than a pathologist could currently detect it. So yeah, an algorithm is able to see many, many more examples than a human ever is, and so might be able to pick up things that are not so evident yet. So what are the potential problems with having AI detect and predict people's health conditions?

(07:11):
I think the questions that we generally have are questions about generalization, whether you are really picking up-- and that's why I also work a lot on causality-- whether what the machine is actually picking up are causal features that will then generalize also to other settings. So, let's just stick to the breast cancer example: you have trained your machine in one particular hospital setting

(07:35):
where the images have been taken by a particular person with a particular machine. So now, of course, you want it to then be generalizable to other hospital settings, where you might be taking images with slightly different machines and also by other people, so that your algorithms are not just too specialized to the particular application area that you're looking at now, and will generalize also across.

(07:58):
And in particular, here, you're also going to train it on data from a particular group of people. And so then you definitely also want it to be generalizable across different populations of people, because this one particular hospital might be sitting in one particular place. And so these are difficulties that we currently face,

(08:18):
and that it's really important to be able to analyze better when these different algorithms do generalize well and when they actually don't generalize well. And I think here an example is where we are often using data from, say, the UK Biobank, which is, of course, predominantly European white. And so then it is really important

(08:39):
that we actually test and understand in which settings these particular conclusions that we find in this particular data set will generalize also much more broadly.
[THEME MUSIC]

Starting with a nonrepresentative sample, or potentially biased sample, and then trying to generalize that information

(09:00):
to the whole population gives us a problem that people are really starting to recognize in the mainstream news, and that's that machines make biased decisions even when they weren't taught to do so. Recently, doctors have been using AI to read chest X-rays. Historically, chest X-rays have never been able to tell the race of a patient. But when initially testing the AI to read chest X-rays,

(09:22):
doctors noticed that the AI could, with 90% accuracy, tell the self-reported race of the patient. So to be clear, the problem here isn't the fact that the race of a patient can be determined from their X-ray. The problem is that AI systems tend to diagnose with the patient's race being a factor, and an AI reading of a chest X-ray is more likely to neglect a sign of illness

(09:42):
in Black and female patients. Scientists haven't been able to determine how medical X-rays are able to reveal the patient's race, and they're finding that it might be incredibly difficult to create an algorithm that does not have any racial bias. This is an example of machines making biased decisions. But there's another problem, and that's that machines learn, or can learn, on biased data.

(10:06):
There was this algorithm from a company called Optum, and the algorithm predicted which patients would benefit from extra medical care, with the goal of preventing these people from getting to the point where they needed to go to the hospital. But the problem was that this algorithm ranked patients based upon health costs, assuming that the more someone paid in medical costs, the more medical care they must need.

(10:28):
So you can start to see what the potential problems with this are. Because of health disparities in the US, African-American patients incur about $1,800 less in medical costs per year than an equivalent white person. So with this data, the algorithm ranked much sicker African-American patients as equally

(10:49):
in need of extra care as much healthier white patients. So I think what that means is it's so important to note that the data was not intentionally created to be biased. The algorithm didn't even know if the patient was white or African-American, but the existing health disparities were taken into account, which then caused this machine

(11:10):
to learn on biased data and give unfair advantages based upon race. So at the end of the day, these problems can have huge consequences. Most doctors using these algorithms would be unaware of the bias present, which then has the potential of creating a deeper disparity in health care.
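
A toy simulation makes the mechanism visible. Nothing below is real data; the numbers are invented except for the roughly $1,800 spending gap cited above. The point is that ranking by a cost proxy flags disadvantaged patients only once they are considerably sicker:

```python
# Toy illustration of the proxy-label problem: the algorithm ranks by
# predicted *cost*, but what we actually care about is *illness*.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

illness = rng.gamma(shape=2.0, scale=1.0, size=n)  # true need
group_b = rng.random(n) < 0.5                      # disadvantaged group

# Unequal access: group B incurs lower costs at the same illness level
# (the bias already baked into the historical billing data).
cost = 3000 * illness - 1800 * group_b + rng.normal(0, 500, n)

# Flag the top 10% by cost for extra care, as the proxy model would.
flagged = cost >= np.quantile(cost, 0.9)

# Flagged group-B patients turn out to be sicker than flagged group-A
# patients: equal "risk scores," unequal actual need.
print("mean illness, flagged group A:", illness[flagged & ~group_b].mean())
print("mean illness, flagged group B:", illness[flagged & group_b].mean())
```
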
So we wanted to go back to Niels Olson to ask: what does it take to fix the bias in data?

(11:33):
We've seen that there's enormous bias in the data. We have AI systems that will rank much sicker African-American patients as equally in need of extra care as much healthier white patients. I mean, you see this bias in the data that happens regularly with insurance systems or whatever. So could that same issue be applied in an AI setting

(11:53):
with medical data, where you get enormous bias from the data itself that makes the whole system wrong? Yes, actually, I think this is a place where AI is a solid win, because you can measure it. You can measure the bias. I know how much, and I can go hold the algorithm

(12:13):
developer accountable. What it does do is put an incentive on developing those test sets, so I can do the measurement of how much. And I think that's something that most of these AI conversations miss: the massive value in developing independent test sets for verification and validation.

(12:36):
That independent verification and validation step is critical, and the algorithm developers don't want to talk about that cost. That's really expensive.
[THEME MUSIC]
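
Concretely, the measurement Niels is pointing at can be as simple as a per-group false-negative rate on an independent, human-verified test set. The records below are hypothetical; what matters is that the number is computable and can be attached to a specific algorithm developer:

```python
# Sketch of a bias audit on an independent test set: for each group,
# report the rate of missed findings (false negatives).
from collections import defaultdict

def audit(cases):
    """cases: (group, ground_truth, model_prediction) triples from a
    held-out test set the developer never trained on."""
    positives, missed = defaultdict(int), defaultdict(int)
    for group, truth, pred in cases:
        if truth == 1:                 # a finding is actually present
            positives[group] += 1
            if pred == 0:              # ...and the model missed it
                missed[group] += 1
    return {g: missed[g] / positives[g] for g in positives}

test_set = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 1, 1), ("B", 0, 0),
]
print(audit(test_set))  # miss rate per group: {'A': 0.33..., 'B': 0.66...}
```
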

So it seems like there are some methods and some possibility of solving biases in algorithms. But like Niels says, it's expensive.

(12:57):
Yeah. And so I think, when we were talking to Caroline, I was really curious how she approaches this problem of bias, and, I guess, specifically what needs to happen for it to be fixed. Caroline, how do we make industry actually care to fix the problem of biased algorithms? Besides doing the right thing, are there other incentives

(13:19):
we can give them?
So a lot of people do want to fix this. And so there is a lot of really important research going on in this area, where, again, it's always this counterfactual question of understanding: if all I changed is race, for example, would the outcome be different? If everything else would be the same, but this is something that I'm changing.

(13:40):
And so this is the counterfactual. And if this is a problem, if this is really the case, then, obviously, something is wrong in the data set. There are biases in the data set that you don't want to be perpetuated. And so this is the type of research that we're doing quite a bit of, in terms of really trying to understand these counterfactuals and really trying to understand, as you already

(14:02):
said in the story, where it is that it went wrong. But this requires a whole analysis of identifying what these factors actually are. What are the causal factors that then lead to these downstream effects that you don't want it to have? And so I think just this research, in this area of really trying to understand what the causes are that then lead to a particular decision being made, is super important, so that we

(14:24):
don't run into this problem of just using some biases that are just in the data set.
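
A naive version of that counterfactual check can be scripted directly: hold every feature fixed, flip only the sensitive attribute, and count how often the model's decision changes. The model and columns below are synthetic stand-ins, and the check deliberately ignores the harder problem Caroline raises, namely proxies of the sensitive attribute that a full causal analysis has to trace:

```python
# Sketch of a counterfactual fairness check: flip only the sensitive
# column and measure how often the prediction changes.
import numpy as np
from sklearn.linear_model import LogisticRegression

def counterfactual_flip_rate(model, X, sensitive_col):
    X_cf = X.copy()
    X_cf[:, sensitive_col] = 1 - X_cf[:, sensitive_col]  # flip 0 <-> 1
    return float(np.mean(model.predict(X) != model.predict(X_cf)))

rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(500, 4)).astype(float)  # binary features
y = (X[:, 0] + X[:, 3] > 1).astype(int)  # outcome ignores column 1

clf = LogisticRegression().fit(X, y)

# Near zero here, since column 1 truly plays no role in the outcome;
# a large value would flag decisions that hinge on the attribute itself.
print("flip rate:", counterfactual_flip_rate(clf, X, sensitive_col=1))
```
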
So if we can solve these problems with the data sets, and it seems like we can, what are you really excited about in the next 10 years when it comes to using this kind of technology in health care? Yeah, I really think these are just super exciting times right now, where we can really make big advances with machine learning

(14:44):
in drug development, in the identification of different types of diseases, but also just in really fundamental understanding of different diseases and the biological mechanisms underlying some of them. So I think it's just a super exciting time. And I think we've only seen really the tip of the iceberg, where a lot of just the automation kinds of things

(15:05):
have entered into drug development where we're using AI. All right. So what do you think is the next big medical breakthrough? Wave your magic wand for a second. Do you think we will have solved anything huge in the next 10 years? Yeah. So I think for some diseases we've gotten already quite good at really looking at them,

(15:26):
but for others we're still far away. So something that I like to say often is that, for example, cancer has been cured many times over in mice, but we're still not very good at curing it in humans. So there is a big step there to be done that we haven't done yet. And I think the other one is neurodegenerative diseases. I think we're very, very far away from really

(15:47):
understanding them and having a cure, so that we can not just become old but actually healthily old. I think healthy aging is really one of the important steps to take. You actually want to become old and really be able to have a great life at that time.
[THEME MUSIC]


(16:07):
AI in health care is heading in a really revolutionary direction, but are we really ready for this? Well, Scott, I asked my students, who are, obviously, a totally different generation, to talk to people in their circles about how they'd feel about receiving a diagnosis from AI. So they went out, and they asked people of many different ethnicities, backgrounds, and ages,

(16:28):
first about receiving a lung cancer diagnosis, and then about receiving a sleep apnea diagnosis. And they prefaced the questions with the fact that the AI program would be statistically more reliable than the average human doctor. So what was really interesting is that people who didn't have chronic illnesses or frequent doctor visits, aside from the once-a-year checkup,

(16:51):
were more likely to say that they would trust the AI program's diagnosis. And this was especially true regarding the sleep apnea diagnosis because, I mean, I would assume it's a less severe condition. But those who consulted with doctors frequently, so people with chronic conditions, or medical issues, or whatever, were a lot less likely to side with the AI diagnosis.

(17:14):
So that's logical to me, because I would think people with severe medical issues would want a second opinion, whether it's AI or just another doctor. Well, sure. And what's super interesting is another factor people brought up: it depended on how the AI diagnosis was delivered. So one of my students asked her mother if she would trust an algorithm to deliver a diagnosis to her.

(17:36):
And her mom said that she would be skeptical of a major diagnosis unless the information was delivered to her by, like, Baymax, you know, that adorable, little, inflatable robot from Disney's film. I'm sure you've watched this many times, Scott: Big Hero 6. But my student, she was pretty surprised and asked her mom why. So her mom explained that it would

(17:57):
make the algorithm more tangible and would feel more human-adjacent. So she still wanted that human touch. And a lot of the student's peers agreed with her mom that if they were told by a computer that they have lung cancer, they wouldn't trust the AI diagnosis. But if they had human contact telling them the AI diagnosis,

(18:18):
then they'd be a lot more comfortable. So that poses the question: will we ever get to the point where we fully trust AI to diagnose us or treat us? Will it eventually take the lead? Will it replace a doctor? Well, I'm going to fob off the hard question to Niels Olson again, because I know he's spent a lot of time really considering whether people will ever come to truly trust the data.

(18:41):
Niels, I have to ask the pointed question. Do you think AI will get to the point where it replaces doctors, to the degree that we'll actually need to fully trust AI and not have a person? No, these are nontrivial decisions. I'm actually a military officer, and I was a Surface Warfare Officer before going into medicine.

(19:03):
And we had autonomous weapons systems in the '80s. The Aegis cruiser system could shoot a missile without anybody ever making a decision. You just set it on auto, and it will start shooting things if they need to be shot. No one ever trusted it, because starting wars is a big deal. That's the understatement of the conversation.

(19:24):
[LAUGHTER]
I don't think anyone is ready to let these things make decisions like chemotherapy. If I say somebody has got cancer, then that gives an oncologist not only permission but an obligation to inject them with chemicals that could kill them. Chemotherapy is basically just enough poison

(19:45):
to not quite kill you. And that is a big deal. It gives a surgeon not just permission but an obligation to cut you open. That's a big deal. So I don't think anybody is ready to do that without human intervention. It would be nice if we could make the workload easier, but we're going to want somebody to sign for that decision.

(20:08):
You want accountability. Right. It comes down to accountability. If my Tesla-- I drive a Tesla. I've tried putting it on its full auto mode. My regular car, I might drive with one hand sometimes. But the Tesla, I have two hands on the wheel all the time. Oh, that's interesting. I feel like the point of my Tesla is that I don't have to have my two hands on the wheel all the time.

(20:29):
I was going to say, it's interesting you bring up the full autonomy thing. With the Tesla, the argument is, look, it's not that the Tesla, these automated systems, don't make mistakes. It's just that, statistically speaking, they make fewer mistakes than humans. But statistically speaking, autonomous cars probably make fewer mistakes than human drivers.

(20:49):
They're not checking their phones and things like that. Is that kind of logic OK to apply to medicine? To your point, we probably don't want them diagnosing cancer all the time. But if, statistically speaking, they're better than a doctor who hasn't slept in three days because they're on call, I mean, it's not that it's not going to make mistakes; it'll make fewer mistakes. So computers aren't necessarily perfect.

(21:10):
They can still make mistakes, but, statistically speaking, they can be better at certain jobs than humans are, as long as they're programmed right. So when applied to medicine, do you think we'll ever be OK with the trade-off and let AI take on more responsibility? I don't see it happening. So we have one project where it's de-identification. We want to share free-text medical records with researchers.

(21:31):
And it would be nice if we could share it and be confident that it's de-identified. OK, we've actually evaluated multiple de-identification systems. The probability that they miss a name is very low, but it's not zero. And so I'm still going to have to pay somebody, preferably two or three people, to actually look at it before we let it

(21:54):
out the door. I'm going to have names that I can identify and sign: this person looked at it and said it's completely clear. And there are consequences if they get it wrong. And that's just de-identification for medical research. There's not really patient harm. It's low stakes is what you're saying. [LAUGHS] Relatively low stakes compared to making new chemotherapy

(22:17):
drugs.
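
The workflow Niels describes might look like the sketch below: an automated scrub pass, then a release gate that requires at least two named human sign-offs, because the automated miss rate is low but not zero. The regex patterns are toy stand-ins; real de-identification systems use trained named-entity models and much larger rule sets:

```python
# Sketch of de-identification with an accountable human backstop.
import re

PATTERNS = [
    re.compile(r"\b(?:Mr|Mrs|Ms|Dr)\.?\s+[A-Z][a-z]+\b"),  # titled names
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                  # SSN-like IDs
]

def scrub(text):
    """Automated pass: redact everything the patterns catch."""
    for pattern in PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def release(record, signoffs):
    """Release only after the scrub AND at least two named reviewers
    sign, so specific people are accountable if a name slips through."""
    cleaned = scrub(record)
    if len(signoffs) < 2:
        raise RuntimeError("need at least two human reviewers to sign")
    return cleaned, list(signoffs)

note = "Dr. Smith saw the patient; SSN 123-45-6789 on file."
cleaned, who = release(note, signoffs=["reviewer_a", "reviewer_b"])
print(cleaned)  # [REDACTED] saw the patient; SSN [REDACTED] on file.
```
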
I like this concept, because I don't care how-- I mean, I'm a data person. And you could tell me that the computer is 99% accurate and the doctor's 90% accurate; I still want the doctor to sign off on it in the end. I get that. I mean, the idea is, in 50 years, are we going to feel differently or not? But I get that concept. It's interesting you say that. I'm OK with that. And maybe it's that I trust the data and the computer so much,

(22:38):
but I'm not necessarily sure I'm going to be the prevailing opinion in our lifetime. I think your opinion is going to be the prevailing opinion. People are just-- the autonomous system you worked on in the '80s worked, but no one wanted to risk starting a war because of it. So do you think in the future AI is going to be a net positive for health care, or do you think it's going to be a net negative because

(22:59):
of the bias that comes into it and the lack of trust? I think it's going to be a net benefit. And I'd say, any time you have more evidence-- I don't know of any judges or attorneys who are sad to get more evidence. So as someone making diagnostic decisions on a regular basis, that ability to have more evidence-- and you can think of it in this setting of,

(23:22):
let's say, I have an augmented reality microscope, and it has a prostate cancer algorithm trained by 29 pathologists. I can say this thing made this inference, which is a digital file, and that is the aggregate opinion of 29 other pathologists. I disagree with it because of the following,

(23:46):
but at least I can compose that sentence, where it starts with "I disagree" or "I agree."
[THEME MUSIC]

Well, I think this has certainly been one of our most confusing episodes, but I do think that I'm wrapping my mind around the subject a lot more. What about you, Scott? Yeah, I think I am too. It's interesting how AI is going to be

(24:09):
the next great medical tool, assisting diagnosis and things like that. And we've had some astounding medical victories recently with the COVID vaccine. I think AI has really got some promise for the future. Yeah, but to have the chief medical officer of the US Defense Innovation Unit tell us that people won't trust the data either way is pretty incredible.

(24:30):
Yeah, not trusting the data seems to be a pretty common theme as of late, but I don't know that it's really something we've got to worry about: a robot making our health care decisions in the near future. We're not going to take the person out of the loop. AI is going to be here in many ways, assisting in diagnoses and potentially surgeries and things like that. And as always, you've just got to be the best advocate

(24:51):
for your own health care. Yeah, I think it's pretty clear that even with AI playing a role, the backbone of our health care system will stay, as it's always been, people helping people.
[THEME MUSIC]
Thanks so much for listening to this episode of Data Nation. This podcast is brought to you by the MIT Institute

(25:12):
for Data, Systems, and Society. And if you want to learn more about what IDSS does, please follow us at MIT IDSS on Twitter, or visit our website at idss.mit.edu.
[THEME MUSIC]