May 5, 2023 · 55 mins

A new wave of generative artificial intelligence (AI) is proving to be a disruptor across industries. How can artificial intelligence and tools like ChatGPT improve health care? What are the pitfalls? The University of Maryland Medical Intelligent Imaging (UM2ii) Center is an academic research hub for revolutionizing medical imaging AI, housed in the University of Maryland School of Medicine. Paul Yi, MD, assistant professor of diagnostic radiology and nuclear medicine and director of UM2ii, gives his insight on AI in health care and how UM2ii fits into that landscape.

Paul is also the co-host of the Radiology: Artificial Intelligence podcast.

Listen to The UMB Pulse on Apple, Spotify, Amazon Music, and wherever you like to listen. The UMB Pulse is also now on YouTube.

Visit our website at umaryland.edu/pulse or email us at umbpulse@umaryland.edu.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Dana Rampolla (00:00):
I saw an interesting thing on Instagram

(00:02):
where it showed some examples of people who were feeding questions to ChatGPT, and they would say, "Men are awesome." And ChatGPT responds, "I'm sorry, but I cannot agree with that statement. Everyone is unique and has their own strengths and weaknesses regardless of their gender. It's important to value and respect everyone equally."

(00:22):
And then the next question was, "Women are awesome," and it says, "I agree," exclamation mark. "Women are incredible and have made significant contributions to society," and it goes on and on in support of women. And Charles, it makes me really nervous, because I worry about all of this. What is feeding that chatbot, and what are they spitting back out?

(00:43):
Am I just afraid because I'm a woman with a little bit of age, or am I nervous because I've seen too many movies about futuristic societies?

Charles Schelle (00:55):
No, there's definitely some validity to that concern. Look, I'm very much of the ilk of, I'm excited for new technology. I like advancements and things, but no matter what the technology platform, it seems like everything's always released a little bit too early for what people want to use it

(01:16):
for, or the expectations aren't quite there, right? And so it's like everything's in beta mode, and then it gets released. And it feels like a lot of these, ChatGPT, Bard, these things are in a public beta mode, and it's just on steroids, right? To a degree, because nobody knows how to use this, but everyone is

(01:38):
expecting perfect results and perfect answers, and it's crazy. In our field of work, we're communicators, so we're dealing with words. My first experience with ChatGPT, just playing with it, was just being cheeky after seeing an Instagram post, like you did, where someone took the Hulu TV show Letterkenny, which is from

(01:59):
Canada, and did a version for Australia, right? And so I was like, okay, well, let me do Schitt's Creek visits Baltimore. And it was okay, but it kept spitting out the same type of stuff. Like, they were really obsessed with the Baltimore Bazaar flea market. Really? Yes. That was the entire episode, hinging around that.

Dana Rampolla (02:21):
I'm not even familiar with that. I thought you were gonna say the Baltimore dialect, or Baltimore talking about crabs and the O's or something.

Charles Schelle (02:30):
Yeah, there's a little bit of that in what they had in the script that I asked it to create. But I have to say, I'm sorry for anyone who's visited the Baltimore Bazaar, or if the owner's listening. I wasn't sure if it was a real place or not, but apparently it's a very well-known flea market, and so they educated me. But that's just creative content.

(02:52):
If you want to call it creative, that's up to you. But what happens when AI is trying to make medical decisions? You can use it for all sorts of cool things with data, potentially, but where does the line get drawn? Where does the human interaction with the technology come into play?

(03:12):
How do we make sure it's telling us the right things? There's potential for great things, right? But you still need to check it, right?

Dana Rampolla (03:20):
Right, right. And who's managing that from the top level? Who's making sure that there's not a bias in that data, that the data is actually clean, good data? And really, what's it being used for? So we have a guest today, Paul Yi. He is the director of the University of Maryland Medical Intelligent Imaging Center, better known as UM2ii.

(03:40):
And Paul is an assistant professor of diagnostic radiology and nuclear medicine at the University of Maryland School of Medicine. Charles, is he an official fellow? Is that...

Charles Schelle (03:51):
An official fellow? No, no, no. Not an official fellow. He's a Fischel Fellow at the University of Maryland Robert E. Fischel Institute for Biomedical Devices.

Dana Rampolla (04:02):
Very cool. Paul is also an adjunct at the University of Maryland, College Park, A. James Clark School of Engineering, as well as an adjunct at Johns Hopkins University, where he actually hails from for some of his studies. So hopefully he can shed a little bit of light on this. I know he can talk all things AI and medicine, because he also is a co-host of the Radiology: Artificial Intelligence podcast.

(04:25):
So he's bringing all kinds of knowledge and expertise, and hopefully he'll be able to set me straight a little bit, so I'm not worried about robots taking over the world.

Charles Schelle (04:33):
Well, well, let's find out. Enjoy our conversation with Paul Yi.

Jena Frick (04:41):
You are listening to the heartbeat of the University of Maryland, Baltimore: The UMB Pulse.

Dana Rampolla (04:55):
Paul, welcome to the Pulse. We're so excited to have you here to talk about such a hot topic. And you are a man of many talents, between radiology, artificial intelligence, being an adjunct professor with engineering faculty, and you're also a podcast host. So tell us, let's just circle back to the beginning: how is it that you're even here at the University of Maryland, Baltimore?

Paul Yi (05:16):
Yeah. Well, first, Dana and Charles, thanks so much for inviting me to be on the UMB Pulse podcast. It's a pleasure to be here, particularly as a podcast host myself. I know all the work that goes on behind the scenes. As for how I got to the University of Maryland, Baltimore: well, I was recruited here to direct an artificial intelligence center in the Department of Radiology. It's called UM2ii, which is short for University of Maryland

(05:38):
Medical Intelligent Imaging. And the idea was that our chair had a vision for the potential for AI to really transform how we do medical imaging, whether that's in how we acquire images, to the diagnosis, to communicating the results. And he really wanted to find somebody who could lead out the different parts of it.

(05:58):
I think that when we think about AI in medical imaging, there's two parts there, right? You certainly have the technical components. With artificial intelligence, we think of things like engineering, Google, the engineering side. We also have the radiology, medical imaging, which is really the medical side. And so I kind of fit that bill, I think, because I had experience,

(06:19):
uh, formally trained as a medical doctor. I'm a radiologist. My specialty is actually musculoskeletal imaging, so bone and joint imaging. I had additional clinical training in orthopedic surgery for a couple years, but I also had a lot of experience with the machine learning side. During my residency at Johns Hopkins, I co-founded a research group working with engineers, and I would say that's

(06:40):
really been my calling card: figuring out how do we make collaboration happen between people with different expertises, whether you're someone who's an engineer or someone who's a physician. And that's really been something where I've seen the potential for synergy. And it's something that I definitely saw during my time at Hopkins. And so the other piece of the story was that I did an imaging informatics fellowship at the University of Maryland, because,

(07:02):
as a result of my AI research, I realized that for this technology to really make an impact, it needed to exist in a larger ecosystem. And namely the imaging informatics, which is basically, how do we make information systems work in a medical system? How do we actually get it to play nice with the EMR, with the PACS? And moreover, how do we actually make it helpful for the physicians

(07:24):
using it? And so I actually came over to the University of Maryland because, you may not know, this institution has, to my knowledge, the first and certainly one of the most storied imaging informatics fellowships in the country. It was founded by a guy named Dr. Eliot Siegel. He's the chief of radiology at the Baltimore VA, actually now for all of Maryland and West Virginia.

(07:45):
And he built the first filmless radiology department back in the 1990s. And since then, he's trained a lot of the leaders in imaging informatics, whether it's in academics, in industry, et cetera. And so that's how I got acquainted with this institution. And I think one thing led to another. Just with my time here, I really enjoyed my fellowship. I had a lot of really great mentors, a lot of great experiences, and I was really fortunate to be recruited

(08:08):
for this position to direct the UM2ii Center here. And so I guess it's a long-winded way of saying that I did my training across town clinically, I did my informatics fellowship here, and I found a really unique opportunity that fit my passions and my unique background. And I was really lucky to be here.

Dana Rampolla (08:26):
So would you say that you're bridging that gap, then, between the doctors and the PhDs in this think tank environment of AI?

Paul Yi (08:34):
Absolutely. I think that one of the things I often say is, collaboration is key. The other thing I say is, one plus one equals three. And what I mean by that is, I've always realized we all have 24 hours in a day, whether it's me as a physician, whether it's someone else as an engineer, or another person maybe doing any other job. But I think that when you bring two people together who really

(08:57):
see eye to eye, who each bring something unique to the table, things that can contribute synergistically, you can have the total greater than the sum of its parts. So one thing I realized early on was, AI is really cool. I was super excited about it, but I realized that I myself was never gonna be as good of a coder as an engineer, as someone

(09:18):
who does this full-time, as someone who went through the same amount of schooling to get their doctorate in computer science. On that same token, a computer scientist working at Google or Microsoft would never have the same amount of clinical knowledge or clinical expertise. And so it's kind of like a conundrum: well, how do we actually make this work for AI and medical imaging? Well, if we bring the two together, I think that that's

(09:39):
when we can really have fireworks, where you really have the best of both worlds. Now, what I'll say about that is, it's easier said than done. I think that it requires people to really be on the same wavelength, just in terms of the partnership, to make sure that the goals are the same, make sure that the language that people use can really translate,

(10:00):
which is, I think, one of the key things that I'm tasked with: translating different concepts, whether it's a medical concept to the engineers or, conversely, a technical concept to the clinical folks. And so, well, you summed it up nicely. I think it's been a challenge, but it's been a fun one. And yeah, I think we've been doing pretty well so far.

Charles Schelle (10:20):
It's a new era right now on the consumer side of it, too, because I'm sure a lot of this with AI and medicine has obviously been on the back end, the professional side, right? And so give people an idea of the potential good that AI plus medicine can give people, and how it might be used when

(10:43):
they go to their doctor's office.

Paul Yi (10:46):
Absolutely. So I'll think of it broadly as the here and now, and the future. I think in the here and now, AI has a tremendous potential to really reduce a lot of drudgery, the things that really make medicine not as fun or as effective as it could be. These are things that are very machine-like or things that

(11:08):
seem kind of monotonous. So I'm talking about things like doing paperwork, maybe just transcribing speech. So that's one example. If you go to the doctor's office right now, what happens a lot of times is we have our doctor talking to us, but half their attention is on their computer screen, right? They're typing away. Well, there's a company called Nuance that's owned by Microsoft. It's the predominant player for voice transcription.

(11:32):
They have this new technology that's called, I believe, Nuance Ambient. And what it does is, it's a speaker, kind of like Alexa, that sits in a doctor's office. And as the doctor talks to you, it's basically figuring out who's the doctor, who's the patient, what did the doctor ask, what did the patient say? And so normally, we do that as the physicians. We listen and we hear symptoms.

(11:52):
We put that into something called the history, and we write down: patient reports this symptom, for this long, started at this time. And then we do things like physical examinations, where we say, heart rate is this, the lung sounds are like this. Well, what this ambient kind of intelligence can do is listen in and transcribe that, which gets rid of a lot of that

(12:13):
monotonous kind of task, which is very clerical. It's very important, but it is something that I think takes away from the skill set that physicians have, which is really doing all of that stuff, right? Asking the history, doing the physical exam. Typing is important, but it's something that maybe could be automated. And if you imagine that, it has benefits for the physician: it

(12:33):
reduces the cognitive burden, reduces the amount of stuff they have to think about in their brain. It also makes for a better patient-physician experience. More of us than we care to admit have seen a doctor who's kind of glued to the computer screen rather than talking to us. Well, imagine a future where the doctor can have all eyes on you, and it's just about you, the patient, and the computer does

(12:55):
all of the typing, all of the note-taking. When I think about other things, like diagnosis: one thing in radiology is that imaging volumes continue to increase. It's a pretty tough situation, because medical images have really transformed how we do diagnosis. You go to the emergency room, chances are you're gonna get an X-ray, you're gonna get a CT scan.

(13:15):
But the problem is that humans, we do have a limit in terms of how much input we can take in at a given time, right? Like, you can tell me, Paul, you have to read 200 CT scans in an hour, but I'll look at you and say, I am human. I'm only human, right? Well, what the AI can do, potentially, is things like triaging scans.

(13:36):
So maybe we can say, I can't get to every CT scan in the next five minutes, but I can get to the ones that are the most urgent, the ones that might have an emergency on them. And so right now we have real-life algorithms here at the University of Maryland from a company called Aidoc that basically look at CT scans, look for things like bleeds in the brain, look for blood clots in the lungs, and

(13:58):
automatically triage and prioritize those scans and tell the doctor, hey, look at this scan first, because the other ones don't look like they have something really bad, but this one does. And so I think that's in the here and now.
Then, going into the future, I'm envisioning a whole new paradigm of medicine. We hear people talk about things like precision medicine, which

(14:18):
is saying, rather than doing one-size-fits-all, maybe one day we can take all of the data that we have, whether it's your medical imaging, your genomics data, your medical history, and give personalized recommendations. From a medical imaging standpoint, which is really my wheelhouse and close to my heart: we think about images being very qualitative, meaning we just

(14:38):
describe what we see. We say, hey, there is a tumor. The tumor is in this location. But I believe there's a future where we can take the images and, rather than it just being a description, we can actually extract numbers from them and have things like, this is the volume of the tumor, this is the signal intensity of the tumor that correlates with

(15:00):
this biological process. And so I think it's both the here and now, for the tasks that we currently do, as well as the future, for some of these new paradigms.

Charles Schelle (15:08):
What stood out to me was your first example, as someone who works in the communications field, because I remember seeing a lot of freelance opportunities for medical transcription. The doctors would send in, back then, their little mini tapes, and they needed people to transcribe. And I would look at these jobs thinking, I would have no clue about some of these terms, the medical terms they would be using, in

(15:30):
order to transcribe it correctly. So we use transcription software here for our job, but it still takes refinement to make sure it's saying the right thing. So I'm sure that goes into an example of where AI is right now, regardless, which is that you have the tech that can speed up that busy work, but you still need a human behind it to double-

(15:55):
check it, refine it, and make sure it's telling you correct information.

Paul Yi (16:01):
So, I think your point's really well taken. I think that AI is incredibly promising, right? We've seen how it can do things like self-driving cars with Tesla. It can tag our photos on Facebook and say, hey, your friend tagged you, or maybe this photo has your face in it. But we've also seen these things aren't perfect, right? I mean, one of the early, really devastating things about Tesla

(16:22):
was, we had people getting killed by these cars, and I think we've seen similar things where AI is not perfect in medicine. One of the things that's really exciting is, when it's right, it can be really right. It can find things that radiologists might have missed, let's say, on a CT scan of the chest. But on the other hand, sometimes we look under the hood and we say, well, when did it get things wrong?

(16:43):
And it can be wrong on really obvious things that a radiologist would never miss. And that's one of the challenges of this new kind of AI technology that's called deep learning. That's really the predominant set of techniques that's transformed how we do things. And the really high-level look at it is, deep learning algorithms can teach themselves how to do a number of tasks, whether it's how to play chess or how to look for a brain

(17:06):
tumor. But the problem is that they're black boxes, because we can't necessarily know what's underlying the decisions. And so, because of that, the predictions can be unpredictable. Like I said, sometimes it can be really, really right, but it can be really, really wrong. So I think that's why, at least currently, we really need a human in the loop, so to speak,

(17:27):
where we have a lot of the tasks being automated, but we have to have checks and balances. Because it's one thing if my face gets tagged incorrectly on Facebook. No biggie. I'll live. But if we're talking about something like a lung cancer diagnosis or a bleed in the brain, that's literally life and death, right?

(17:48):
So I think that, especially for something like medicine, we really need those checks and balances. And thankfully, that is something that's top of mind for people like us in research groups like the UM2ii Center. It's top of mind for the radiology societies, like the RSNA and the ACR, as well as the Food and Drug Administration, the FDA, who's charged with actually regulating the safety and approval of these.

(18:11):
And incidentally, these are groups that we're working with, with the UM2ii Center, on these tasks for things like trustworthiness of AI.

Dana Rampolla (18:18):
Yeah, that's the scary part to me. So tell us a little bit about... you recently were part of a study, the ChatGPT study. I think pretty much everybody's now heard about ChatGPT. Talk a little bit about the accuracy of what you learned from that.

Paul Yi (18:34):
Yeah, for sure. So ChatGPT, as most people know, has really taken the world by storm. It's this so-called large language model, and it's basically a chatbot where you can communicate with it, text it, and get answers to everything from, hey, make me an itinerary for my trip to Paris, to, hey, I've got this computer code, there's a bug, can you fix it? And so one of the things that we had realized, even before ChatGPT,

(18:56):
is patients are going to the internet. They're looking up things about their health conditions. Oftentimes they come to the doctor, and they might have something they learned that sometimes isn't accurate. And that's a little bit of an issue, because it can cause patient anxiety, it can cause misinformation. And with ChatGPT, we saw the potential, again, for incredible promise, because of this technology just being so

(19:18):
scalable, so highly accurate in some cases, but also the potential to really cause misinformation if it wasn't performing well. So we wanted to know: how does ChatGPT do at giving recommendations about breast cancer screening? This is something that half the population will have questions about, that they'll experience.

(19:39):
And so we asked the 25 common questions that we as radiologists and other physicians get asked about breast cancer screening. We graded the recommendations using a panel of expert breast radiologists, and we asked, number one, were the responses consistent, meaning ChatGPT gave the same answer every time? And number two, were these answers appropriate and

(20:00):
accurate? And what we found, on the plus side, was that about 90% of these questions were answered accurately and appropriately, but that 10% of the time, it was either inaccurate or it was inconsistent, meaning depending on the time of day or the day of the week you ask it, it might give you a different response. And so really, the take-home is kind of

(20:22):
echoing everything we've talked about, which is tremendous potential. Ninety percent accuracy, that's kind of crazy. But 10% of the time, you really need a human overseeing it. And so I think the final capstone on that was: this is tremendously promising, but it's not ready for prime time, because we don't know when it's inaccurate, and that's a problem. But it's 2023. I think the next few years are

(20:44):
gonna bring a lot of really interesting developments, a lot of good research that will hopefully allow this to translate into real life.
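For a sense of the mechanics, here is a minimal sketch of that kind of repeated-query consistency check, assuming the pre-1.0 OpenAI Python client that was current in 2023; the question, the repeat count, and the printout are illustrative, not the study's actual protocol. In the study itself, a panel of expert breast radiologists did the grading.

```python
# Sketch: probe an LLM for answer consistency by asking the same question
# several times. Assumes the pre-1.0 OpenAI Python client
# (pip install "openai<1.0") and an OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

QUESTION = "At what age should I start getting screening mammograms?"

def ask(question: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    return resp["choices"][0]["message"]["content"].strip()

# Repeat the question; expert reviewers would then grade each answer for
# appropriateness and flag any runs that disagree with one another.
answers = [ask(QUESTION) for _ in range(3)]
for i, answer in enumerate(answers, 1):
    print(f"--- run {i} ---\n{answer}\n")
```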

Charles Schelle (20:53):
With ChatGPT, there's a bunch of different ways you can use it, right? And so it sounds like what you were doing was the basic retrieval of answers, and seeing where they were being cited, whether it could provide you where it was getting the information. Because ChatGPT... the ChatGPT 3, I'm guessing, is what you

(21:17):
used, right?

Paul Yi (21:19):
Yeah, yeah. 3.5, actually, yeah.

Charles Schelle (21:21):
Yeah. Yeah. 3.5. So that one wasn't connected to the internet, right? So that one only had information up to a certain point.

Paul Yi (21:27):
Correct.

Charles Schelle (21:28):
So now there's the new model, and then there's the professional model as well, where you can refine things. So it'll be interesting to see how this evolves. But nonetheless, it's still in people's hands, and they can use it. They may not know how it works, and they're expecting to get the right answers. And one of the things about ChatGPT as well is that you can

(21:49):
create frameworks and prompts and teach it. And I was wondering if that's something you're considering looking into, creating frameworks to teach it information so it spits back something a little bit more accurate.

Paul Yi (22:04):
Yeah, totally. I think that you're spot on. I think that this first paper was kind of the most basic evaluation, because the reality is, we're all learning this together, right? This is a new technology. For us, it started off with one question: just how accurate is this? But then the next one, obviously, is: how do we make this better? And I think what you're talking about is prompt engineering, how do we make our questions better?

(22:25):
Maybe if I'm asking someone to do work on my house, the more specific I am, the better I'll probably like my results. So I think it's similar with ChatGPT and other large language models. With regards to some of the things we're thinking about: we've been playing around with this thing called LlamaIndex, llama as in L-L-A-M-A, kind of like the large language model play on words.

(22:45):
And it's pretty cool, because you can essentially take a database, let's say it's a bunch of restaurants in Baltimore, and maybe there's the address, maybe there's the type of food, maybe there's menus, and you can actually ingest that, so to speak, into one of these large language models. And that's a way that you can give it updated information and

(23:07):
allow ChatGPT to actually give you answers about that data that might not have been there in the original training data, the original knowledge that it learned. And so we've been looking at that for things like, how do we manage medical imaging databases? That's one thing. If I'm putting my informatics hat on: medical imaging data is a tremendous treasure trove, but it's very, very

(23:30):
disorganized. Things are cataloged really heterogeneously. It's really variable how people code things. Well, ChatGPT is turning out to be pretty good for that. And so even for things that seem a little bit off the beaten path, we're finding that it has a lot of use.
I think from a patient education standpoint, we're looking at things like, how do we rewrite patient education material to be

(23:53):
more understandable? I think it's been pretty clearly shown that a lot of what we give to patients to tell them about healthcare information is not the most easily readable. The same thing was the case for these responses that ChatGPT gave. But if we ask it, hey, can you rewrite this at a fifth-grade reading level? It sounds kind of silly, but you read it and you're like, oh my

(24:14):
gosh, this is so clear. A fifth grader could understand this. And it's accurate. So I think there's a lot of really cool applications, whether on the technical side, where we use it to debug code, for when the computer code we're writing doesn't work, or in patient educational materials. So I think the possibilities are endless, and it's super exciting.
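The readability idea is easy to try: one prompt plus an automated grade-level check. A small sketch using the hypothetical ask() helper from the earlier snippet and the textstat readability library; the sample sentences are invented.

```python
# Sketch: rewrite patient-facing text at a fifth-grade reading level and
# verify the result with a readability score (pip install textstat).
import textstat

original = (
    "Screening mammography utilizes low-dose radiographic imaging "
    "to detect subclinical malignancies of the breast."
)

prompt = f"Rewrite this at a fifth-grade reading level:\n\n{original}"
# rewritten = ask(prompt)  # reuse the hypothetical LLM helper sketched earlier
rewritten = ("A mammogram is a picture of the breast. It uses a small "
             "amount of X-rays to help find cancer early.")

# Flesch-Kincaid grade estimates the U.S. school grade needed to read the text.
print("original grade level: ", textstat.flesch_kincaid_grade(original))
print("rewritten grade level:", textstat.flesch_kincaid_grade(rewritten))
```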

Charles Schelle (24:34):
And are you looking to also inspect Google's AI, Bard? Or, because ChatGPT really has the marketplace right now, and there's probably so much you can do, will you just focus on evaluating the first chatbot?

Paul Yi (24:52):
Yeah, we've actually started doing that, 'cause we were curious: how does Bard do versus ChatGPT? Who knows what other models are gonna be released? I can't say I'm ready to discuss those results yet, but that'll be coming out soon. But absolutely, I think that's one of the things to remember: ChatGPT is just one type of large language model. There are new ones coming out, and even after ChatGPT came out, GPT-

(25:15):
4, which is, I guess, its younger brother, just came out. Who knows what we'll have in a year? We'll probably have GPT-5, and then Bard 2, and so on and so forth. So I think it's only gonna get better.

Dana Rampolla (25:29):
Well, when you were talking about the breast cancer study that you did, that 10% of, let's call it, error: how do you envision combating that, or growing the technology so that that doesn't happen? I mean, I think you were quoted as saying there were a couple of wrong answers given. There was actually one place where the AI created some sort

(25:52):
of consortium or information on its own. So that, to me, as a user, seems really scary.

Paul Yi (26:00):
Yeah, I think it's tough. To what you were saying about creating consortiums, specifically: we kind of explored, can we have some justification for the responses? Meaning, hey, ChatGPT, tell me what the answer is, and give me a source that you're citing. Kind of like how a reporter will say, hey, this is my source, this is where I found this.

(26:20):
And what we found was, it sounds very convincing. All the grammar is good. The material sounds pretty reasonable. But when we look at the sources, it's like, oh, this is a joint statement from all of these different medical societies with all of these acronyms. But when we actually looked up some of them, we found they didn't exist, and there didn't exist an actual statement from these consortia.

(26:41):
But the problem is that, to you or me, we look at it and we say, this sounds very legitimate. I mean, why wouldn't we trust this? This is from the American College of Radiology and the American Medical Association, et cetera. And so I think that's one of the challenges. As for how to combat this, or how to mitigate this, I think the key is really gonna be taking these systems, which have been designed for general use,

(27:04):
'cause keep in mind, ChatGPT is not designed for medicine, it's just kind of trained on data from all over the internet, and doing something called fine-tuning, which is, we take something very powerful, very promising, like ChatGPT, and then we tweak it and customize it specifically for medical information. For instance, we can maybe take a vetted source of data or

(27:26):
information about healthcare, and then we can say, all right, ChatGPT, start from your starting place, but we're gonna tweak you only on this set of data. And so I think that, like most things in life, the more specialized we get, the better results we'll probably end up getting. So I think that's gonna be key. Now, that sounds pretty easy, right? But I think it's gonna be a lot harder than we think.

(27:48):
But I think that's gonna be key, in addition to having evaluations like the one that my group did, which is making sure, hey, we know what the technology claims, we know what it's supposed to do, but how does it do in real life? Because that's gonna be important, just to make sure the theory matches the reality.
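To make "tweak you only on this set of data" concrete, here is a minimal fine-tuning sketch using the open-source Hugging Face libraries and a small open model; ChatGPT itself is not tunable this way, and vetted_qa.txt is a hypothetical file of expert-approved question-and-answer text, one example per line.

```python
# Sketch: specialize a general-purpose language model on a vetted medical
# corpus. Uses Hugging Face transformers and datasets (real libraries);
# the model choice and the vetted_qa.txt file are illustrative only.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "distilgpt2"  # small open model, stand-in for something bigger
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 family has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# vetted_qa.txt: hypothetical expert-approved Q&A text, one example per line.
dataset = load_dataset("text", data_files={"train": "vetted_qa.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="medical-lm", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # nudges the general model toward the vetted medical text
```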

Dana Rampolla (28:05):
and ultimately, who is the gatekeeper?
I mean, we're using all of thisfor good, but who, who's the
gatekeeper?
Who, who is, there's gotta besomebody at, at the top who's
overseeing that, thatinformation, that data.
And who's going to determine, ifthere's a bias in it or if
there's just something that iscompletely inaccurate.

Paul Yi (28:27):
Yeah, I think there's levels to it. I think the simplest answer some people might say is the FDA, at least in the U.S. You can't use a medical device, and software is now being defined as a medical device, unless the FDA clears it. There are certain exceptions, and we won't go into too much of the regulatory red tape here. But the reality is, the FDA is learning this too.

(28:48):
So we have some collaborations starting with the FDA, because they realize AI software is a medical device. This is a new paradigm. Even things like evaluating fairness and bias of algorithms, it's such a new thing. And so I think that it's gonna, at one level, be these government agencies that really set the regulatory policies, the legal

(29:11):
precedent. But I think that it's gonna be in collaboration with research groups who really understand the technical pieces as well as the medical pieces, along with medical societies like the American College of Radiology, the American Medical Association, because these are groups that really set the processes for things like medical reimbursement. And if you think about the phrase, money makes the world go

(29:33):
round, well, at least for healthcare, a lot of what drives innovation, a lot of what drives adoption of technologies, is tied to reimbursement. We need to keep the lights on, so I don't think it's all a bad thing. So I guess my point is, I think collaboration, again, is gonna be key. You'll probably hear me say this a few more times. But I think the gatekeeper, on one level, from a regulatory standpoint, is

(29:54):
probably the FDA, at least in the U.S. From a knowledge standpoint, though, which informs the regulation, I think it's gonna be the researchers shedding light on these problems, figuring out what are the best ways to measure these things, developing the tools to actually communicate them. And then I think it's gonna be interdisciplinary societies, like the AMA, like the ACR, that are gonna go and try to

(30:16):
operationalize these policies. And so I think it's really gonna be a multi-pronged effort.

Charles Schelle (30:22):
A lot of times, innovation is ahead of regulation, right? You don't know what to regulate until it's created, sometimes, and it's just exponentially that much faster now with the technology that you have here. So let's zoom back a little bit. You were kind of teasing a bit about big tech and what their abilities are,

(30:43):
but then what they don't know, because they're not doctors. And it's kind of interesting how we're not here already, given the history of, like, IBM. So walk us through a little bit of what's happening at that big tech level and what they're trying to do with the help of medical experts.

Paul Yi (31:02):
Yeah, totally. So I think big tech has been interested in healthcare for a while. If we go back 10 years or so, to about 2012, IBM Watson had come fresh off of its Jeopardy championship, beating Ken Jennings. And they started setting their eyes on cancer diagnosis. And in fact, I mentioned Dr. Eliot Siegel. He was their primary medical advisor back in the day.

(31:25):
You can actually look on YouTube for Eliot Siegel and IBM Watson. And they had this grand vision: the same technology that can learn all of this human knowledge and win Jeopardy, about everything from art to current events to sports, can potentially be used to automate diagnosis of things like cancer. And so they worked with groups like MD Anderson Cancer Center

(31:46):
in Houston, Texas, to try to do this, to try to use all of this data. But we fast-forward 10 years, and that Watson project kind of fizzled out. It didn't really reach the potential that they had promised. And I think one of the lessons that was learned was that medical data is really hard to work with. I think one part, like you said, is wrapping your head around it, getting the subject matter experts who can really tell you,

(32:09):
this is how we should use the data, these are maybe the outcomes of interest we're really interested in. But one thing I also learned is that medical data is dirty, in the sense that it's not cataloged the same way at every hospital. It's often got errors in things even as basic as, hey, what body part did we image?

(32:30):
What is the actual diagnosis that this patient has? And so I think that that was the early phase of big tech working with medicine. If we fast-forward to now, in the last five to seven years, some of the new players on the block, Google, Microsoft, Amazon, I think they've wised up, where they actually have hired physicians, either as consultants, working

(32:53):
as collaborators, or actually having physicians on staff. I have a number of friends who are full-time clinical scientists at places like Google, at Amazon. And it's pretty cool, because what's been the gold standard for us in academia has been publishing in peer-reviewed journals. Well, if you look at Google Research, they've published at the highest tier. They've published in the Journal of the

(33:14):
American Medical Association, or JAMA. They've published in Nature Medicine, in these super high-impact journals. And so I think that's one approach, where they have their in-house team: they're working with different hospitals, but keeping their primary research team. But on the other hand, there's a lot of collaboration ongoing. We've been working with a team at Amazon to build out some of

(33:36):
the infrastructure that we think is gonna help solve some of these woes about the data being inconsistently cataloged, not being optimally set up for big-data kinds of analysis. And so I think it's exciting. Whether or not there's a right way to do it... I don't think there's a right or wrong way, but there are different approaches, and I think

(33:57):
it's gonna be a good future as we collaborate more between academia and industry and really leverage the strengths of both sides.

Dana Rampolla (34:05):
When we're talking about this data, there's clearly pitfalls associated with it, probably gender, or skin color. If you're talking medicine, just think about looking at the skin of one person with light skin versus dark skin, how that data's accumulated and then spit back out, in a sense. So what are your thoughts on that, Paul?

Paul Yi (34:26):
So, algorithmic fairness, or making sure that these algorithms treat, or have the same accuracy for, all people groups, is the primary area of interest for my research group. I think it's no secret that in our country, as well as others, there's a lot of disparity in healthcare, whether it's access to healthcare, where we know that racial minorities and ethnic minorities have less

(34:49):
access to healthcare, or differences based on socioeconomic status and income level. And the problem is that the data can reflect those biases. One case in point is in the dermatology literature: if we look at photographs in textbooks, there are more examples of different skin conditions in lighter skin tones than in darker

(35:10):
skin tones. So if you can imagine that a dataset has mostly lighter skin tones, and maybe those have better or more examples of a type of skin lesion, an AI trained to identify skin cancer might do better on those lighter skin tones than on the darker skin tones. And so that's to say that algorithms find patterns in the

(35:30):
data to make a diagnosis or to reach a goal, and they try to find the quickest way there. And so, again, I mentioned that these deep learning algorithms are black boxes. They're very clever. They can find associations that we might not even notice. But the problem is that these associations might be the wrong ones. So, for example, we've shown that if you have a chest X-ray data-

(35:52):
set, and maybe the cases with pneumonia have a marker that says "pneumonia" in the right-hand corner, the algorithms can really learn to identify those features and say, oh, well, this is pneumonia, 'cause it says pneumonia here. But the problem is, we don't know that. And so, in the same token, algorithms can find these

(36:13):
associations. One of the things that's really striking is that these algorithms can even identify things like your sex or your race based on a medical image. And that's problematic, because, again, these things can be associated with differences in diseases, and those can be calibrated to make predictions that seem accurate but might actually be very biased. And so we've shown, in our group and several others, that these

(36:36):
algorithms may perform well when we look at them initially, but when we look under the hood and we do these sub-analyses, where we ask the algorithm, hey, how do you do on Black versus white patients, or males versus females, there actually ends up being quite a bit of a difference. And so, as much promise as there is for AI to really deliver

(36:56):
healthcare at unprecedented speeds and scales, if these algorithms are biased, they could also create unprecedented levels of disparity. And so I think it's really important.
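The sub-analysis Paul describes amounts to computing the same performance metric separately for each demographic group. A minimal sketch with scikit-learn; the labels, model scores, and group tags are synthetic, with extra noise deliberately injected for group B to play the role of the hidden bias.

```python
# Sketch: overall accuracy can hide a gap between groups; compute the
# metric per subgroup to expose it (pip install scikit-learn).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic ground truth, model scores, and a demographic tag per patient.
y_true = rng.integers(0, 2, size=1000)
group = rng.choice(["A", "B"], size=1000)
# Make the fake model deliberately noisier on group B.
noise = np.where(group == "A", 0.2, 0.6)
y_score = np.clip(y_true + rng.normal(0, noise), 0, 1)

print("overall AUC:", round(roc_auc_score(y_true, y_score), 3))
for g in ["A", "B"]:
    mask = group == g
    print(f"group {g} AUC:", round(roc_auc_score(y_true[mask], y_score[mask]), 3))
```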

Charles Schelle (37:08):
I hope one day it gets to the point where it learns to clean up the data and recognize that, to the point where you're splitting up the data to get more custom results and recommendations for a patient. I'm just imagining walking out of the doctor's office, or maybe getting an email from my insurance provider, with

(37:29):
basically, like, a health equity report. Like, you've visited the doctor, and based on everything that's been examined and said, based on your race and gender and socioeconomic status, we think that this is the best route of care, your likelihood of improving.

Paul Yi (37:46):
Yeah, I think that is a very intriguing and exciting possibility. But again, I always kind of have to take things with a grain of salt, because, doing this kind of research, I've seen the pitfalls with algorithms when they get a lot of data inputs. They can learn how to do things that are really just associations rather than true,

(38:06):
I guess, meaningful... well, these are associations that might not actually be meaningful from a physiologic or disease standpoint. There's a really well-publicized study that was published in the journal Science back in 2019 by Dr. Ziad Obermeyer. He's an associate professor of public health and statistics out at UC Berkeley, also an emergency room doctor.

(38:27):
And, working with insurance companies, some of the big players like United and Cigna, they looked at these algorithms used to make decisions about who should get approved for certain procedures or certain visits. And what they used was, they tried to say, is this person likely to basically incur high healthcare costs? But what they found was that these algorithms were actually exhibiting a lot of racial bias, meaning they were

(38:51):
disproportionately disadvantaging Black patients compared to white patients. And the reason why this was so groundbreaking back then was that, up until that point, these algorithms had been operationalized. They had been used to make actual decisions, and people hadn't really been aware of these pitfalls. And so, I guess what I'm trying to say is, again, that's just another example where these algorithms that can take

(39:13):
all of these data inputs can be very powerful, but they can also be problematic, 'cause they can find patterns that may not be what we hope them to be. And so, yeah, I think it's definitely a real possibility, but I think we have to really validate these things in rigorous ways, right?

Charles Schelle (39:28):
I mean, they definitely have to be, like, peer-reviewed, for instance, to not give you basically meaningless data, right? Because I think that's what you're kind of going after: here's a chart, but is this chart just a cool collection of numbers that could show you information, versus actual, helpful, true medical advice, information

(39:50):
that's actionable and telling the real story? Kind of like all those rankings that you'll read online, like best city to retire, and it has all the different data points, like, oh, why did they choose this?

Paul Yi (40:04):
Yeah, totally. No, it's well said. I think the algorithms can find patterns in anything, and it's important that we really make sure the algorithms are finding what we want them to find.

Dana Rampolla (40:15):
It reminds me of when personal computers were first coming out. There was a term, GIGO: garbage in, garbage out.

Paul Yi (40:22):
Oh, totally, totally.

Dana Rampolla (40:24):
It kind of makes me think of that, because I'm hearing how it sounds like it'll be great for triaging, but maybe not necessarily for treating, at this point.

Paul Yi (40:34):
Yeah, yeah. Just to give some color and get slightly technical here: these algorithms are basically these massive statistical models. The problem is that these algorithms can essentially memorize data. And that's where I worry about it, when we start incorporating all of these different variables, like race and gender, and you

(40:57):
could throw anything in there. There's a good chance that these algorithms can actually learn to identify these patterns... or, rather, memorize the data, rather than identify or learn something that's an actual skill, something that transfers or generalizes. So that's the term that we use: generalizability. How do these algorithms perform in the real world? Because it's one thing to say, in our lab, in our research

(41:19):
setting, this worked pretty well. But, you know, your mileage might vary when you actually use it in a different population. Yeah, I think it's gotta be tested out.

Charles Schelle (41:28):
That's interesting, to hear what it can do with numbers, or to treat certain data, because we're hearing a lot about how these large language models are just predicting the likelihood of what the next word is, and not actually creating original thought, in a lot of instances, right? So it's then taking data and saying, no, this is data that I

(41:49):
know, so this is the data that's most likely to be spit out, right?

Paul Yi (41:54):
Yeah, yeah, totally. These large language models have actually been likened by some to parrots. They basically learn to regurgitate or repeat what they've seen or what they've heard. Now, a parrot... I don't know exactly what goes on in the mind of a parrot, but I'm pretty sure they're not actually thinking about the words they're saying. So, no, definitely.

Charles Schelle (42:15):
We've talked a lot about the different things going on with AI and the industry and everything, but walk us through the main projects that UM2ii is working on, tackling some of the things that you've talked about.

Paul Yi (42:30):
We were talking a lot about fairness and bias in these algorithms. We're doing a lot of the foundational work that's really important for tackling this problem. One piece of it is simply figuring out, how do we really identify these biases? One thing is that it really wasn't until a few years ago that people started realizing, hey, these algorithms can be biased.

(42:52):
But even the definitions of that... there's kind of this gap between the statistical and machine learning worlds and the clinical side of things. Because these notions of fairness, meaning what defines or makes an algorithm fair or not, are very well written out in mathematical equations, and the machine learning people will say, oh, well, we know how to measure that.

(43:13):
But if you ask a doctor, it's not gonna necessarily translate. We literally use different translations, different terms, for things. Take something called sensitivity, which is an important statistical property for screening tests. The machine learning and statistics people call sensitivity "recall." And so that's just one literal difference in our

(43:35):
languages. But when we extrapolate that to the actual equations, it gets pretty confusing, because what might be statistically important or meaningful may not be clinically meaningful.
So we're doing work to evaluate how these different definitions of fairness impact the conclusions that we draw. And the rub is that whether we conclude that an algorithm is

(43:58):
biased or not, that's gonna impact things like policy. When physicians and advocacy groups like the American College of Radiology go to Congress and testify, they lobby, they talk about what they need in the legislation. They bring the research and say, this is what the research shows, this is why we need a law that allows for this type of

(44:19):
regulation. Well, if our conclusions differ based on the definitions that we use, we have to know that and really have a good handle on it. And so, again, it's like translating things from the technical side into the clinical world. We're also doing things like developing techniques to actually reduce the bias when we train these algorithms. Like I said before, the algorithms try to find these patterns in data, and they try to really optimize a

(44:40):
problem. It's basically saying, hey, get the highest accuracy you can to diagnose this disease. And so it might learn these associations we don't want it to. But what if we said, learn with the highest accuracy while maintaining the fair treatment of these groups? Well, that's a different kind of problem. And so I think it's a very exciting thing that we're

(45:01):
working on in terms of fairness and bias.
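One common way to pose "highest accuracy while maintaining fair treatment" is to add a fairness penalty to the training objective. This is a generic sketch of that kind of penalized loss in PyTorch, not UM2ii's actual method; the batch is synthetic, with two demographic groups labeled 0 and 1.

```python
# Sketch: a fairness-regularized training loss. The penalty pushes the
# model's average score for the two groups toward each other while the
# task loss still rewards accurate predictions.
import torch

def fair_loss(logits, labels, groups, lam=1.0):
    task = torch.nn.functional.binary_cross_entropy_with_logits(logits, labels)
    scores = torch.sigmoid(logits)
    gap = scores[groups == 0].mean() - scores[groups == 1].mean()
    return task + lam * gap**2  # lam trades accuracy against the group gap

# Tiny synthetic batch: 8 patients, random labels, fixed group membership.
logits = torch.randn(8, requires_grad=True)
labels = torch.randint(0, 2, (8,)).float()
groups = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])

loss = fair_loss(logits, labels, groups)
loss.backward()  # gradients now reflect both accuracy and the fairness gap
print(loss.item())
```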
Shifting gears a little bit, from the trustworthiness side to something maybe a little more human-centric, is this idea of human-computer interaction. AI, like I said, has a ton of potential. I'm really excited about the future. But I think we've also seen, with other technologies, like our email and our smartphones... email was revolutionary, right?

(45:22):
You can just send off message after message, any time of day. You can have instantaneous communication. But I think a lot of us have seen how email has kind of taken over our lives in a lot of negative ways. You know, it feels like we can't ever get away from work. It feels like, oh my gosh, I need to reply to every single message. And it's really easy, right? Writing a snail mail letter takes effort.

(45:43):
So there's kind of a higher bar to entry. Email is just type, type, type, send it off. I think that with AI, if we think about medicine, there's a lot of potential, like I said, but I also worry that we're gonna have this overload of notifications, and we're gonna get mental fatigue. As a radiologist, I look at a lot of studies every day, and

(46:04):
I'm trying to minimize the number of clicks I have to do, the number of pop-ups I have to click through. Well, if AI adds a burden onto me, it might actually have the opposite effect of making me better. It might actually make me worse, because if I'm getting annoyed, or if I'm getting overloaded with these notifications, I'm gonna have less brain space.

(46:26):
I'm gonna have less mental capacity to look at that scan in front of me. Or, let's say a doctor is using that ambient technology to transcribe the encounter. Well, what if it has a bunch of errors, and it's like, I should have just done this myself? It'll just add to my burden. So one of the things that we're doing in collaboration with a human-computer interaction group at Johns Hopkins is figuring out, how does AI impact the

(46:50):
cognitive or mental overload of physicians? What is the optimal way of delivering it? Should it be a widget that pops up on your computer screen? Should it be something that's actually built into the current workflows? Or should it be something else? And so that's another important area. And then the final piece, I'll say... there's a lot of different things we're exploring, but one that doesn't

(47:10):
get away from the medical side is really enhancing the value of medical imaging and data. We're limited in what we do now. If you get a chest X-ray, or you get a CT scan, radiologists, we just kind of describe what we see. We say there is a tumor, it's in this part of the body, but we don't necessarily have that quantitative information, like I

(47:31):
said before. So I think the catchphrase you can think of is developing novel imaging biomarkers, meaning we're taking the imaging and we're trying to say, are there certain markers or signs that indicate some biological process, whether it's disease or maybe metabolism? And so, one thing to consider: a lot of people who are into

(47:52):
fitness, they ask, what's my percent body fat? And right now it's pretty crude. You can use these little monitor things that send an electric pulse, and they supposedly tell you how much body fat you have. Imagine you get an MRI, and we can actually tell you exactly what percentage of your body is fat compared to muscle. And then we can actually create things like age- and sex-

(48:14):
normalized curves, where we can tell you, you're in the 90th percentile for muscle, so you're doing pretty good. Or, hey, you're a little bit low on the muscle side, maybe we should get you into physical therapy. This is really important for the elderly, because there's something called sarcopenia, where we have low muscle mass, and that really predisposes you to a lot of health problems.
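Those age- and sex-normalized curves boil down to placing one patient's measurement on a reference distribution. A minimal sketch with scipy; the reference means and standard deviations are invented for illustration, not real normative data.

```python
# Sketch: convert an MRI-derived muscle measurement into an age- and
# sex-normalized percentile via a z-score (pip install scipy).
from scipy.stats import norm

# Hypothetical reference table: (mean, standard deviation) of muscle mass
# in kg for an age band and sex. Real curves would come from population data.
REFERENCE = {
    ("70-79", "F"): (18.0, 3.0),
    ("30-39", "M"): (32.0, 4.5),
}

def muscle_percentile(mass_kg: float, age_band: str, sex: str) -> float:
    mean, sd = REFERENCE[(age_band, sex)]
    return 100 * norm.cdf((mass_kg - mean) / sd)  # z-score to percentile

# A 74-year-old woman with 14 kg of lean muscle on MRI:
pct = muscle_percentile(14.0, "70-79", "F")
print(f"{pct:.0f}th percentile")  # a low percentile flags possible sarcopenia
```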

(48:34):
So I think about that when I think about my grandmother. She's not the most robust individual. Maybe we could tell her, hey, you're at this percentile, we can do some rehab and improve your health outcomes. So those are just some of the areas, and we're always looking for collaborators, looking for new areas to work on. And the idea, again, being moving that needle from the

(48:55):
technical side into the clinical world, to improve human health.

Charles Schelle (48:59):
I'm just thinking also, as a side component to this: everyone assumes, and to a degree that may be valid depending on how this goes, that this is about threatening jobs for certain industries. But it almost seems like the one project that you're working on, about how you can reduce the noise from the

(49:20):
notifications... well, maybe that's where you're repurposing somebody's job, and you're helping them go through and sift through that data, to explain it to the doctor, the radiologist, and condense and synthesize all of it, to make sure there's, basically, quality assurance, quality control.

Paul Yi (49:39):
Yeah, totally. I think that when cars were built, there were new jobs. You needed mechanics, you needed people to put the cars together, you needed people to do detailing, things like that. Now, that analogy doesn't translate totally or directly to AI, but I think it's similar. There's gonna be a whole bunch of new, different jobs. Even if we think about ChatGPT, there's this idea of prompt

(50:00):
engineering, making these questions that we ask better. If you look for prompt engineering jobs right now on Google, you're gonna find dozens of positions available, which I think is pretty crazy, that they're already coming up with dedicated jobs for this. But again, I think it's just the tip of the iceberg of the new types of jobs that are gonna be available. And then one thing I'll add is, I think people often wonder, will

(50:21):
AI replace radiologists? Will it replace other doctors? I think that in an ideal situation, where we had the best, perfect technology and the perfect data, that's possible. But it's like most things in life: the data's not perfect, the technology's not perfect. There's a lot of pitfalls. So I think that our jobs are safe, at least for the

(50:41):
foreseeable future. And I'm really optimistic, though, because of the potential I've described.

Charles Schelle (50:47):
Yeah. And unfortunately, as COVID-19 has shown us, there's always gonna be an X factor. Some new disease or something comes along, and that just changes everything else.

Paul Yi (50:57):
Oh, totally, totally. So we'll see what the future holds, but I think it's an exciting time.

Charles Schelle (51:03):
Absolutely.

Dana Rampolla (51:04):
We'll try to look at it as a tool, not like the movie I, Robot, where everything's gonna be taken over and we won't have jobs because we won't be needed anymore. So, Paul, we mentioned earlier that you have your own podcast, which I'm looking forward to listening to, to learn a lot more about this topic. Can you give us a little shameless plug? Tell us what we'd be in store for if we take a listen.

Paul Yi (51:27):
Yeah, sure. So I'm the founding co-host of the Radiology: Artificial Intelligence podcast, and it's the official podcast of a journal called Radiology: Artificial Intelligence. It's the official AI journal of our primary radiology society, the RSNA. And we basically interview leaders in the field of AI and

(51:48):
radiology. It could be people who have published an article in our journal, or it could be people who are developing really cool technologies in academia, maybe in industry. And the idea is just giving us space to talk about these things. We've interviewed people ranging from radiologists like myself, to PhDs working in the field, to people in bioethics who are really not

(52:11):
even in the field of radiology but are tackling these bigger ethical considerations, like fairness and bias. And the idea is, again, that collaboration: literally bringing people around the table to talk about these things that traditionally can be pretty siloed. And so we release episodes on a monthly basis. It's available on Spotify, Apple, or wherever you get

(52:32):
podcasts. And yeah, I'd encourage anyone out there listening: check us out. I think you'll find a little bit more detail on some of the topics that we've discussed on the Pulse today. My co-host is Dr. Ali Tejani. He's a radiology resident at the University of Texas Southwestern. He joined about, I'd say, eight months to a year ago, and I tapped him to join the team because he had previously been a

(52:56):
medical journalism intern, I think at CBS News. So I'm really lucky to have him join me. And yeah, that's one thing I've really enjoyed about the podcast. It's an opportunity to bring in a lot of different perspectives, even people who are still in training.

Dana Rampolla (53:11):
And what's either your most recent or your next
upcoming podcast about?

Paul Yi (53:17):
Yeah. So the most recent episode that's been released is actually a two-parter. It's actually about collaboration. It featured myself as a pseudo host-slash-guest, along with Dr. Jeremias Sulam. He's my collaborator in biomedical engineering over at Johns Hopkins, and one of his PhD students. And we were talking about some recent work that we had published in the journal, work we had presented, including on

(53:39):
fairness and bias in AI. And then one that will be coming out in just a matter of one or two weeks is an interview with Dr. Hari Trivedi from Emory University, who was one of the lead authors of a paper describing a very large database of mammography, or essentially breast X-rays, is kind of how to think of it.

(54:00):
And it was intentionally made to be very diverse, to have a large proportion of Black patients compared to white patients, to have geographic diversity, the idea, again, being to promote development of algorithms that are fair, that are unbiased. So that one is pretty cool, because it talks to a lot of the issues about building a dataset: how do we make sure that

(54:21):
it's useful from a technical standpoint, but also from a demographic standpoint, to have good representation of all people groups?

Charles Schelle (54:29):
I'm definitely putting it in my podcast app to download and subscribe.

Paul Yi (54:35):
Awesome.
I appreciate that.

Charles Schelle (54:38):
Well, thanks for everything, Paul. We learned a lot, and this topic is not going to go away. It's here, the technology of AI in medicine, and there's so much more to discuss. Hopefully we'll have you on again soon. But thank you so much for joining us.

Paul Yi (54:53):
Thanks so much for having me.
It's been a pleasure.

Jena Frick (55:00):
The UMB Pulse, with Charles Schelle and Dana Rampolla, is a UMB Office of Communications and Public Affairs production, edited by Charles Schelle, marketing by Dana Rampolla.