
November 29, 2023 57 mins

In this episode, Dr. Judy Wawira Gichoya, Associate Professor in the Department of Radiology and Imaging Sciences at Emory University School of Medicine, details her journey from Kenya to the United States, from interventional radiology to artificial intelligence.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:04):
So, really a lot of my neural network is right place, right time, and many, many sponsors who have lit the light, you know, ahead of me. And some of those tended to be these physicians who were very interested in informatics. I didn't even know that that was what it was called then. And so, I ended up learning how to work on an open-source medical record system, ended up deploying it in many places around the

(00:29):
world, virtually, some in person.
And so that's how I got into sort of this world of computers in medicine.
Hi, welcome to a new episode of NEJM AI Grand Rounds. I'm Raj Manrai, and I'm here with my co-host, Andy Beam. We're really excited to bring you our conversation with Professor Judy Gichoya.

(00:53):
Judy is an interventional radiologist and AI researcher at Emory University, where she's also an associate professor in the Department of Radiology. Andy, I think this conversation was special in how it repeatedly reflected Judy's unique approach to solving problems that truly comes both from her deep clinical expertise and her computational skills.
I completely agree, Raj.

(01:14):
Judy is really one of a kind.
In addition to being a world-class machine learning researcher, she's also a practicing interventional radiologist, a truly rare combination. She said something during our conversation that really stuck with me. One benefit of AI that folks often point to is that AI will take over the quote unquote easy cases, and doctors can spend more time on the difficult
ones that might trip up the AI system.

(01:36):
However, Judy astutely pointed out that this would actually be kind of a terrible job for the doctors. It would probably be pretty stressful to only look at edge cases that have a lot of ambiguity. It would, in effect, amplify the most stressful parts of the job for doctors. This really challenged a core assumption I had about the integration of AI in the near term, and it really speaks to how deeply thoughtful Judy is and how

(01:57):
grateful I am that there are MDs like her at the forefront of AI in medicine.
The NEJM AI Grand Rounds podcast is sponsored by Microsoft, Viz.ai, and Lyric. We thank them for their support. And with that, we bring you our conversation with Dr. Judy Gichoya on AI Grand Rounds.
Judy, welcome.

(02:17):
So, this is a question we love to get started with: could you tell us about the training procedure for your own neural network? How did you get interested in AI, and what data and experiences led you to where you are today?
So, I actually did not set out to do or work in AI, just the same way that, even if right now I work a lot in bias, I did not set out to do that.

(02:41):
And my journey really around computers is this passion of understanding how technology is used for health care delivery. And so, during my medical school, which was in the western part of Kenya, I was very excited to get to my clinical years. And this is like the fourth-year of medical school to

(03:02):
the fifth-year transition. And, you know, it's almost like having your child move from writing with a pencil to a pen, and you're looking forward to it. And I got there and, wow, I was really disillusioned. It felt to me like busy work, and, well, whatever you were trying to look for in terms of results, uh, just was impossible to find.

(03:24):
So, really a lot of my neural network is right place, right time. And many, many sponsors who've lit the light, you know, ahead of me. And some of those tended to be these physicians who are very interested in informatics. I didn't even know that that was what it was called then. And so, I ended up learning how to work on an open-source medical record system, ended up deploying it, uh, in many places around the

(03:48):
world, virtually, some in-person.
And so, that's how I got into sort of this world of computers in medicine. And so, during my residency, when Geoff Hinton said, stop training radiologists, we saw a drop in the number of medical students who were getting into the radiology residency.

(04:08):
And so, I started this journal club through the American College of Radiology, just to encourage people to at least have dialogue around this. So, it's really an accident that I ended up in AI, and there's no Ph.D., no formal training in this space.
Could I just follow up? So, we like to get beyond the biosketch a little bit on the show. So, how did you get interested in medicine in the first place?

(04:30):
Like, what was it about that that, uh, led you to a career in medicine to get started with?
Actually, another accident.
I was not interested in medicine.
I was very interested in computer science.
After my parents told me that I could not be a pilot, and I didn't have anyone who would support my dream to be a pilot. And so, in Kenya, when you do well

(04:52):
in school you're sort of expected togo to specific professions and I didn't
make up my mind you don't really go touniversity immediately you have like this
18-month period and I didn't make up mymind till one last day in the hospital
near my village where my mother workedthen, there's this doctor called Dr.
Surmat, who had this very sickpatient and was driving all over

(05:17):
the country looking for blood.
And he told me that I would have a good time at Moi University because of one thing: that the system of learning was problem-based. It would be very self-directed. And so, you'd have agency to do a lot of things. As you know, Andrew, I like a lot of that, that type of freedom.
And so, I said, okay, I'll try it.
And it worked out okay.

(05:37):
I did well.
And now I'm an interventional radiologist, and I truly do enjoy living in this world of computers and medicine.
Judy, what drew you to radiology specifically?
Because it's very techie. When I graduated from Moi, I went to work in two hospitals in Kenya, but I still kept doing my informatics work, really training, doing capacity

(06:01):
building and implementing medical records systems, to just start what would be the first data collection. And so, at that point, I got an opportunity to come to Indiana University, at the Regenstrief Institute, to train in health informatics. And then I went back, and I did a lot of clinical decision support.

(06:22):
And so, in that process, two things happened. One, during my training at Indiana, what I realized, there was one day we discussed a paper that said the decision maker in a ward round is the attending. I felt really disillusioned, because I thought that you should never write such a paper, and if you are already in the space where you understood what medicine was, that's like

(06:47):
a common sense type of question.
So, I figured that I would like to be the type of practitioner who lives in medicine and not be too far removed. Now, I do know that one day I'm going to walk away from medicine, but I felt that my ideas were better for just being in the health care space, seeing patients and just being there.

(07:08):
So, the second thing was the CT scanner near my village (that's changed, actually) was pretty old, and you'd have to drive to Nairobi. I had a relative that we had to drive to Nairobi, which is two hours away, come back home, and then wait for two weeks to get a result. So, you can imagine, in the context of head trauma or just head injury, that's really difficult.

(07:29):
And so,
what I thought was that, with my superpower of informatics, I could probably get to a point where I would help with teleradiology; so those were my ambitious goals. I haven't done all those things, but I thought that radiology would be a good fit, because I saw the gap, and I was very drawn to the technology

(07:50):
side of things of radiology.
Could I ask a follow-up question?
So, I think I've heard you say, twice now in your career, you've been disillusioned with medicine. Could you talk a little bit about the source of that disillusionment? It's a recurring theme that we've heard from other folks like Ziad Obermeyer, who I think you know well. How did you overcome that?
And why were you disillusioned?

(08:10):
Yeah, so this was mainly the transition that I first mentioned, you know, you're waiting and waiting and waiting to be now in the inpatient settings. And that was just the structure of the curriculum. But when I got there, I was like, wow, you know, it's almost like, this is not what I thought it would be. But remember, you may not know this, but,

(08:30):
uh, during that time, Kenya was also struggling with the HIV pandemic. And so, it was very difficult to take care of patients without resources. I think that type of disillusionment is very different. It's more that it forces you to fix the problem that you see is affecting you. Really, the electronic medical records systems I was deploying then were to support us, to take care of HIV patients, to

(08:55):
figure out: when were they dying? Who was hungry? I mean, it's not real. Like, are they taking the ARVs? And that type of data was very instrumental in setting up one of the largest patient care cohorts in Kenya. And so, you know, to be able to live in this world where you can see how your technology is used, I think, is an absolute pleasure.

(09:18):
And it continues to today, where, you know, it's still an absolute honor to take care of patients. I think that everyone is tired, because the busy work and the paperwork is a lot, you know. And, as someone who's practiced medicine in Africa, in many places, and also here in the U.S., I can tell you, it was amazing to be in Malawi a few years ago, where the doctor would come and say, oh, these are the

(09:42):
clinical things that I'm facing. And you would come up with a plan, and they would come back in the afternoon, and you discuss what you found. But here with technology, you may not even know the radiologist who interpreted your study. So, there's quite a big gap, and I feel that, unfortunately, the hospital administrators keep growing and growing, but they have no love for what we do.

(10:06):
Yeah, so I think unfortunately a lot of that reflects what I've seen in medicine. I also think it's a great transition to the next thing that I want to ask you about, which is some of your own work. So, I think what's really unique about you is that you are a leading expert in AI, but you still are a full-time clinician. You still are a practicing radiologist, unlike a lot of other folks in this space who have moved purely into research or administrative roles.

(10:29):
So, I'd like to frame some of this conversation around one of your papers. It's "A general-purpose AI assistant embedded in an open-source radiology information system," because I think it gets to the core of a lot of what you were just talking about. So, could you tell us a little bit about that paper? And then we can jump off from there.
Yeah.
So, this actually is sort of like the planting seed of three

(10:51):
years or four years of work, thinking about how AI is going to be deployed. So, if you think about the evolution of AI deployment, at least in radiology initially, and let's talk about the RSNA, which is the Radiological Society of North America, where all the vendors would come, and they opened a new exhibit for startups.

(11:12):
You would see that most people thought, well, I'm going to lock you into a platform, right? If you're in this platform, then it's a marketplace, like the Apple store, the Google store, where you can go and cherry-pick the applications you want, and you run them. And that is still in use. The second is sort of the narrow use case of AI.

(11:32):
And this is saying, okay, I'm going to look at your wrist or your chest x-ray or your brain CT. And I'm just going to tell you, you have bleeding, but I won't tell you if you have a tumor or anything else. So, you find one image can have almost 10 algorithms running on it to get you the information. Now imagine if you're a radiologist and you have to consume all this information.

(11:55):
So, it's really work that I do with a really close friend of mine, Dr. Saptarshi Purkayastha from Indiana University, where he always forces me to do this qualitative research, which is very difficult to do, but it really brings you to the heart of the doctors and the patients who we are caring for. So, this, the first instance here that you see, was for us to integrate

(12:16):
a way that would allow us to see how. So, we used OHIF, which is like an open-source viewer for medical images. And we were putting in an engine and trying to say, could you do this: uh, I don't want to see anything for pneumonia, you know, like, could you even get to a point where you can personalize?
But more importantly, and this is work that was funded by the NSF,

(12:39):
we were going to treat the agent that is deployed as your own assistant. So, it's for, you know, for us right now. And, you know, Andrew would have one, Raj would have one. And these assistants, what they would do is learn what Judy wanted. And then what I would say is maybe that Judy's very good at reading pneumothorax, right?

(13:00):
So, with time, that agent would not have to keep, like, giving me more alerts and more alerts and more alerts. And so, this was the original work that would allow us to do the reader study, which we just finished, like, three weeks ago, where we now had the radiologists using this. We learned a lot from this process. In fact, the final end product was not

(13:21):
on an open-source platform. And that's because we had to learn about standards. We had to use multiple models, and we just wanted something that was more pleasant in terms of appearance. And this work is not out of reach, because what we're seeing right now is that human-machine collaboration is an area that requires a lot of work, right?

(13:42):
Because AI outputs, what you give, can make your best expert worse. And so, we're seeing at least work in radiology showing you don't really gain productivity. What you get more is, and I can explain that second part a little bit later, but what you observe is that your experts, their performance decreases, but your

(14:03):
novices, their performance increases.
So, how do you deploy AI in this type of real-world scenario?
Yeah.
So, it's really interesting.
So, I think there was a lot there.
It sounds like the AI assistant actually isn't reading the images themselves. It's actually triaging them to the member of the health care team who is an expert in that particular kind of thing. So maybe, I think that this is a pneumothorax read, so

(14:25):
I'm going to send it to Judy.
You know, I think that this is a pneumonia read, so I'm going to send it to Raj.
Yeah.
And that's more of like a downstream use. It wasn't really meant to be that way. And people have used those types of systems in a different way. I actually have very strong sentiment around them, not very good sentiment. I think it's good when you figure out who's your best pneumothorax reader.

(14:46):
But you can imagine, no one wants to read very complex studies every day. If you think about cognitive burden, right, or when you've seen, like, autonomous AI being postulated that, well, let it read all the normal chest x-rays, then you read all the diseased ones. Imagine where we work, right? It's academic hospitals. The cancer you see is for a 36-year-old. Those people tend to be the same age as you, with metastatic disease.

(15:10):
I personally would not have good job satisfaction if those are the only cases I read, without even just the normal chest x-ray interspersed in between. And so, what this one was doing is learning almost like your personal preferences. And yes, you can use that as a downstream task, but you can also use it to label using few-shot learning, which is to label, like,

(15:33):
if you see 10 of your radiologists are always moving something, so you could go back and relabel. The other thing was that you could also teach these agents things they hadn't seen, right? So, these tail-end cases. For example, if you have a rare disease that not all agents, based on whoever was there, were able to see, then they could say, hey,

(15:54):
do you know, you can almost think you're giving personas to these agents and trying to see how to collaborate. So, it was really like a proof of concept for us to figure out: how do you deploy AI in this type of network, and then trying to see, what do the radiologists feel about it?
So, that's a really interesting point that I had never really

(16:14):
appreciated until you just said it.
So, I always hear this analogy of, like, AIs can land planes, but when something goes wrong, they can't land them on the Hudson, you know? And if I'm hearing you correctly, if landing a plane on the Hudson now becomes your full job, that's a very stressful job to have, and not one that most people would enjoy, and that's not something I'd really appreciated until you just said it.

(16:35):
Oh yeah, yeah.
I mean, even just our types of work. For example, if you work in an academic center, I'm an interventional radiologist, so you're always doing procedures, right? And I put in more ports than I remove. You only remove them when patients are sick or when they've cured their disease. You see patients at the beginning of their journey, of a long journey. And so, if you are looking at only diseased, only abnormal mammograms,

(17:00):
I personally would not enjoy that type of job.
This ties in perfectly to a bunch of other questions that I want to ask you. So, you mentioned the, you know, the famous Geoff Hinton quote about we should stop training radiologists. He's since kind of recanted that, I think, a little bit. Andrew Ng also had a quote that was very similar to that. So, I'm not gonna ask you the replacement question.

(17:21):
I'm gonna ask you: over the next five to ten years, how do you see radiology changing as a result of AI?
I think the most popular quote there now is that people who use AI are going to replace people who don't, right? I believe that we are going to see AI deployed in new ways, not because of the downstream effect.

(17:43):
So, it's been a couple of months. We're seeing the foundation models. If you're really using them, you can see what they're not going to be able to do. But I think we're going through a lot of hype again around AI. So, it's a little bit difficult to see through sort of, like, the weeds there. But I think that AI is going to change in terms of the backend, the back-office work.

(18:05):
If you think about preprocessing, there are a lot of algorithms, not necessarily AI, that are used to preprocess images. And I believe that that is something that is going to experience a lot of uptake. If you think about something like cardiac disease, which has moved away from invasive cardiology to CT scans, right?

(18:26):
So, the CT scan needs to be very precise about your breathing, your heart rate, and then reconstruct and measure calcium. That workflow is not sustainable for humans, but this is a test, if you're coming in with chest pain, that's more frequently ordered, especially as the radiation doses have become more optimized and reduced.
And so, those types of things, I believe, we're going to

(18:48):
see more AI deployed then.
So, not necessarily, like, more in an assistive role, you know?
Sorry.
I, so, you're just, I think, dropping insights on me here left and right, because, again, like, where my brain goes is the sort of diagnostic head-to-head comparison. And I was trying to reconcile that with what you said, about how there are M.D.s who use AI who are actually sometimes worse, especially if you're an expert M.D.

(19:12):
And so, I was trying to figure out how that was all going to shake out, but you pointed out that there's this whole workflow before you're even getting to a decision point, where AI could come in and actually make a big difference in the near term. I don't want to answer my last question yet, so that's the one I'll hold off on. Okay. So, you mentioned something else very interesting just now, which is that if

(19:33):
you've worked with foundation models, you know some of their limitations. So, could you tell us about that from the perspective of a radiologist?
So, um, I think most of this is going to be, so, radiologists have always, always dictated their reports, right? So, using voice-to-text is not new.
And we're going to an era where a lot of the medical content is, because

(19:57):
of this ambient listening, going to be generated using, you know, a large language model of sorts. And so, I think that, I mean, I have many opinions, but one of them is that the making of an expert, or the distinguishing of an expert, in the era of large language models is very difficult. So, Judy does not need to be an expert programmer.

(20:18):
I probably just need a few things.
I need to hire people who can be expert programmers, but I just need a few things. But when you look at, for example, these subtle differences, and I think those are the new papers that we are seeing now come through, like advice for oncology, right? I may just know the broad drugs, but someone else is going to have something different. I think that's why we are seeing the large models fail.

(20:41):
I do not want to preempt something that I have no exposure to. We know that the foundation models can consume images pretty well. When we think about the diffusion models, and using that same approach for generating synthetic data, I think we've seen quite a lot of gains around that.
And I believe that you will not necessarily need to work in an academic

(21:03):
center to do amazing AI research.
I believe, in the next two to three years, you're going to see new types of data sets that we can build models from, you know, because this was really a privilege of the past.
And so,
the same technology that is used to dictate, what we know as radiologists is that it's very difficult to get accurate dictations; every time, you're always

(21:27):
typing or doing something different.
And if we are going to have a fill-in-the-blanks, right? So, the ambient listening is like, well, maybe Judy meant this. I think we're going to see, like, this voice. Yes, we do know that the EMR contains a lot of duplicated reports, but we're going to even have situations where we have just junk that is not really useful; you'd

(21:50):
rather have a short report that is more precise about what you're looking for.
And so, this challenge is, again, the same opinion, one of the questions that we kicked off with, which is: should we always be deploying technology, or should we really use the deployment of technology to understand the underlying problems that we're trying to fix? For example, maybe we don't need 10 notes from 10 medical students on one patient on

(22:15):
one day, and we just need a few good notes, and figure out the billing issue, right? Then we don't have this bloated EMR that is full of things that we cannot use. And so, today we see the inbox, that's not a big problem for radiologists who don't do procedures, the message box, which is being taken over by AI systems.

(22:39):
But some of the things that are being said, well, you can use it for order refills. Just think about the logistical and regulatory hurdles that need to be bypassed to even get something like that. I think that's what people are not realizing. So, in my opinion, I think that the companies that build these technologies usually don't disclose what they have worked on.
What we're seeing now, my guess, is what they knew maybe one year or

(23:02):
two years ago; that's the knowledge that we have, unless you're in the core team that is building this work. But the main things that are not going to be very straightforward are the regulatory pitfalls. And then, when you have one disaster case, there's no thinking about risk.
So, the technology, even if it may not be there today, is gonna get to a point

(23:23):
where it does a lot more work, and maybe the radiologist of the future is going to also be a pathologist, because they're now empowered to read the rad-path correlation. But the essence of risk, trust, and the technology being developed by people who don't see patients may be what limits the technology from ever being used.

(23:46):
So, maybe this is a good place to transition and ask this question, because I heard a lot about overflowing inboxes and radiologists also being pathologists in the future, and I wonder, as a practicing clinician, if you think that AI is going to make doctors happier? I think we're at a moment, so, I saw your reaction,

(24:07):
but I think we're at a moment now where doctors are historically unhappy. I think that there's a lot of evidence in the literature to support that. And one, I think, hope is that AI will take away a lot of the grunt work that physicians have to do. But the flip side of that, I guess, is the theory of conservation of RVUs: that you're going to have to keep producing.

(24:28):
And if you can produce more, then you will be asked to produce more. So, I think I have a sense of where you fall on that, but I'd love to hear.
It's not going to make doctors happy. We're just going to be transitioned to another form of data-entry workforce.
Yes.
Yeah.
People have looked at this.
People have looked at scribes.
Did they make life easier?
Right?
We're just replacing the scribes with the AI systems.

(24:50):
The mundane work, the work that would make a big difference for people, really sometimes does not require technology. It's culture, you know, forms, figuring out who's the right person to call. I know that's very difficult for people to understand, but that still remains a mystery in a hospital. And you waste a lot of time calling and calling and calling.

(25:12):
That does not require AI. And even if AI has amazing results, who is it going to communicate those results to? So, I think that's part of the hype, in my opinion.
So, I want to believe, X-Files, like, I want to believe. But I think that probably you're right, in that a lot of the malaise is not due to pajama time,

(25:35):
but due to deeper structural issues.
And it seems unlikely that AI will fix those deeper structural issues.
All right, Raj, you want to take over now?
Sure.
So, Judy, we want to transition to some of your work on algorithmic bias and shortcut features in radiology. So, we had Marzyeh on the podcast a few months ago, and we spoke

(25:57):
about your paper together, that was published, I believe, in Lancet Digital Health, and it was titled "AI recognition of patient race in medical imaging: a modelling study." So, Marzyeh talked about the backstory of this paper, why it was surprising, and also how you pressure-tested some of your findings in different ways.
So, for our listeners today, I was hoping maybe you could just briefly summarize

(26:19):
the main findings of the study, but then jump off into what the clinical implications are of this finding. I know you also wrote another piece, a recent article that was published in Science, about some of those implications. So, maybe you could weave those two articles together and tell us about both what you found and what this might mean for practice.

(26:40):
So, this is an area that is actually a core focus of my research. It's not predicting self-reported race only; it's this concept that there are hidden signals on medical images. And for the hidden signals, as of today, between our groups and other researchers, I can show you Judy's x-ray and you'd say, "Judy's Black.

(27:03):
She's female.
This is her chest x-ray age, maybe 70."
And you can calculate my actual age, which is 50 years. I live in a deprived area, because the SDI, the social deprivation index, is encoded on that. And I am going to spend $15,000 in the next three years.

(27:24):
These are all papers that have been published, trying to show you can start to predict some of this. What we call, I mean, there's just no way that I would tell you from a chest x-ray that the insurance of the patient is Medicaid or Medicare. Those would be sort of my unconscious biases, but this is not a task that radiologists do. Then, on the other hand, we see different work.

(27:45):
One is done by Emma Pierson and Ziad, which is the work on the algorithmic knee predictions, which is better than the Kellgren-Lawrence, which is just an older system, not an older, really the current system used to determine the severity of your osteoarthritis, and has implications on who gets surgery and whose

(28:07):
surgery is covered by insurance.
We've seen work from Adam Yala that looked at breast risk prediction using the Mirai model, showing that you can perform better with these image-based models for risk estimation. You can imagine everything, prostate cancer, lung cancer, all of them could follow the same way.

(28:27):
And you can show,
hey, maybe you can get better risk prediction than the existing clinical systems, and it's even better for minorities, you know. And then the next examples you see, for example, there's work that got published looking at COPD: if you don't use the radiologist labels, you just use the pulmonary function test, you get better performance

(28:50):
than even using the radiology text.
So, on one hand, I say that it's not just race, self-reported race, which is just a social and legal construct, no biology around it. And the same with insurance, and other things that are biological, for example, age. And now, on the other hand, you have these well-performing image-based-only models.

(29:12):
What we should be reading, and including more of our recent work, where we've shown that from chest x-rays you can tell who has diabetes, just the ambulatory chest x-rays. So, you should read this type of work and see: how is this? Right? Is it a confounder? The work that has subsequently come out is that models are lazy. They pick the easiest thing that they can work on to make the prediction.

(29:36):
If the easiest thing is where the data was collected and the disease probability, then they learn that. And then, when you remove that shortcut, they pick another shortcut, and another shortcut. And so, that's how you can have models caption an image without ever looking at the image itself; it just decides, okay, I'm going to

(29:56):
say, you know, this is sheep grazing, or something like that, right, with and without the sheep, just when it lands on the simple, easy confounder.
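The "lazy model" behavior she describes can be sketched in a toy example (everything here, the features, the numbers, and the trivial single-feature learner, is invented for illustration and is not from her paper): a learner that simply keeps the most predictive feature will latch onto a site marker that happens to correlate with disease at the training site, and its accuracy collapses to chance at an external site where the shortcut no longer holds, even though the weaker true signal would still have worked.

```python
# Toy illustration of shortcut learning. Two binary features per sample:
#   feature 0 = true disease signal (agrees with the label 80% of the time)
#   feature 1 = site/scanner marker (agrees 95% of the time at the training site)
# A "lazy" learner that keeps the single most predictive feature picks the marker.

def make_split(n, signal_acc, marker_acc):
    """Build n labeled samples where each feature agrees with the label
    a fixed fraction of the time (signal_acc and marker_acc)."""
    X, y = [], []
    for i in range(n):
        label = i % 2
        sig = label if (i % 100) < int(signal_acc * 100) else 1 - label
        mark = label if (i % 100) < int(marker_acc * 100) else 1 - label
        X.append((sig, mark))
        y.append(label)
    return X, y

def accuracy(X, y, feat):
    """Accuracy of predicting the label directly from one feature."""
    return sum(x[feat] == label for x, label in zip(X, y)) / len(y)

def lazy_learner(X, y):
    """Keep whichever single feature best predicts the training labels."""
    return max(range(len(X[0])), key=lambda f: accuracy(X, y, f))

# Training site: the marker (0.95) beats the real signal (0.80), so it wins.
X_train, y_train = make_split(1000, signal_acc=0.80, marker_acc=0.95)
chosen = lazy_learner(X_train, y_train)

# External site: the marker-disease correlation is broken (0.50).
X_test, y_test = make_split(1000, signal_acc=0.80, marker_acc=0.50)
print(chosen)                            # 1: the shortcut was learned
print(accuracy(X_test, y_test, chosen))  # 0.5: chance-level at the new site
print(accuracy(X_test, y_test, 0))       # 0.8: the real signal still works
```

This also sketches her point about external validation: if the external site shares the same acquisition quirks, the marker keeps "working" and the failure stays hidden.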
So, the importance of this work is, yes, it's great if you can show me that the algorithmic knee osteoarthritis prediction is better. But if I show you

(30:17):
that the same algorithm can tell that the race of the patient is Black or white, do you know if it's really looking at the disease? Or is it looking at the shortest thing to tell you that this is the osteoarthritis severity? We cannot do this work with this type of shortcuts.
Others are easy to identify, for example, ICU markers or something like

(30:38):
that, but we cannot do this work when, we know, we cannot, there's no way that I'll tell you the insurance or the risk of a patient from a medical image. And so, this concept of these hidden signals, and their downstream impact on algorithmic prediction, is why we need to study it: it's something that we may never be able to see, but we need to understand when it fails.

(31:00):
And what if not all Black patients are the same? Maybe in Africa it's very different. And what are these models looking at? The current explanatory tools are not sufficient for us to do that. The last thing is that these shortcuts are not taken away with external validation. So, just because I bring my model to you, they don't go away, because of how we collected the data.

(31:24):
And so, it's a big area of work.
So, I'm excited to be working on this, though.
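The shortcut behavior described here can be illustrated with a small synthetic experiment; this is purely a toy sketch, not anything from the papers discussed. A logistic regression gets a weak "disease" feature and a spurious "scanner marker" that tracks the label at the training site; when the marker is decorrelated at a new site, accuracy collapses toward what the weak signal alone supports.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

def make_data(shortcut_corr):
    """Labels, a weak noisy 'disease' feature, and a 'scanner marker'
    shortcut. shortcut_corr is the chance the marker matches the label."""
    y = rng.integers(0, 2, n)
    disease = y + rng.normal(0, 2.0, n)                          # weak true signal
    marker = np.where(rng.random(n) < shortcut_corr, y, 1 - y)   # shortcut feature
    return np.column_stack([disease, marker.astype(float)]), y

def fit_logreg(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression (weights + bias)."""
    Xb = np.column_stack([X, np.ones(len(X))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    Xb = np.column_stack([X, np.ones(len(X))])
    return np.mean((Xb @ w > 0) == y)

# Marker matches the label 95% of the time in the training data...
X_tr, y_tr = make_data(0.95)
w = fit_logreg(X_tr, y_tr)
# ...but is uninformative at the "new hospital" (50/50).
X_te, y_te = make_data(0.5)
print(f"train acc: {accuracy(w, X_tr, y_tr):.2f}")   # high: the model leans on the marker
print(f"shifted-test acc: {accuracy(w, X_te, y_te):.2f}")  # drops toward the weak signal
```

The model's weight on the marker ends up much larger than its weight on the disease feature, which is the "lazy" behavior: the shortcut is the easiest route to low training loss, and external validation at a site with the same acquisition quirks would never reveal it.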
No, that's great.
So, I think there are many different paths that have been outlined by
researchers recognizing that race and other variables are encoded
in ways that are often hidden to physicians in these images and

(31:46):
in other types of clinical data.
And depending on who you ask, they'll say that the best path forward
is to, for example, learn de-biased representations if you can, right, that
don't encode those particular variables so that you can have some guarantee,
or some sort of assurance rather, that the model is not just using, you

(32:06):
know, race to predict X, Y, Z, or to make some type of allocation decision.
Um, how do you see, maybe for your own group's work, given that finding in
Lancet Digital Health, and then given the Science article where you outline
some of those implications that you just talked about, do you think this is more
of a set of technical challenges to, you know, to develop better machine learning

(32:29):
tools, to learn new representations that don't encode variables that
we don't want them to encode
for particular models? Or do you think this is, we're still in the descriptive
stage, if you will, where we need to understand what is encoded in what type of
clinical data and just make that clear and apparent to the whole community, even at
a scale that is maybe appreciated by us as machine learning and health researchers,

(32:52):
but is not appreciated by, you know, many manufacturers of these models, algorithms,
or users, physicians at the end, who are using these models in practice?
You know, what do you see as the path forward for the
next few years of research?
Yeah, I think we're still in the discovery phase.
And here's why.
If you read all these papers, most of them, I didn't set out to go

(33:13):
study what else AI could predict for us.
Really, actually, what started that project was, um, around when
George Floyd was murdered, a lot of the journals started to say, we want
to focus on social justice and we want to look at that in medicine.
And also remember a lot of patients who are Black and Latinx were dying

(33:35):
disproportionately from COVID.
And so, when we set out to do this, what happened was we were
like, oh, JSCR is coming.
They're looking for a paper.
I had attended a datathon the previous year in Singapore, and
I realized people are not using the MIMIC chest x-ray data set.
And this was just a little bit new-ish, and they said, well, let's look at this.

(33:56):
And when I start my research, I always say, what's the story?
Why am I doing this?
Right?
Maybe that's the hypothesis, but I always ask,
maybe not in a very scientific way, what am I going
to teach people about this?
Or what's the question that I want to answer?
And here I was going to say, the takeaway is going to be, we need more
diverse datasets, boom, easy paper.
And then we started, and yes, we had looked at the work from Laleh and Marzyeh,

(34:21):
which showed the underdiagnosis, right?
So, you're most likely to have a normal chest x-ray read when you have
a true disease, a true finding, when you're Black or Hispanic.
But at the same time, when we brought the same models to Emory,
the amplitude dropped, but the patterns remained the same.
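The underdiagnosis finding just mentioned (a "no finding" read for a patient with true disease) is typically quantified as a per-group false negative rate. A minimal sketch, with entirely made-up labels, predictions, and group names, purely to show the shape of the comparison:

```python
import numpy as np

def false_negative_rate(y_true, y_pred):
    """Share of true positives the model called negative ('no finding')."""
    pos = y_true == 1
    return np.mean(y_pred[pos] == 0)

# Illustrative arrays only; real work would use per-patient model outputs.
y_true = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B", "A", "B", "A", "B"])

for g in np.unique(group):
    m = group == g
    print(g, false_negative_rate(y_true[m], y_pred[m]))  # A: 0.25, B: 0.75
```

A gap like this (group B's findings are missed three times as often in the toy data) is the kind of pattern that can shrink in amplitude at a new site while its direction stays the same.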
So, it was this looking at the patterns that put us into this

(34:44):
rabbit hole for two years.
Then one of the
team members came back and said, well, it's because the models are
learning the race of the patient.
And we said, no, you're wrong, and shamed the person.
And we went back to redo the experiments and it was really the same.
So, we started now structuring the research, and you know, it's always
a shame that we never get to know what's in the kitchen, how these

(35:07):
ideas come up, to put that context.
But it's not that.
Today, when I understand these issues around, like, even systemic
racism, remember I am an immigrant,
I would do that research very differently.
In fact, I may have been too ashamed to do the research if I
hadn't done it then, you know?
And so it's,
I think we don't know enough, and especially things that we cannot

(35:29):
see, and using the current saliency and visual features for explanations,
we really cannot get a hold of what these confounders are doing.
So, I agree that there is a role for the model developer, but I think we need more
disclosures and transparency in terms of the data sets, how they're composed
and how they end up, especially those results that are never published.

(35:53):
That's what we need more to figure out:
oh, maybe there was something, a signal here.
And the way to do that, if you're listening to this and you're a researcher,
is to have more people on your team who are looking at your work.
Because I do feel that it gets better that way.
That's great.
So, your paper was on chest x-rays, and then you mentioned Ziad

(36:13):
and Emma's paper on knee scans.
Have you done a similar set of experiments around those modalities,
so around knee scans, for example?
Yes.
Yeah.
Yeah, yeah, so the paper actually was more than chest x-rays.
We looked at three chest x-ray data sets, three CT chest data sets, a digital
hand atlas, that's a public data set.
We had an internal data set for a different project for cervical

(36:37):
spine radiographs, and mammograms.
And we found the same thing across all of them.
Have you done the knee scans as yet, or no?
Mm hmm.
We did.
And the performance there is like
0.99.
If that's the case, what does that mean
about Emma and Ziad's paper?
We don't know.
We need to validate it in the real-world setting.

(36:58):
Is it because the algorithm performs better for Black patients, or because it learns
that the knee x-rays are from a Black patient?
Got it.
Okay.
All right.
Lots of future work to be done.
So that's great, Judy.
So, I think we are going to jump into the lightning round now.
Andy, do you want to kick us off?

(37:18):
Yeah.
Um, so this is always a fun part of the show, Judy, where we ask
you a bunch of completely unrelated questions, and you don't have to answer
them in one or two sentences, but I think the goal is to
cover as much of Judy as possible.
All right, so the first one is, what core principle informs your life?

(37:39):
Lifelong learning, and togetherness, community.
I truly do enjoy just learning.
I'm just a really curious person, and I enjoy reading books, and
then challenging myself on whether I agree with the authors.
So that means that if I pick a book, I will always read it

(38:00):
to the end, even if it doesn't align with my principles, but that forces
me to listen to other, different opinions that I would not listen to.
And I truly am a child of the village, and it would not be possible without
just having other people to work with.
And work, research, is much better this way.

(38:21):
And in terms of medicine, it's really to do things that spark joy, because we
don't have too much time in the world.
I really love that.
That challenging yourself, even with people with whom you disagree,
being a core principle, because I think that fewer and fewer of us
are able to maintain the cognitive dissonance that it takes to do that.
And more and more of us retreat into echo chambers.

(38:42):
So, I love that.
That's a core part of you.
Yeah, me too.
There's this idea of adversarial collaboration, which we should all
engage in probably a little bit more.
So that's great to hear.
So, Judy, I have the next question.
I think I know the answer to this,
uh, although it might be completely different from what I'm thinking.
Um, and I think you gave us a hint earlier.
But here's the question.
If you weren't a physician, what job would you be doing?

(39:05):
You can dream big.
Today, maybe a veterinary doctor in the zoo.
Not in the zoo, in the National Park.
Okay, great.
That's because I would be able to go in, specifically in Maasai Mara.
I think I would love to be a radiologist
in Maasai Mara, which is one of the largest parks in Kenya, and I have

(39:26):
spent time taking care of patients in Narok, and every evening would go
for a game drive, and it's probably one of the best times of my life.
That sounds amazing.
I was going to say pilot, where you started off.
Apparently, they're going to be replaced soon.
Yes, fair enough.

(39:46):
Can't land on the Hudson.
Can't land on the Hudson.
No, but, uh, I feel that maybe then it was this thing of
just this curiosity of just flying.
But maybe what I wanted to do was travel, and I've
gotten to do that quite a bit.
So, I don't feel like I'm missing out on that part of my dream.
Although I did plan to get my private pilot license a few years ago.

(40:10):
Nice.
I wonder if this question is now answered.
I don't think it is.
But if you could instantly acquire any new skill, what would it be?
Um, learning piano.
So, I have been, and I'm not so sure about instant.
But it turns out that it's actually pretty interesting.
And the reason why I'm doing it, first of all, it's a gift

(40:30):
from my spouse for my birthday.
And so I've been doing that.
But it's very mathematical, like very
logical, and so I've enjoyed doing that, and I think the reality is this concept of
deliberate practice that I take from it, and I've always said that to my trainees,

(40:51):
especially in the clinical area, that these are words that I steal from my
friend: perfect practice makes perfect.
So, it's not just practice, it's being intentional.
And so that's been just something new, and it's also just something
that I do for me, and I have to make time for it despite the busiest week.

(41:13):
And so that's also been really nice because it's building
some other character in me.
That's awesome.
I've always heard the maxim, practice makes permanent.
And so, yeah, I love the idea of it.
Yeah.
But you know, in medicine, you can do bad things, have bad
outcomes, and think you did well.
So, they're permanent.
That's true.
Yeah.

(41:34):
Continuing with the music theme, what is your favorite music album of all time?
Of all time?
I cannot stay with all time.
My music changes across times and across weeks. I do really
enjoy African music, um, because I feel like there are these giants
who sing a lot about some of the struggles.

(41:57):
So, it's not just music for music.
Um, right, I would say that the person I wish I'd seen alive
would be this lady called Cesária Évora, who sings
this music which moves everyone in the crowd, and she would be
singing barefoot and barely moving.
And I wish I'd had a chance to see her in a live performance.

(42:21):
Today I enjoy a lot of Swahili music.
I don't get to speak Swahili as much.
And so, anyone from East Africa with music in a genre called
Bongo, I listen to them a lot.
And that's what I jam to most of the time now when I'm driving to work.
Awesome.
Awesome.
Yeah.
Um, so if you could have dinner with one person, alive or dead, who would it be?

(42:42):
Ooh.
It would be Shonda Rhimes.
I know that it's never going to happen, but that would be the person.
And the reason is just looking at her work, making Grey's Anatomy;
this is the producer who's done all these shows.
As a Black woman, I feel like she breaks
barriers, and she brings gay people.

(43:04):
She brings Black doctors.
She brings, you know, they're not just nurses.
And in this world where what we see really reinforces what we think
about stereotypes, I think she's really broken the barrier for so many people.
And I don't even know what I would ask her, but I think
she'd be a pretty cool person to have lunch with.

(43:25):
That's an awesome answer.
It's never going to happen, but...
Well, if she's listening, you know, maybe we can...
I don't know that she's listening to the New England Journal of Medicine.
I've heard, yeah, I've heard AI Grand Rounds is her go-to podcast.
Yeah.
All right.
If you could eat only one food for the rest of your life, what would it be?
Chapati.
Oh, nice.

(43:45):
Yeah, very nice.
Not the Indian chapati, the Kenyan chapati.
Very nice.
All right.
So, uh, one more question.
Nice.
One more question for you here, Judy.
Um, what is the most interesting thing you've either read or watched recently?
Oh, it's definitely reading.

(44:10):
And maybe I should do the last-year theme.
Let me see, I would say that the book that I gave people last
year was, um, clearly it was not as notable if I cannot pull up the
title, and it's The Psychology of Money.

(44:39):
Who wrote that?
Morgan Housel.
What's it about?
So, it was a great book, and it talks about, I didn't really know that
this person was like a financial planner till the end of the book.
And so, it talks about that money, your relationship with money, is dependent on

(45:02):
the relationship that you have around you.
For example, if you were born during a depression or during inflation, you tend
to be very cautious.
Or let's say, when you were growing up, your family or someone you knew
had everything taken by the bank; you're very cautious about
how you deal with money.
So, it's this behavior and psychology of money.
And so, to think about how to invest your money, you have to think about the

(45:28):
emotions that are around your behavior.
For example, if you talk to many of the financial planners, they
would say never sit on cash in the bank.
But it turns out that all of us will experience a bad
thing in our lives, or someone close to us will experience that.
And so, if you invest faithfully and you don't interrupt compounding, it

(45:53):
means that you minimize, sort of, your knee-jerk reactions.
So, what blew my mind was that he paid for his house wholly in cash.
None of us would, especially now that the interest rates are pretty high;
most of us would get into a mortgage.
But it turns out that what let him sleep at night is knowing that

(46:15):
his family would not be homeless.
And so, this was the biggest thing: once that was taken care of, he never
wanted to pay anything per month; it gave him peace of mind.
And so, it's this thing that your relationship with money really determines
how you and the people close to you deal with it.
And once you understand that, then the second thing is that you don't try

(46:38):
to beat the market and get on the hype.
Then you realize what enough is for you.
And once you realize that, it gives you absolute peace of mind,
and you can spend the money in ways that are more meaningful.
And so, I thought that that was such a great
lesson to learn.
And I really enjoyed the thought process of thinking about money from a
behavioral and emotional point of view.

(47:00):
Yeah, that's great.
I think I'm just gonna move on quickly and not dwell too long on
what my crypto, uh, currency purchase, uh, says about me psychologically.
So, congrats on surviving the lightning round, Judy.
There we go.
Thanks.
Thanks.
Alright, so I think we want to pull back a little bit and ask you some
big picture questions here at the end.
So, you know, we've talked about it a little bit so far, but the thing

(47:22):
that's currently sucking up all of the oxygen in AI are large language models.
I think that a lot of diagnostic medicine is trying to reckon with what the
implications for their field will be.
It's perhaps less obvious to me what LLMs specifically
will portend for radiology.
So, I'd love to hear your thoughts on how LLMs will or will not impact radiology.

(47:44):
I don't think directly. For sure, there are some tasks around processing the
radiology report that are important.
And I think we're going to really understand what the potential is once we
get the image part of the model, right?
So, the radiology report is written for the doctor.
It's not really written for the patient, right?

(48:05):
You're the doctor's doctor.
And so, in this case, let's say my report is so wordy, maybe the ER doctor
just says, it's the appendicitis. And we could hypothetically say that maybe
these models are going to be so good that they don't need your reports,
they just ask the clinical question that they want, assuming that they know
the question that they want to ask.

(48:26):
I think right now, as radiologists, my biggest area is around education, right?
It's still the same.
There's no radiology education.
All our medical students are trained in the same way.
And they're learning in this era of large language models.
And if you think about how quickly the curriculum changes,
or doesn't change, we need to really train our workforce to be able

(48:49):
to work with these technologies.
And I think we are failing in that area.
And the second area,
to me, is around the interface of reports and patients.
And so, again, not directly to the radiologist, but in this area
where you can get access to your signed report immediately, even
before your referring doctor gets it. It's not written for you,

(49:11):
but you can copy that report into a large language model and you can
start to interact with the report.
And so, it's the downstream uses of our outputs, I think, that are the most immediate
thing that is going to happen as of today, before the "I'm working together
with a chatbot" or something like that, more than in any other medicine specialty.

(49:32):
Yeah.
That's so interesting.
And again, I think it speaks to your experience as an actual
physician that you're able to have that kind of insight.
I think of an example from my own family where a family member
sent (my wife's a physician) a mammography report that she got.
And the family member had very little ability to parse that themselves
and was asking my wife questions about what does this mean.
But that type of translation between a note

(49:55):
written for a doctor into plain English seems like a dead obvious
thing that would have high yield.
Yeah, and actually my students have worked on this.
It's pretty interesting.
And even in medicine today, even the radiology report, we take
care of diverse patients, right?
They should access their reports in the language that they understand most.

(50:16):
It could be English, but it could be a certain kind of English.
It could be a dialect.
It could be anything.
So, talking to the patient in their own voice, I think, is, you know, these
large language models are so good at changing voice and so good at pretending
or emulating a certain style that that's another really good example.
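As a sketch of the idea being discussed, here is what the prompt-construction step for a patient-facing rewrite might look like. The function name, wording, and reading-level target are all assumptions for illustration (not anything the guest's students actually built), and the model call itself is deliberately left out:

```python
def plain_language_prompt(report_text: str, language: str = "English") -> list[dict]:
    """Build a chat-style message list asking an LLM to rewrite a
    radiology report for the patient. Purely illustrative."""
    system = (
        "You rewrite radiology reports in plain, non-alarming language "
        f"for the patient, in {language}, at roughly an 8th-grade reading "
        "level. Do not add findings that are not in the report."
    )
    user = f"Rewrite this report for the patient:\n\n{report_text}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = plain_language_prompt("IMPRESSION: No acute cardiopulmonary process.")
# These messages would then be sent to a chat-completion API of your choice.
```

The `language` parameter is the point Judy raises: the same report can be re-voiced into a different language, dialect, or register just by changing the instruction, which is exactly the style-emulation strength of these models.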
Judy, do you think within five years, discharge notes will primarily be

(50:37):
written by large language models?
Probably, but the doctor will still have to edit them,
you know; they're just gonna be wordy, wordy, wordy.
I do know that almost 20 hospitals, I mean, it's
in the tens for sure, are already using, you know, the products
now being integrated with Epic.

(50:58):
My concern about this, really, in understanding the true ability, is also
some of the changes that have happened in the interfaces that you have.
So, it's still different when you
put your query directly to the API versus through the product, let's
say, for example, a premium ChatGPT.

(51:20):
And we've seen work that has also shown that the type
of response varies, right?
So, I believe that
what we're struggling with is this believability and the flowery
language that is wrapped around these large language models. To really
understand, like, the discharge summary does not need to be flowery.
It just needs to be factual, you know, it just needs to be factual.

(51:43):
It's just busy work, but it just needs to be factual.
So, how can you, as a researcher, evaluate the quality of something like that
if it's wrapped in, oh, I'm so sorry? Now, ChatGPT cannot say I'm so sorry.
That's the human intervention that we put in between the API and what we see.
All right, our next question.

(52:03):
Extrapolating from current AI and medicine trends, what worries you the most?
I'm excited about these technologies.
And I think my concern is for patients who are historically
underserved, and now we are bringing technology into the
doctor's rooms without even putting in transparency and understanding.

(52:28):
Maybe it's going to be better, but there are papers, even just
focus groups, that have shown that people are modifying their
behaviors when they go to doctors.
I have been labeled crazy when I've been a patient, and it's because of
these perceptions in the hospital.
You know, this was one disagreement when I was requiring OB services.

(52:50):
And I can see now we are bringing technology in, and so it's about whose
voice is going to be the loudest.
So, I know that
in my own family and friends, I tell them the key words to say so that
they trigger action, right?
When you're not heard, or even my students, I say, yeah, this is the
worst headache of my life.
It really has meaning, those types of things that trigger,

(53:12):
especially when you cannot be heard.
And then the second area is that I feel these changes
are traditionally going to be sold to administrators.
They'll be told, oh yeah, we can do your billing now, we can do all
these things, you can get more money.
But they're not really truly impacting patient care or making a difference.

(53:34):
And so, we will end up with these technologies that we are now bringing in.
And I agree, I personally, by the way, dictate my report.
It doesn't rewrite it, it just transcribes, and then I decide what that is.
But these next steps, and
just the LLM influencers around, I think are going to cause more harm in
an area that is very difficult to study.

(53:58):
Hmm.
You may have partially answered our next question, but let me
just try and ask it directly.
Given the sort of dual use cases for AI we've talked about,
where maybe it can explain some pain disparities in some cases,
what do you think about, um, machine learning and AI and its potential to
exacerbate health care disparities?
Do you think that that's going to happen, or will it reduce them?

(54:21):
So, I think that can happen, and it's not just in health care.
I mean, we see it in other society things, right?
Deciding who gets Amazon Prime, deciding who gets hired, deciding policing.
So, we do see that, and it's this assumption that technology is neutral.
We never know how it's going to be deployed downstream.
Now, as someone who's seen, like, police brutality, I wish that it was a machine, a

(54:46):
robot, that stopped you at traffic stops.
I think that fewer Black men would die
that way, in my opinion, or maybe that we never needed to
stop anyone to give a ticket.
Yesterday I was in, uh, Costco, and after my receipt was printed
out, there's another receipt that came out, and the second receipt, I
didn't understand; it wasn't for me.

(55:08):
And, you know, I was puzzled, and the teller told me, oh no, this receipt
just tracks how fast I am at counting, you know, like at the counter.
And I thought, why would you ever build such a technology?
And I said, okay, what are the consequences if you're slow?
Then their renewal to be a teller, right, is dependent on the speed.

(55:30):
What if I wanted to ask about something that I didn't find in the store?
Just think about this simple thing and how it's deployed.
You know, so we have these ideas that always sound good, but we never frame
them in the societies where we work.
And so that's why AI, not just in health care, can exacerbate disparities.

(55:53):
It's terrifying to think about the analog of that second Costco
receipt in the context of health care too.
Uh, it's terrifying but not impossible for me to imagine.
Alright Judy, this is our final question.
That's what you said like 30 minutes ago, man.
Yeah, the last one for real, I promise.

(56:14):
So, other than Indian chapatis not being the best type of chapati, what
is your most controversial opinion?
I don't know.
I don't know, because maybe I believe in it too much.
I think that maybe the thing that my friends don't like me saying is,
like, do things that spark joy, because they say that comes from privilege,
to have those choices.
But this is something that I truly believe in.
And that means that I can, what I call, cancel my order; like, I can just
walk away from something really easily.
And even when I should

(56:56):
pay a little more attention to it, but thankfully my spouse is there
to help me with that, and my friends.
And I think that, um, in terms of opinions of life, maybe I would say,
it's maybe my subscription to this model that, for example, some of the
global health initiatives are not helpful.

(57:17):
And maybe that, I don't know, I don't have a good answer for this one.
I have many opinions, but
as I speak loudly, I feel that I can believe in those opinions.
So, all right, well, Judy, thank you so much for being on AI Grand Rounds.
This was great.

(57:38):
Awesome.
Thanks for the invitation.