Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Krishna Gade (00:06):
Good morning, or good afternoon, wherever you're joining us from.
Thank you for joining this AI Explained with Dr. Girish Nadkarni.
So Girish, thanks for joining.
Just from an audience perspective, could you give a brief intro about yourself: what you do at Mount Sinai
(00:31):
and your role in terms of AI.
Dr. Girish N. Nadkarni (00:33):
Yeah, absolutely.
So, my name is Girish Nadkarni. I'm a clinician, but also an informaticist/AI scientist.
I've been working in the field of applied AI for over a decade now.
My specific interests are a few.
One is the safe, effective, and ethical clinical implementation
(00:54):
and scaling of AI across health systems, and the best governance strategies to do that.
That's one.
The second is assurance testing of large language models, to make sure that they align with best clinical practices and recommendations, right.
(01:15):
The third is sort of the agentic experience of AI as applied to medicine, and we've talked about this question at length: especially for clinical decisions that are non-critical, need to be made fast, and are easily reversible, right?
(01:36):
So in that role, I'm the chair of the Windreich Department of AI and Human Health at Mount Sinai, and I also lead a research/clinical institute called the Hasso Plattner Institute for Digital Health at Mount Sinai.
And I have a role on the health system side of things as well, particularly around governance and clinical implementation.
(02:00):
So yeah.
And in my other life, I've started some companies.
Krishna Gade (02:07):
That's great.
Yeah, thank you so much.
It's very rare to see someone with so much cross-disciplinary experience and background, so we are very excited to learn about some of the AI applications in healthcare.
So I guess generative AI has gained a lot of traction, right?
How is genAI being used today in clinical settings?
(02:29):
Are there any specific examples that you can walk us through?
Like AI-powered diagnostics or whatnot?
Dr. Girish N. Nadkarni (02:36):
Yeah.
So, well, I think let's just talk about this, right?
Let's differentiate clinical settings, the actual patient interaction and patient-facing stuff, from all the rest of it, right?
All the rest of it is probably like 85% of the whole enterprise, right?
(02:58):
Things like billing, coding, back office tasks, registry creation, et cetera, right?
So I think there is an impetus on generative AI to push a
(03:19):
lot of the tasks that initially required human intervention or human labor, especially in the back office field.
But there is also now slowly a trickle-down effect into the clinical fields, right?
So I'll give you an example.
In the back office fields, at Sinai we are using generative AI for automating
(03:42):
a lot of back office tasks: scheduling appointments, better billing and coding, better financial management, better insurance plan enrollment.
And those things already exist, right?
But it's slowly moving into the clinical field, right?
I think where the first big push will be is removing the back office
(04:03):
tasks from the practice of medicine, right.
So let me ask: have you been to a doctor recently?
Krishna Gade (04:09):
Yeah, just like yesterday.
Dr. Girish N. Nadkarni (04:11):
Where was it?
If I may.
Krishna Gade (04:12):
Kaiser, in the Bay Area, yeah.
Dr. Girish N. Nadkarni (04:14):
How long was the visit?
Krishna Gade (04:16):
It took like an hour end-to-end, basically.
Dr. Girish N. Nadkarni (04:21):
Yeah.
But how much time did the doctor spend?
Krishna Gade (04:22):
Well, the doctor probably spent like 10, 15 minutes, right?
Dr. Girish N. Nadkarni (04:25):
In that 10, 15
minutes, how much time did the doctor
spend looking at you versus the computer?
Krishna Gade (04:30):
Maybe less than that, right?
Like five, ten minutes.
Yeah.
Dr. Girish N. Nadkarni (04:34):
Yeah.
So is it fair to say that, in those 15 minutes of an hour-long visit, which ideally should have been like 20 minutes...
Krishna Gade (04:41):
Yeah.
Dr. Girish N. Nadkarni (04:42):
...the physician spent...
Krishna Gade (04:45):
Half of the time.
Yeah.
Dr. Girish N. Nadkarni (04:46):
...five minutes looking at you?
Krishna Gade (04:48):
Right, absolutely.
Yeah.
Dr. Girish N. Nadkarni (04:49):
So I mean, that's where ambient AI and generative AI come in.
Right?
If you think about it: you and I are talking now, right?
We could have a conversation, and ambient AI (and there's now a bunch of companies: Abridge, Suki, DAX) could listen to the conversation on commoditized hardware and then generate a
(05:14):
billable note for the physician.
So doctors get to be doctors, talking to the patient and making decisions, rather than being data entry clerks, right?
So I think that's one area where there's an intersection between back office tasks and clinical medicine, and I think that's gonna see broad
(05:36):
adoption, because physicians are also tired of typing, right?
Yeah.
And I think even if there is not an increase in productivity, there's gonna be an increase in satisfaction amongst physicians, because they no longer have the drudgery of
(06:04):
still listening to you and then also typing into a note, right?
So I would argue that some of their cognitive load...
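[Editor's note: a minimal sketch of the ambient-scribe flow described above, transcript in, draft note out, with the physician signing off. The function names and prompt are illustrative placeholders, not any vendor's actual API.]

```python
# Sketch of the ambient-AI scribe pattern. `speech_to_text` and `call_llm`
# are hypothetical stand-ins for whatever transcription and LLM services
# (e.g., from Abridge, Suki, or DAX) a real system would use.

NOTE_PROMPT = """You are a clinical scribe. From the visit transcript below,
draft a SOAP-format note suitable for physician review and billing.
Do not invent findings that are not in the transcript.

Transcript:
{transcript}
"""

def speech_to_text(audio: bytes) -> str:
    """Placeholder: send exam-room audio to a speech-to-text service."""
    raise NotImplementedError

def call_llm(prompt: str) -> str:
    """Placeholder: call a language model and return its text output."""
    raise NotImplementedError

def draft_visit_note(audio: bytes) -> str:
    """Turn an ambient recording into a draft note for physician sign-off."""
    transcript = speech_to_text(audio)
    # The draft is never filed directly: the physician reviews, edits, and
    # signs it, which keeps a human in the loop.
    return call_llm(NOTE_PROMPT.format(transcript=transcript))
```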
Krishna Gade (06:12):
And there are also more important, serious use cases, right?
Like risk prediction, sepsis detection.
How is AI being used in those cases?
Dr. Girish N. Nadkarni (06:21):
I think that's a whole other area: predictive AI, right?
Which has been around longer, in my opinion, than generative AI.
So we have more experience with it, and that is where I think we need to think a little more deeply about the risks surrounding it.
Right?
So yeah, we deploy a lot of AI into clinical care, but I think that
(06:43):
the decision to not deploy something is equally important as the decision to deploy it, right?
What do I mean by that?
If I am a physician and I make a mistake, just because of bandwidth, the number of patients I affect is gonna be limited to like a patient, or five patients at the most, right.
If the algorithm is inaccurate,
(07:03):
then it scales to like tens of thousands of patients.
Right?
I think that is where you need assurance, you need monitoring, you need privacy, and you need security.
And really, you need one level above that, which is sort of randomized
(07:25):
testing, or some sort of AI assurance, where you put things through like an A/B test, obviously with the highest levels of patient security, et cetera.
And then you put that into clinical care.
Now, that being said, at Mount Sinai we've deployed several of these predictive algorithms, for things like worsening of kidney function, prediction of malnutrition, prediction of falls, et cetera.
(07:48):
Now, a lot of them are monitored continuously to account for drift, et cetera.
And we wanna make sure that anything that impacts clinical care as we know it, patient facing or not, is safe, effective, and ethical, right?
(08:09):
And we can have a longer conversation about that.
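[Editor's note: the conversation mentions continuous monitoring for drift but not the mechanism. One common approach, offered here only as a sketch and not necessarily what Mount Sinai uses, is the population stability index over the model's score distribution.]

```python
import numpy as np

def population_stability_index(expected: np.ndarray,
                               observed: np.ndarray,
                               bins: int = 10) -> float:
    """Compare a model's training-time score distribution (`expected`)
    with recent production scores (`observed`). A common rule of thumb:
    PSI > 0.2 suggests drift worth investigating."""
    edges = np.unique(np.quantile(expected, np.linspace(0, 1, bins + 1)))
    edges[0], edges[-1] = -np.inf, np.inf
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    o_frac = np.histogram(observed, edges)[0] / len(observed)
    # Clip to avoid log(0) in sparse bins.
    e_frac = np.clip(e_frac, 1e-6, None)
    o_frac = np.clip(o_frac, 1e-6, None)
    return float(np.sum((o_frac - e_frac) * np.log(o_frac / e_frac)))

# Hypothetical usage: alert the team if last month's scores have drifted.
# if population_stability_index(train_scores, recent_scores) > 0.2: escalate.
```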
Krishna Gade (08:11):
So of course there's classical ML, as you said, the predictive AI that's been around for a long time.
And now there's generative AI, where you talked about some really interesting use cases like the ambient experience.
How are you seeing the interplay of these two things?
How are they complementing each other in healthcare applications?
Any interesting insights there?
Dr. Girish N. Nadkarni (08:30):
There are two major insights, right.
The first is that generative AI can actually also be used as predictive AI, in the sense of few-shot learning, or small amounts of fine-tuning, that can actually improve prediction.
(08:50):
Right.
And it can be used as predictive AI especially in certain tasks where there's not much tabular data.
By tabular data, I mean labs or codes, right?
And where a lot of the predictive signal is incorporated in notes.
Examples would be mental health disorders, right?
(09:11):
Another example would be the initial presentation of a patient to the emergency room, because you don't have a lot of tabular information, and then...
Krishna Gade (09:20):
Right, right.
Dr. Girish N. Nadkarni (09:21):
Mostly notes.
Right?
Like what happened to you?
Krishna Gade (09:23):
Right.
Correct.
Dr. Girish N. Nadkarni (09:24):
Right.
So that's one thing, where generative AI is actually used as predictive AI.
And then, at the risk of tooting my own horn, we actually have a paper on this in JAMIA where we actually show the accuracy, like,
(09:44):
we did it for the emergency room, right?
So the accuracy for generative AI, when you're just feeding it like 10 examples of emergency room cases, can approach predictive AI.
I mean, it's still lower, but I'm assuming if we fed in more than 10 examples it would be better, because it scales.
(10:06):
And it can actually give you reasoning about why it's making the prediction, which traditional ML cannot.
Like traditional ML, if you think about...
Krishna Gade (10:15):
You have to retrofit explainability and whatnot, but in this case, LLMs can potentially give a reason outright.
Yeah.
Yeah.
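[Editor's note: a minimal sketch of the few-shot "generative AI as predictive AI" setup described above. The cases, labels, and `call_llm` helper are illustrative assumptions, not the actual JAMIA study protocol.]

```python
# Few-shot prompting an LLM to act as a predictor, loosely mirroring the
# ~10-example emergency-room setup mentioned above. `call_llm` is a
# placeholder; the examples below are invented for illustration.

HEADER = ("You triage emergency-room presentations. Given a note, predict "
          "ADMIT or DISCHARGE, then briefly explain your reasoning.\n\n")

def build_prompt(examples: list[tuple[str, str]], new_note: str) -> str:
    """Assemble a few-shot classification prompt from labeled cases."""
    shots = "".join(f"Note: {n}\nLabel: {l}\n\n" for n, l in examples)
    return HEADER + shots + f"Note: {new_note}\nLabel:"

examples = [
    ("72M, chest pain radiating to left arm, diaphoretic", "ADMIT"),
    ("24F, ankle sprain after a fall, vitals normal", "DISCHARGE"),
    # ...roughly 10 labeled cases in the setup described above...
]

# prediction = call_llm(build_prompt(examples, "65F, syncope, HR 38"))
# Unlike a tree ensemble, the model can also return its reasoning.
```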
Dr. Girish N. Nadkarni (10:21):
That's one.
The second thing is when you can make things work in concert, right?
And that's when you can think about potential agents in medicine, right?
So you have a predictive approach that... I'm just gonna give you a clinical example, right?
Krishna Gade (10:36):
Sure.
Dr. Girish N. Nadkarni (10:37):
In the hospital, who are the 10% of people most at risk of a fall?
I'm just giving you an example, right?
So you can have a standard, say, random forest or some other tree-based algorithm do that, and it can say it has an accuracy of like 95% and a
(11:00):
positive predictive value of like 70%.
Said differently, of 10 flagged patients, about seven will truly be at high risk.
So instead of having a physician or a nurse go and examine those 10 patients, you automatically put an alert in their chart, using generative AI to write a note that this patient has been
(11:22):
flagged as high risk of a fall, right?
Yeah.
You know, next to the patient: be careful, or something, right?
So that's an example of where predictive and generative AI can operate almost in concert.
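[Editor's note: a sketch of the predictive-plus-generative pipeline just described: a tree-based model flags the top 10% fall-risk inpatients and an LLM drafts the chart alert. The threshold, feature handling, and `call_llm` are illustrative assumptions.]

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def flag_fall_risk(model: RandomForestClassifier,
                   features: np.ndarray,
                   top_fraction: float = 0.10) -> np.ndarray:
    """Return indices of the top 10% of inpatients by predicted fall risk.
    With a positive predictive value of ~70%, roughly 7 of 10 flagged
    patients would truly be at high risk."""
    risk = model.predict_proba(features)[:, 1]
    cutoff = np.quantile(risk, 1 - top_fraction)
    return np.where(risk >= cutoff)[0]

ALERT_PROMPT = ("Write a one-sentence nursing chart alert: patient {pid} was "
                "flagged by a fall-risk model. Recommend standard fall "
                "precautions in plain language.")

# Hypothetical usage, with `call_llm` and `post_to_chart` as placeholders:
# for i in flag_fall_risk(model, todays_features):
#     post_to_chart(patient_ids[i], call_llm(ALERT_PROMPT.format(pid=patient_ids[i])))
```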
Krishna Gade (11:32):
You can take the predictive outputs and summarize them for doctors and nurses.
That's amazing.
Well, one of the biggest concerns with genAI is the whole concept of hallucinations, or what they call faithfulness or groundedness, right?
How do we ensure these AI outputs are accurate and clinically reliable?
Dr. Girish N. Nadkarni (11:55):
So, I
mean, you know, you are much more
technical than me, Krishna, right?
You tell me whether thehallucinations can ever go away.
Krishna Gade (12:01):
I mean, I think models can never be perfect, and that's what we've learned even through classical ML.
So I think there will always be issues with it.
I think it's more like a creative feature of the way LLMs work.
Right.
So I don't think they will ever go away.
But I think there are more and more guardrails and more and more
(12:21):
safety checks people are putting in to ensure that's not happening.
Dr. Girish N. Nadkarni (12:25):
Exactly.
So, okay, a few things, right?
So we agree that they will never completely go away, right?
Because it's almost a feature, not a bug.
Krishna Gade (12:35):
Correct.
Dr. Girish N. Nadkarni (12:36):
It's
a feature of the creativity.
Krishna Gade (12:37):
Correct.
Dr. Girish N. Nadkarni (12:39):
So you can put guardrails around it, in the form of, like, do not hallucinate about certain things, right?
You can improve hallucinations with RAG: basically, RAG it against a vector database of institutional policies or medical knowledge, or what have you.
(13:02):
Now, we have some work coming around that, which I'd be happy to share at some point, where even if you do all of these things, you RAG it, and these are actually real-world cases, right?
The experiment was this: you take a thousand real-world cases and you just fabricate a disease name or fabricate a lab value.
(13:22):
And then you watch how the model responds with further hallucination based upon bad data, which just happens, right?
Data corruption, et cetera.
So even if you RAG it, the hallucination rate goes down, but it still plateaus at a certain floor; it never reaches zero.
Right.
So that is why I think figuring out a risk-based approach of what to automate
(13:45):
and what needs oversight is gonna be critical.
Because, you know,
if you hallucinate for a non-critical decision, right?
A patient needs to go from bed A to bed B; if you realize it's wrong after two hours, it doesn't really matter.
Right?
But if you hallucinate that the patient needs surgery, that's a big deal, right?
(14:07):
That's why I think a risk-based approach, focusing on these three parameters of how critical the decision is, how reversible it is, and how fast it needs to be made, actually helps with AI governance, right?
So I think, if we agree that hallucinations are never gonna go away, but we also agree that the benefit of using generative
(14:30):
AI in healthcare is important, then we need to put in both technical guardrails and governance guardrails.
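[Editor's note: a minimal sketch of the RAG guardrail discussed above: ground answers in a vetted store of institutional policies or medical references. `embed` and `call_llm` are placeholders for a real embedding model and LLM, and a production system would use a proper vector database. As noted in the conversation, this lowers the hallucination rate but does not eliminate it.]

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: map text to an embedding vector."""
    raise NotImplementedError

def call_llm(prompt: str) -> str:
    """Placeholder: call a language model."""
    raise NotImplementedError

def retrieve(query: str, docs: list[str], k: int = 3) -> list[str]:
    """Rank vetted documents by cosine similarity to the query."""
    q = embed(query)
    def cos(d: str) -> float:
        v = embed(d)
        return float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9))
    return sorted(docs, key=cos, reverse=True)[:k]

GROUNDED_PROMPT = """Answer using ONLY the passages below. If they do not
contain the answer, say you do not know.

Passages:
{passages}

Question: {question}
"""

def grounded_answer(question: str, policy_docs: list[str]) -> str:
    passages = "\n---\n".join(retrieve(question, policy_docs))
    return call_llm(GROUNDED_PROMPT.format(passages=passages, question=question))
```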
Krishna Gade (14:37):
Correct.
Yeah.
So you touched upon this whole thing about continuous risk evaluation, in some ways; essentially, you're scoring these genAI apps.
And then you touched upon the governance aspect.
Could you shed some light on the governance processes that happen around AI, for example in your organization, or in general in the healthcare space?
Dr. Girish N. Nadkarni (14:57):
Yeah, absolutely.
So, governance in AI is an evolving field right now, I'll say that.
I don't think I have all of the answers, but I have thoughts and some principles, right?
So the first thing: when you're setting governance policies, you have to make sure that you have a diverse range of perspectives
(15:18):
on board, and in healthcare that means people who provide the care, like nurses, physicians, MAs; people who enable the care to be provided, which is the back office; and people to whom the care is provided, basically the patients.
So you need all three of those perspectives on board, because they have
(15:38):
different perspectives on everything.
And then you can have almost a review panel, where you need a majority vote for anything to pass through.
But even before things hit that review panel, there need to be some decision points or rules, right?
The first easy thing is: is it back office, or is it patient facing?
If it's back office, then I would argue, yeah, you need some rigor, but you
(16:00):
need a lower level of rigor than if it actually impacts clinical care.
Right?
And for things that impact clinical care: you need to figure out, A, is it safe, which requires a lot of validation testing and assurance testing; B, is it effective, which ideally requires some sort of A/B testing; and C, is it ethical, which basically requires monitoring for bias.
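[Editor's note: the decision points described above, back office versus patient facing, then safe/effective/ethical, lend themselves to a simple triage rule before anything reaches the review panel. The tiers below are an illustrative sketch, not Mount Sinai's actual policy.]

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    patient_facing: bool
    critical: bool        # could the decision cause serious harm?
    reversible: bool      # can the decision be easily undone?
    time_sensitive: bool  # must the decision be made fast?

def review_tier(u: UseCase) -> str:
    """Route a proposed AI use case to a review tier (illustrative rules)."""
    if not u.patient_facing:
        return "light review: validation plus periodic audit"
    if u.critical and not u.reversible:
        return ("full review: assurance testing, A/B evaluation, "
                "bias monitoring, clinician sign-off")
    if u.time_sensitive and u.reversible and not u.critical:
        return "streamlined review: candidate for automation with monitoring"
    return "standard review: validation, effectiveness check, bias monitoring"

print(review_tier(UseCase("bed transfer suggestion", True, False, True, True)))
```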
Krishna Gade (16:23):
Yeah, absolutely.
And so what should the best practices be, then?
Let's say lots of healthcare companies (we are working with a few) are trying to integrate genAI into their workflows.
Now, how should they go about doing this in a responsible manner?
What are some of the best practices you recommend?
Dr. Girish N. Nadkarni (16:40):
Again, to be honest, these are my recommendations, right?
I'll just say they're not my institution's recommendations, because I'm one part of the whole process, right?
So I think the first best practice is clarity of purpose, right?
What problem are you actually trying to solve?
I mean, there is a wish right now to appear cool, to appear like
(17:06):
we are doing stuff around AI.
And that sounds fine, but you shouldn't be implementing AI without a clear organizational strategy, or a clear clinical or operational challenge you're solving, right?
Like prioritization in emergency rooms, clinical decision support, patient flow optimization, where generative AI has been shown to concretely add value, right?
(17:30):
The second thing is a risk-based approach, like we just talked about, right?
Based upon the rubrics of reversibility, criticality, and how fast you have to make the decision: what's the level of oversight that you need, right?
Like, if it's higher criticality
(17:55):
but also lower reversibility, then you need physicians, maybe two physicians, to look at it, right?
So you need clear oversight through well-defined escalation protocols.
And the third thing...
Krishna Gade (18:05):
So reversibility of the AI decision, or reversibility of the human interpretation of that decision?
Which is it?
Dr. Girish N. Nadkarni (18:10):
Reversibility of the decisions themselves.
Krishna Gade (18:12):
Reversibility of the decisions themselves.
Okay, okay.
Dr. Girish N. Nadkarni (18:14):
Like, is this a door that you cannot go back through, right?
For example, if you do a wrong surgery on a patient, you cannot reverse that, but if you send a patient from an emergency room to the floor, you can easily reverse that.
I mean, there are a bunch of examples around that.
And then the third thing is training.
(18:34):
Right now, it's not a new field, but it's also a field that most people who make these decisions are not accustomed to, right?
So I think there needs to be large-scale training across health systems, training tailored to what your job is right now, but also how your
(18:58):
job could evolve based upon this, right?
And that training is gonna be different for providers: different for physicians, different for nurses, different for other people.
Right.
And then, finally, build some sort of a feedback mechanism.
And that's a bigger conversation that we can have, right?
Yeah.
(19:18):
So the feedback mechanism can involve, again on a risk-based basis, things from clinical trials to long-term monitoring; everything should be monitored long term.
Basically, continually measure outcomes, refine the models, get human feedback, and ensure that any errors are identified and corrected.
Krishna Gade (19:38):
Right, right.
So this comes back to assessing the risks upfront, deciding which use cases you want to apply it to, then evaluating it and properly setting up continuous oversight and monitoring to make sure it works.
That's great.
Now, switching gears a little bit: AI seems to be helping quite a bit in, you
(19:59):
know, discovering new antibiotics and assisting in drug discovery.
How do you see genAI shaping the future of, say, precision medicine and treatment innovations?
Dr. Girish N. Nadkarni (20:09):
So, that's a great question, right?
And I think it's sort of a full-stack innovation for everything.
Like, if you think about what precision medicine is: it's proactive rather than reactive, it's personalized rather than population-based, and it's predictive rather than reductive, right?
(20:30):
So take those three things, right?
You can know which patients are going to get sick, or which patients are gonna have an incidence of disease, based upon a combination of several data points, starting off with their biological data, meaning genome, exposome, proteome, et cetera,
(20:53):
but also their clinical data and their environmental data, right?
So that's the predictive part of it.
And then you proactively try to prevent that disease from happening by personalizing therapy, right?
It's not a far stretch to think about two things that are happening concurrently.
(21:16):
Right.
One is the increasing personalization in biotech, with mRNA and gene therapies and even gene editing.
But also this AI-driven development of faster approaches to do these things, right?
So in the future, and again, it's not just possible, it's definitely possible,
(21:36):
to have like a Coke machine, right?
Basically, you enter a person's information and you press whether you want regular Coke or Diet Coke or vanilla Coke (which is terrible, by the way).
And you get a drug or a molecule personalized to that individual patient, right?
I mean, it sounds like science fiction, but if you think about the component parts
(21:59):
of it, it's not, right?
You need accurate prediction that this person's gonna get sick, because you don't wanna give medications to someone who doesn't need them.
But then you can have drugs tailored to particular genes, drugs that decrease your risk of cardiovascular disease, et cetera.
(22:23):
And then you need a delivery mechanism, and a lot of these delivery mechanisms are becoming oral, right?
So you can have an mRNA medication, in an oral form, that's specifically made for you.
You tell me this is science fiction.
It's not, because personalized medicines have already been created for rare diseases, drugs literally named after the patient,
(22:44):
and these are severe diseases, right?
For example, a girl had spinal muscular atrophy due to a mutation in a gene.
A drug was created specifically for her, and she does well now.
Right.
I'm just telling you that this is possible to scale, right?
With AI, because you can screen lots of combinations quickly, and you can rapidly generate and iterate novel molecular designs for a particular patient.
Krishna Gade (23:08):
That's amazing.
So what you're saying is completely personalized medicine for some rare diseases, based on your DNA, your sort of molecular structure, personalized to you.
Dr. Girish N. Nadkarni (23:19):
Not just rare; I would argue common diseases as well.
Krishna Gade (23:21):
Common diseases as well, and personalized to you.
Dr. Girish N. Nadkarni (23:23):
Trying to prevent diseases.
Right.
Personalized medications for preventing diseases.
Krishna Gade (23:27):
Preventing diseases, right.
Dr. Girish N. Nadkarni (23:29):
Yeah.
Like, right now, the healthcare system in the US is not really healthcare, right?
It's sick care, because it waits for you to get sick and then takes care of you.
But if you had a way of predicting, with perfect accuracy, or not perfect but reasonable accuracy, and then you had a personalized way of
(23:49):
preventing it, wouldn't that be cool?
Krishna Gade (23:52):
Yeah, that would be amazing, actually.
You know, I studied a bit of bioinformatics in grad school, so this is all very interesting.
We studied protein structure prediction and all that.
It seems like genAI can automate a lot of these things and actually make them a lot better.
Dr. Girish N. Nadkarni (24:04):
You know, you can say that's science fiction, but I anticipate the Coke machine (and again, the Coke machine analogy is not mine, it's my partner Alex Charney's) coming sooner rather than later, because it's gonna happen.
And it might start off with rare genetic diseases, but it might
(24:26):
end up with common complex diseases.
Krishna Gade (24:29):
That's amazing.
And so, when you think about genAI across healthcare, what are some of, let's say, three promising use cases where you would see massive traction happening in the next 12 to 18 months?
Dr. Girish N. Nadkarni (24:47):
So, three things, and these are not specifically clinical, right?
The three areas where massive traction is happening: the first is clinical summarization and documentation.
This includes the ambient AI areas we just talked about, but it also includes mundane things like
(25:09):
patient registry creation, and extraction of data from clinical records for sending to large quality-improvement initiatives across the country, right?
The second thing, I think, is diagnostics, and not just imaging, right?
Things like novel bio-prognostics for predicting disease, right?
(25:33):
AI for detecting things on radiology scans, right?
Those are the things that I think will come out fast and furious: better ways of diagnosing disease, with the eventual goal of preventing it.
And the third thing is a little out of left field: I think huge gains in patient engagement and education and empowerment, right?
(25:54):
I'll give you a simple example, so simple that we should have thought of it years back, right?
It's from Brown University.
Anytime you go to a doctor, you have to sign a form, a consent form.
Right?
Like, I'm fine with this.
Okay.
And those consent forms are full of jargon.
I mean, I couldn't understand them as a trained physician.
Imagine how people with lower health literacy fare, right?
(26:18):
So Brown did a great thing, right?
They asked: can you take this consent form and translate it, without losing any of the information, into a sixth-grade reading level?
And it happened; they did it across the health system, and patients love it, because now they can actually understand stuff.
Right?
Yeah.
There's a huge opportunity in making complex medical concepts
(26:39):
clearer, right?
Yeah.
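[Editor's note: a sketch of the consent-form simplification just described. The prompt wording is illustrative and `call_llm` is a placeholder; the transcript does not specify how Brown University implemented it.]

```python
SIMPLIFY_PROMPT = """Rewrite the consent form below at a sixth-grade
reading level. Preserve every obligation, risk, and right exactly;
do not drop or add information. Keep the section structure.

Consent form:
{form_text}
"""

def call_llm(prompt: str) -> str:
    """Placeholder: call the institution's language model service."""
    raise NotImplementedError

def simplify_consent(form_text: str) -> str:
    draft = call_llm(SIMPLIFY_PROMPT.format(form_text=form_text))
    # In practice, legal and clinical staff would verify that nothing was
    # lost or altered before the simplified form is used with patients.
    return draft
```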
Krishna Gade (26:41):
Yeah, I think that's great.
The third thing you mentioned is actually close to my heart: the accessibility of medicine, right?
And I remember when we both met at the Responsible AI Conference in New York, you were very passionate about how AI could transform medicine across the globe, for people who don't have access to great doctors and hospitals.
(27:03):
What are your thoughts?
Can you share some of that with our audience?
Dr. Girish N. Nadkarni (27:09):
Yeah.
So let's have a conversation about that.
Medicine is sort of human expertise encoded in heuristic form, right?
You take in inputs and then you have outputs, right?
And the problem with medicine is that expertise, until now, any quote-unquote
(27:29):
And as a result of that, you couldn't scale this.
Forget about worldwide; you couldn't really scale this even to a system.
And that's the root of all access issues, right?
If today I were to go and try to find a primary care provider for a cold, I guarantee you I can't find one today.
I mean, I can find one tomorrow or the day after, right?
Because there are limited resources.
(27:50):
Right.
But if you theoretically think about the fact that it's encoded knowledge, and you can scale it across the globe, then it's a problem of scale, right?
Which we've solved before.
We, as in the tech industry, have solved this before, right?
Because it's a matter of scale, just like blitzscaling, right?
(28:12):
It's a little more dangerous because it affects patients, so there need to be guardrails around it.
Yeah.
But at the same time, I would also point out that a bunch of patients are already putting their data into ChatGPT or Google Gemini or similar, right?
So people are already doing it.
I think we just need to make it better, put more guardrails around it, and
(28:34):
link it via RAG to some sort of verifiable medical knowledge, right?
Because this could be huge, right?
For example, I'll give you a simple example.
Where I grew up in India, TB, tuberculosis, is extremely prevalent, right?
And the standard workflow was, for
(28:55):
diagnosis, you had to get like three tests, et cetera.
Now there are tests that can diagnose TB with like a hundred percent accuracy in five minutes.
That completely changes the workflow, right?
Because now you can go from presentation to, not even a doctor, right?
Someone who's sort of paramedical, right?
(29:17):
Like someone with some training, who knows how to recognize the bad stuff, accompanied by an AI agent, right?
You could get diagnosed and get the first dose of treatment all in under like 15 minutes, right?
That's a huge workflow change, right?
So I think the mass adoption of AI into clinical healthcare, over and
(29:40):
beyond what's happening in the back office tasks, is going to start off in the low- and middle-income countries, right?
But there's a danger there, because by definition, low- and middle-income countries won't necessarily have the same
(30:01):
rigor of regulation that the U.S. does.
So we should not, and cannot, take advantage of that.
Krishna Gade (30:07):
That's right.
So do you see this world of virtual agents coming?
Dr. Girish N. Nadkarni (30:12):
Absolutely.
Already happening.
Yeah, already happening.
Krishna Gade (30:13):
Already happening.
Dr. Girish N. Nadkarni (30:14):
I think it just needs to happen in a more rigorous, more reproducible, and, dare I say, more ethical fashion.
But it's already happening.
The question of whether it's happening or not is settled; it's how it should be happening.
The question is how do we make sure that it's safe and effective.
Krishna Gade (30:32):
That's right.
So there's a related audience question on that: what are the particular clinical scenarios where genAI should not be used, or is too risky?
This is kind of a counter-question.
Dr. Girish N. Nadkarni (30:43):
Well, that's a really good question.
Right.
I don't think it should be used when there's a question of capacity.
And by capacity, I mean it in a specific medical and legal sense.
Capacity basically means that, because of severe
(31:04):
mental or physical issues, you don't have the capability to make your own decisions, right?
And there it should not be used, because patients don't really have the ability to distinguish right from wrong, right?
And I don't mean to sound paternalistic, but that's the specific
(31:26):
medicolegal definition of capacity, which says that you then need two or three physicians, or an ethics board, to come in.
So I think that's one example of where it should not be used.
The second one is, well, it starts from the premise that nothing should be used unless it's safe, effective, and ethical.
Right?
Yeah.
(31:46):
But as to where it absolutely should not be used: on the pediatric and child health front, I'm a little conflicted, right?
I don't know the answer to that, but it's tricky, because of consent and related issues, right?
And that's why, even if you look at the current marketplace, in the current
(32:10):
world, there are not a lot of products being used in kids right now.
Right.
Which is an issue in and of itself, right?
And the third thing is specific protected populations, right?
(32:31):
There are definitions for those, right?
Like prisoners, and people with significant medical and/or mental health issues.
Those are tricky ethical situations.
That's a really good question; I need to think a little bit more about it.
Krishna Gade (32:48):
Yeah.
I mean there's this whole sort of, uh.
I, I don't know if it's adystopian scenario, but like, you
know, you think about like ElonMusk's, Optimus Robot, right?
Like that is, you know, running,you know, walking uphills, you
know, catching tennis balls.
Now if you think about like generativeAI systems connected to robotics now,
do you see a world where, you know, AIdriven robots could outperform human
(33:09):
surgeons in those sort of pretty high-risk situations?
Dr. Girish N. Nadkarni (33:14):
Well, so, yes.
The question is when, and I don't know the answer.
I mean, you'd get different answers if you talk to different people, right?
(33:35):
Again, yeah.
Because understanding the real world is much, much harder than understanding language, right?
That's why the hardware is there, but the world models, the LLMs, are not particularly there yet, right?
(34:00):
I think it's not going to be easy to do this.
Krishna Gade (34:08):
It's too ambiguous, and you need more...
Dr. Girish N. Nadkarni (34:10):
I don't know.
I honestly don't know the answer.
Krishna Gade (34:12):
Yeah, yeah.
Maybe in a hundred years.
Dr. Girish N. Nadkarni (34:15):
I think a bit sooner than that, but it'll require the development of robust world models, right?
Because understanding the physical world is much harder than understanding the digital world, right?
So it requires the development of robust world models.
Krishna Gade (34:31):
Right.
Right, right.
Makes sense.
So then there's something related to this whole administrative efficiency, which you talked about at length.
There's an audience question on that: there are some people who absorb information better when taking notes or writing.
If AI is taking the notes, how will we accommodate those doctor styles?
(34:53):
Do you see that being a hindrance, or is it kind of a template for everything?
Dr. Girish N. Nadkarni (34:58):
I mean, if they wanna type notes, they can do that, and then just add addendums to the end of the note, right?
But I would argue that lots of doctors want to have a conversation.
Krishna Gade (35:11):
Yeah, makes sense.
So I guess, finally: where do you see the biggest bottlenecks in scaling genAI solutions in healthcare?
Is it coming from, let's say, the reliability and accuracy, this whole risk issue we talked about?
Is it coming from the governance side of things, the process
(35:33):
side of things?
Where are the bottlenecks?
Dr. Girish N. Nadkarni (35:38):
Lots of bottlenecks, right?
I mean, there is a standing maxim that workflow eats technology for breakfast.
So, okay:
(36:02):
Do we agree that AI in its current form is a massively transformational technology?
Probably, right?
I mean, you founded a company around it, so you probably do.
But any transformational technology takes time, because the technology itself might be transformational, but society needs to transform around it, right?
(36:22):
For example, I'll give you a clear example, right?
Electricity.
Forget when it was invented; once it became easy to produce cheaply, the time it took for electricity to replace steam was still approximately 50 to 60 years, right?
That's because factories had to be reconfigured for electricity.
(36:44):
People had to be trained, infrastructure had to be laid, all of those things, and society had to evolve, right?
Krishna Gade (36:52):
Yeah.
Be comfortable with it and trust it.
Yeah.
Dr. Girish N. Nadkarni (36:56):
Now, and here I would like to hear your opinion.
Just thinking about healthcare: right now, hospitals and health systems are physically, logistically, organizationally configured in a certain way, right?
(37:16):
To reconfigure them is gonna take time, effort, and energy.
Krishna Gade (37:19):
Yeah.
Dr. Girish N. Nadkarni (37:20):
And if you reconfigure them too early, before the tipping point, it's gonna be a massive financial risk.
In this case, and you tell me what you think, right: being early is almost the same as being wrong.
Krishna Gade (37:31):
Yeah, absolutely.
I think this is why in many industries, and healthcare is obviously one industry where we're seeing it, AI is being used to drive more efficiencies, right?
Like you talked about: assessing readmission risk, getting an alert if a patient is likely to fall, or maybe a sepsis detection system that can give you an early
(37:53):
warning, or the whole ambient experience you talked about, where even if you are wrong, it's not gonna be so bad; the false positives are not that costly, but getting that insight quickly can save a lot of time and energy and human resources being spent on those problems.
I think that's where we are seeing the early adoption happen.
(38:14):
Of course, having an AI robot doing ophthalmic, like, eye surgery, that's probably tens of years away, right?
Dr. Girish N. Nadkarni (38:23):
I think society broadly, but also healthcare specifically, needs to transform around it, right?
Yeah.
We agree that the technology is transformational; it just needs the rest of the world to conform around it, which will happen.
Trust me.
Krishna Gade (38:40):
Yeah, absolutely.
I think the personalized medicine that you talked about is very, very interesting; it's both innovative and could be game changing, actually.
So I guess we are gonna wrap up this conversation in the next two, three minutes.
If you had one final takeaway for a healthcare leader like you, trying to drive genAI in their organization, what would it be?
Dr. Girish N. Nadkarni (39:00):
I mean, this might sound cheesy, but keep the patient at the center, right?
Do what's best for the patient.
If your focus and your effort and your energy and your purpose start from that, then even if you're wrong, you'll probably be right.
Krishna Gade (39:17):
Yeah, absolutely.
Well, thank you so much, Girish.
That's a great way to end the conversation today, and thanks for sharing your valuable insights and time with us.
That's it for this week's AI Explained.
Thank you so much to everybody for joining us, and we'll see you in another session.
Dr. Girish N. Nadkarni (39:39):
Thank you.
Thank you, Krishna.
And thank you, everyone, for having me on.
It was a blast.
Krishna Gade (39:43):
Awesome.
Thank you.
Bye-bye.