September 17, 2025 62 mins

Dr. Karandeep Singh brings two worlds together: programming and medicine. In this conversation, he explains how early experiments with code led him to biomedical informatics, why gaps between paper performance and clinical reality must be confronted, and how governance committees weigh ethics and safety. Now serving as Chief Health AI Officer at UC San Diego Health, he reflects on lessons from deploying sepsis prediction tools, the risks of hype, and the promise of integration. For clinicians, Singh’s story is a reminder that the best AI is guided by patient care, deep expertise, and humility about the limits of technology.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:03):
We have access to a couple different HIPAA-compliant, GPT-type tools. Some of them are plugged into things like web search and some of the things aren't. And so, you know, clinicians saying, well, I'm thinking of using this tool to estimate a patient's risk. And I'm like, wait, is this plugged into web search? No, it's just the language model. And I'm like, I would not trust a language model 10 feet.

(00:24):
It's very attractive, 'cause you can put a patient's chart in there. It is not a risk calculator. If there is no risk calculator yet for this tool, LLM is not gonna get you where you want to be. It will give you something, and I wouldn't rely on that thing at all.

Hi, and welcome to another episode of NEJM AI Grand Rounds.

(00:46):
I'm Raj Manrai, and with my co-host Andy Beam, we're delighted to bring you our conversation with Professor Karandeep Singh. Karandeep is the Jacobs Chancellor's Endowed Chair and Chief Health AI Officer at UC San Diego Health. He took us through his pioneering academic research on evaluating AI models across health systems and how he approaches his job now as a Chief Health AI Officer.

(01:09):
This is a new and important role, and we had a lot of questions about how a Chief Health AI Officer spends the day and how he thinks about which AI tools are safe for his health system to use. And if you are lucky enough to know Karandeep already, you know that Andy and I, of course, had to ask him about his love of the Julia programming language as well.

(01:31):
The NEJM AI Grand Rounds Podcast is brought to you by Microsoft, Viz.ai, Lyric, and Elevance Health. We thank them for their support. And with that, we bring you our conversation with Dr. Karandeep Singh.

Karandeep, thanks for joining us on AI Grand Rounds today.

(01:53):
We're super excited to have you.

I'm super excited to be here.

Karandeep, it's great to have you on the podcast. This is a question that we always get started with. Could you tell us about the training procedure for your own neural network? How did you get interested in artificial intelligence, and what data and experiences led you to where you are today?

Thanks. You know, I have been interested in computers and programming since I was a kid.

(02:18):
I think I wrote my first program when I was, like, probably eight years old. I think that might've been either in Basic or Pascal. And ever since then, I think my interest has been in trying to solve kind of the problems I've been facing, or things that I wish I had, with computer code. And so, I think that early on a lot of that was being able to program games.

(02:40):
And I think as I got older, it was things like writing software that could help sync my email up with my iPod so I can walk around and read emails on my iPod, back before iPods were Internet enabled. And I think that kind of just gradually kept growing and growing. And I was never a formally trained computer scientist. I think I took one computer science class kind of

(03:01):
throughout my career. But it was one of those things where, by the time I was learning my probably 12th or 13th programming language and using it to build different projects, it had become just part of the way that I approached problem solving: thinking about, can we solve this problem with code? So, I think that somewhere along the way, I fell in love with medicine and wanted to

(03:25):
become a doctor, went to medical school. Even when I was applying to medical school, I was trying to convey to the interviewers, hey, I really wanna do computer science plus medicine. And I think they looked at what I had done and the things I'd built. And I think it was really hard at that time, without kind of proper mentorship, to really convey to people what's possible when you apply the problem

(03:47):
solving that comes from computing, and I think now AI, into a setting that's deeply entrenched in the way that things are done, like medicine. So, I think somewhere along the way I was doing medicine and just doing stuff on the side. I would build kind of, uh, when I was a Chief resident at UCLA, I built a

(04:07):
mobile app for iOS and Android that could allow you to send pages from your phone, which at that time wasn't allowed, or there wasn't, like, a mechanism to do it; digitized a lot of our paper-based educational materials that we had for residents. And I think it wasn't until I actually hit fellowship in nephrology where I encountered the field of biomedical informatics.

(04:30):
And it was really there that I would say my kind of career was launched in the way that I have now followed. 'Cause up until that point, I really viewed computing and AI as things that you did on the side to improve your productivity, improve your learning, improve your kind of experience, and the things that you were trying to do to deliver care and to be a good physician.

(04:53):
And it wasn't until I did my training in informatics where I realized it's not only something that you can do on an individual scale, it's something that you can actually do at a much larger scale, all the way up to the medical school scale or a health system scale, or even on the consumer scale well beyond. And I think that really opened my eyes. And so, I would say the way that my career got me to where I am today is I had

(05:16):
this aha moment talking to one of my mentors back in my informatics master's, while I was still doing clinical training, where he said, you can do your clinical specialty and be the kind of computing and AI expert in that clinical specialty. Or you can really take a much broader view and try to be an AI expert who affects all of medicine.

(05:36):
And then also practice clinical medicine as a way to stay entrenched and learn about the system. And that I think was the moment that really led me to say, I actually want to aim big. I wanna make sure that the things that we're doing in AI actually improve patient care, actually make things better. And the way I will do that is by also practicing medicine in parallel with

(05:57):
that, to be able to really understand what is the patient experience, what is the clinician experience, so that we can do a better job. And that led me to, I think, a faculty career in informatics, a growing operations role in AI, and then led me to my role today, where I serve as the Chief Health AI Officer at UC San Diego Health.

Cool. Thanks. Maybe just to follow up on a couple things there. So we've spoken many times before, and I usually have some intro where, like,

(06:22):
Karandeep's one of these people who, like, makes you feel self-conscious, whether you're a doctor or a programmer. I'm a computer scientist by training, and I think you're humble in how you describe your programming ability. We'll talk about this later, and all the deep technical contributions you've made in the Julia programming language. And you just mentioned that you had learned like 11 or 12 different programming languages, and I guess I'm just curious, you clearly

(06:43):
have, like, natural ability in that. What kept you anchored in medicine versus being, I mean, I'm assuming we're approximately the same age, like, .com was taking off, Internet was taking off, personal computers were all happening. I guess what kept you grounded in medicine as your essentially core identity, where computing was this thing that you did on the side?

Yeah, so I think we had a lot of deep conversations in our family about this.

(07:04):
My dad was actually an auto engineer, and so, he was a trained hardware and software engineer. And I think his experience was that it's not the programming that makes you who you are, it's the domain expertise that makes you who you are. And that being able to bring programming to a domain really lets you take that domain way further than if you were kind of a programmer and really

(07:28):
working with the domain expert without really understanding that domain. I think when we had talked about careers in computer science, careers in medicine, he really, very much strongly, and not just him, I think a lot of folks in my family, were pushing: no, you should really do medicine, and doing medicine does not mean not doing computer science. And that was another thing where, this is maybe stereotypical of Indian families, but it was, you can do computer science, just do medicine first.

(07:51):
And that's, like, literally what I ended up doing. When I kind of fell into informatics, I said, oh, this is essentially computer science and medicine. But I didn't have a name for this field when I was starting. And had I really known the field existed, I might've gone straight into biomedical informatics as an area, but my kind of winding path led me to do medicine first, and I'm really glad that I did. I think that a lot of the problems that I learn about a health system or the

(08:16):
way things work are probably felt most viscerally when I'm actually working clinically, and are not things you can read from a manual and learn from a manual.

Amazing. I can relate to the parental pressure to just do medicine first. I don't know if I've actually shared this on the podcast, but I think many folks know Zak Kohane was my PhD mentor,

(08:37):
right. And on the evening after my dissertation, like, defense, my PhD defense, we're having dinner with my family and Zak. And my mom, this is again right after I finished my PhD, was saying at the time, oh, like, now it's time for medical school. And Zak was maybe the first person to fully convince her that there are other paths to impacting medicine without me actually going

(09:00):
through medical school itself. But one other thing that I've been really struck by in what you said, and I think this has been such a consistent theme on the podcast, is deep domain expertise, right? And so in biomedical informatics, for example, there are many ways to approach sort of medical AI problems, biomedical informatics problems. But what we've been hearing about from all the folks who are really having a lot

(09:21):
of impact, and that we've been lucky to interview on this podcast, is that they are really, really deep technically, but they really have this deep appreciation for biology or for medicine. And so, I think Ziad Obermeyer really articulated this quite well. Like, where does creativity really lie? Where do really interesting problems come from? And where do the really iconic papers of the last 10, 15 years come from?

(09:43):
It's because it's really hard to do this outside of one mind, where the latency is just too high across two different people talking about a topic, if there's not enough that's shared about knowledge of the domain as well as the sort of technical application that you're using. And so, I think that really resonates. It's just, you know, it's been a real consistent theme. Thanks for sort of sharing that.

(10:04):
I think we'll probably dig into it a little bit more in some of your papers as well. Maybe with that, Andy, maybe it's a good time to sort of transition to, uh, Karandeep's academic work.

Yeah, I think that's a natural segue. So, the first paper we wanted to talk about is a pretty well-known paper of yours. I think it got a lot of attention when it was published. It was published in JAMA Internal Medicine in 2021. The title is "External Validation of a Widely Implemented Proprietary Sepsis

(10:26):
Prediction Model in Hospitalized Patients." So, maybe could you set us up: what was the motivation for this paper? What were the key findings? And then I think I'd like to dig into some of the implications for AI more generally.

This paper came out because I was interested, as I was growing my faculty career at University of Michigan, in seeing how a health system

(10:47):
turns on and monitors AI tools as we're using them. One of the things I've always been interested in is the gap between what we say we should do and what we're actually doing on the ground. And, also, the gap between how good things work on paper and how good things work in reality. And I think those are both solvable things. I don't think when you say something doesn't work, that's a permanent

(11:07):
indictment on it, to say it doesn't work, so therefore it will never work. But I think that it is helpful to understand, kind of, where things are, so that when you do want to improve things, you can understand where do things actually need improving. So, when I was at University of Michigan, I was in the Department of Learning Health Sciences, which was an academic department focused on understanding and building learning health systems.

(11:29):
At the same time, the health system was setting up AI governance, and I had the privilege to start on that AI governance as it was being created, and then ultimately in chairing our health AI governance at University of Michigan. One of the things that really came up was, hey, there are all these tools that are now available within the electronic health record. Which ones should we turn on, and which ones should we use,

(11:51):
and how should we be using them? And I think those are questions that, on a really simplistic level, health systems have been thinking about when it comes to things like clinical decision support. So, we've been thinking about, hey, should we turn this alert on? And what's the implication of this alert in terms of the work that it creates, in terms of the actual, you know, can we move the needle on better outcomes

(12:12):
as a result of turning on this alert? But here there were actually a lot more variables. It was, okay, so do we turn this on? Who do we make it available to? Is it interruptive? And it's interruptive, not just in the sense that when you log into a patient's chart, it pops up, but interruptive in the sense that it can actually stop what you're doing and send you an alert and say, even if you're not in that

(12:33):
patient's room, in that patient's chart, you can get kind of stopped to say, you need to go check on this patient. That's a type of workflow most clinical decision support never did. And so, that was kind of a big sea change in how interruptive we want to be. And you wanna make sure that if you're interrupting a clinician doing something else, that it's kind of really worth it.

(12:53):
It was also the first time we'd been thinking about thresholds. So, at what threshold would we alert someone? And there was this real tension between, you wanna alert someone to tell them something they don't know. You don't necessarily wanna alert someone to tell them something they already know. And so, when I came into this role, I said, well, let's look at a number of these tools. So, I think we didn't set out to look at just the sepsis model.

(13:16):
We also published on the deterioration model. We also had a number of other things that we didn't publish but we looked at, because the primary intent here was not to do research. The primary intent actually was to understand how a health system can make an informed decision about use of these tools. And also, I think, a growing recognition that even though the vendors give us

(13:36):
great tools to try to understand how well these tools work in real life, sometimes the assumptions that the vendors make are very different than the assumptions our own clinical care teams make. And so, this was one of these instances where we, I think, dove in to try to understand what happened and how well this tool was working. And, you know, I can get more into, kind of, what we actually did step-by-step

(13:58):
to get to the point of actually even looking at this as a formal evaluation. But then even after we turned it on, there were a lot of things that happened after the fact that helped us not just turn this on and evaluate it, but also then to monitor it, even to the point of, at some period of time, turning it off for a period of time because of COVID.
Got it.
And just to maybe be a little more concrete, this is an alert for

(14:20):
patients who are at risk for sepsis?
Yep.
So, this is a model that, the way it was implemented at our institution, it was running every 15 to 20 minutes on every single hospitalized patient and patients in the ER. And it was trying to predict the probability that you would have sepsis in the next X number of hours. And the idea here was that if you crossed a threshold of, I think,

(14:45):
6%, that at that stage we would say, okay, at this point it's now worth it to generate an alert. And when we used some of the tools that were given to us by the vendor to look at what that would mean in terms of things like positive predictive value, in terms of catching a number of cases of sepsis, we felt like, wow, this actually looks really useful.
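For readers who want the mechanics, here is a minimal sketch, in Python, of the kind of threshold-based alerting loop being described: the model is re-scored on every hospitalized patient every 15 to 20 minutes, and an interruptive alert fires once the predicted probability crosses roughly 6%. The model interface, field names, and alerting function are hypothetical placeholders, not the vendor's actual API.

```python
# Sketch of the alerting workflow described in the conversation: every 15-20
# minutes, each hospitalized/ER patient is re-scored, and an interruptive alert
# fires once the predicted sepsis probability crosses ~6%. Names are illustrative.

from dataclasses import dataclass, field

ALERT_THRESHOLD = 0.06  # ~6% probability, the figure mentioned in the episode

@dataclass
class Patient:
    patient_id: str
    features: dict = field(default_factory=dict)  # vitals, labs, meds from the EHR
    already_alerted: bool = False

def score_and_alert(patients, model, send_alert):
    """Run one scoring pass; fire at most one alert per patient per admission."""
    for p in patients:
        risk = model.predict_proba(p.features)   # hypothetical model interface
        if risk >= ALERT_THRESHOLD and not p.already_alerted:
            send_alert(p.patient_id, risk)       # interruptive page to the care team
            p.already_alerted = True
```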

(15:05):
And so, this is something that, based on the tools given to us, feels like we should definitely have on, because ultimately, at the end of the day, we wanna make sure our patients who are developing sepsis get care as fast as possible, right? They get started on antibiotics. There's a whole sepsis bundle that gets put in place once you have sepsis. It's not just starting antibiotics. It's making sure you get the right amount of IV fluids given to you.

(15:28):
It's making sure that we look for other signs of end-organ damage by checking lactic acid in the blood. So, there's a whole workflow that gets launched when you think someone may have sepsis. And so, this tool, I think we were excited by. And so, when we saw these results from the vendor, we said, let's go ahead, based on this knowledge, and turn it on in, kind of, one or two hospital

(15:49):
units, just to start to gain experience with how well this is working. Mainly 'cause it was one of our earlier experiences, this plus deterioration models, of actually trying to institute these into a clinical workflow at our health system.

So, what did you find?

So, when we turned this on originally, we maintained an open line of

(16:10):
communication with our sepsis committee. So, we had them come to our clinical intelligence committee, which is the name for our health AI governance at Michigan, and talk about what was the kind of experience of using it. And in the early, kind of, months we asked about, well, has this improved some of the metrics that you guys look at in terms of time to antibiotics or other timestamp-

(16:32):
based metrics that we internally track? And the answer at that point was, not really, but that's not surprising. It's our first experience turning something on. And the other question that they actually brought to us is, it seems like the tool's really good at identifying people after the fact that they have sepsis. And I'll just talk about what I mean by after the fact. I mean, in the few hours after they have sepsis. Now,

(16:55):
the first few hours after you have sepsis, knowing that you have sepsis is still very clinically useful, because it's not that the goal of treatment of sepsis is to treat sepsis actually before we know someone has sepsis. Like, that would be great if we could do that, but that's a really tall order. And so, it's okay actually to be able to predict and act on sepsis after it happens.

(17:15):
It just raises questions of why does the tool recognize sepsis after it happens? Is it recognizing the syndrome? Is it recognizing the digital footprint of our treating sepsis? And, like, it just opens up more questions around trying to unpack, is it telling people what they don't already know? At least, kind of, in that stage.
But the interesting thing was that the tool, at least in

(17:37):
the original description of it, was designed to predict sepsis before it happens. They used the time of sepsis as time zero and said, we use the predictions leading up to that to try to predict the occurrence of sepsis. So, that to me was actually a deviation from what the tool was, I thought, supposed to be doing. So, we actually said, wait, do you guys have the timestamps for sepsis that you

(18:01):
use operationally as a health system? And I was still saying "you," because at that stage I was just starting to transition into a more operations-type role. I was still very much an outsider in how I viewed myself: as a researcher, kind of given the privilege of participating in this operational activity. And so, they said, actually we do, we have a set of criteria we use, and that's how

(18:23):
we track internally, you know, how good we're doing in caring for sepsis. So, we said, well, if you can give us the actual timestamps you use to decide if someone has sepsis, we will do an evaluation against that. And so, this was our kind of first step to say we're not going to rely simply on the vendor, which again, many times you have to, and many

(18:44):
times there are good reasons to,
and pragmatically it makes a lotof sense if you and the vendor have
agreed on how to evaluate things.
But because this was one of our firstexperiences, this in deterioration models
of looking at how we operationalizethese tools in our health setting, this
is a setting where we actually said,okay, we'll do this independently.
And when we did this independently,we found that the tool actually is,

(19:05):
as they found, pretty darn good at predicting sepsis after sepsis happens, but not nearly as good as one would expect in predicting sepsis before it happens. And then in some follow-up work that we did in collaboration with Dr. Jenna Wiens, we found that, in fact, if you go back to before the antibiotic is even prescribed in the first place, which is a moment where you could say

(19:27):
someone has recognized that someone's at least potentially infected, which is a precursor to having sepsis, the tool is basically no better than just flipping a coin.
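A rough sketch of the kind of stratified evaluation being described: split discrimination into predictions made before versus after the health system's own anchor timestamp (sepsis onset per internal criteria, or the first antibiotic order). This is only an illustration of the idea, not the published studies' exact protocol; the column names and grouping choices are assumptions.

```python
# Illustrative before/after-anchor AUROC comparison. Assumes one row per
# prediction with columns: patient_id, pred_time, score, label (1 if the
# patient ever met the internal sepsis criteria), onset_time (NaT otherwise).

import pandas as pd
from sklearn.metrics import roc_auc_score

def auroc_by_period(preds: pd.DataFrame) -> dict:
    neg = preds[preds["label"] == 0]              # never-septic patients
    pos = preds[preds["label"] == 1]              # septic patients

    def auroc(pos_subset: pd.DataFrame) -> float:
        # Compare positives from one period against all negatives.
        combined = pd.concat([pos_subset, neg])
        return roc_auc_score(combined["label"], combined["score"])

    return {
        "before_onset": auroc(pos[pos["pred_time"] < pos["onset_time"]]),
        "after_onset": auroc(pos[pos["pred_time"] >= pos["onset_time"]]),
    }
```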
This is the NEJM AI paper, right?
Yeah, this is the follow-up NEJM AI paper, and so this was us trying to unpack, well, if it's not obvious before the timestamp, then what's going on? We're

(19:47):
trying to unpack this relationship, this causal relationship, between a person having access to a model, using it to drive a workflow, and also the model simply having access to what the person's doing to drive its prediction. And so, you can get this kind of feedback loop that's been described by many other people, but that feedback loop, you know, as the name

(20:07):
suggests, goes in both directions. It can make the tool look better than it actually is, or in some cases worse than it actually is, depending on the actual relationship between the intervention and the outcome. In this case, I think it was set up so that the tool was looking better than it actually is. And at the same time, clinicians were reacting before the tool had

(20:29):
identified that a patient had sepsis. And so that's what a lot of our kind of follow-up work was: trying to unpack that.

It's almost like the prediction tool would be better for automatic billing of sepsis than it would be for proactively identifying who's at risk for sepsis. 'Cause it's, yeah, it's measuring the health care team's reaction to a patient versus giving you prospective information.

And actually a question came up: why do we need a sepsis model?

(20:51):
Why can't we just use the criteria to define sepsis, to identify patients in real time? And I think the issue is that, as most sepsis experts know, but other people may not be aware, the criteria we use to define sepsis are only applicable after the fact. They rely on, for example, someone being treated with antibiotics for one day or two days.

(21:12):
So, it's not something you can actually apply in real time. So, it's one of these things where the whole reason the industry of sepsis models exists is because the gold standard outcome is only knowable when someone goes home. So, when someone goes home, yeah, we can just apply the gold standard outcome. We don't need a model. But there is this kind of intermediate period where someone has developed sepsis, we still have actions we can take to change their outcome meaningfully.

(21:37):
And the question is, can the model give us an answer in that period? And can we try to unpack what is the added value of things that people did in response to the model versus things that they would've done anyway?

So, that's great. Let me follow up on a couple things. As an AI researcher, if you came to me and described the sepsis scenario,

(21:57):
I would be, like, that's an awesome scenario where we could build a predictive model to inform patient care. Like you said, there's this sepsis care bundle that gets rolled out, so we know what to do about it. Often in many health care scenarios, the disease definition is ambiguous. And even if you know it's gonna happen, you don't know exactly what to do. So again, like, if you came to me and I had not read your paper, I'd be, like, this is awesome.

(22:18):
Obviously, there's a whole literature of sepsis prediction models that we're not gonna touch, but so, given the favorable qualities that it has, does this make you very bearish on AI generally for medicine? I know that this is in the context of a specific predictive model, which is a subset of what we generally think of as AI, but has this informed your thinking about the potential for AI in health care in either direction?

(22:42):
So, let me first criticize our paper for a moment before I answer that question. What did we not do in our paper? We didn't look at, did actually implementing this tool actually improve patient outcomes? We tried to infer, based on information we had about the way the model was being used, the way we were using it to then drive actions, to see whether we could

(23:03):
draw a clear pathway between the model performance and then better outcomes. But what we actually didn't do is measure it. And the challenge was that there was a chicken-and-egg problem really early in this field of AI implementation, which is you don't wanna implement something at scale before you know that it works. You can study all the outcomes you want at a smaller scale, but often

(23:28):
not with the statistical power you need to decide if it actually works. So, you almost have to implement at scale to see if things work. And so, I think those were a lot of the early challenges. I think that there have been folks who published, either with this sepsis model or with other sepsis models, that they were able to improve certain kinds of either process outcomes or clinical outcomes.

(23:49):
So, I think that I'm not bearish on the kind of field based on this paper. What this paper I think just reminds us, and I think what it really cemented to me, is that you can't just turn things on. And when it comes to decision support, we have ways to review it. And I think the challenge is that the leap from non-AI-based clinical decision support to clinical decision support that's driven by AI is that

(24:11):
you now need to think about a class of problems and a class of issues that you just didn't have to think about before. And when you do this kind of an effort, you can see just how easily an analysis like this could be gamed by making one assumption or the other assumption if you don't understand the kind of clinical reality of the

(24:31):
way these things are implemented. So, I think that to me, it was, it was something about, you know, uh, I'm not bearish on it. I actually think what this means is there's a new class of professionals who are needed who can help bridge AI with implementation science and help unpack whether these things work.

Yeah, I mean, your point there about the chicken and the egg, like, you have to be well powered to find an effect.

(24:52):
And the effect has to be precisely defined. Like, those are the standards that we apply to clinical trials for pharmaceutical interventions, for medical devices. And I've always been curious that there's this subsection of research, primarily done by clinicians, called quality improvement or QI projects, which has always felt like, not quite, I'm gonna design a full trial of an intervention, but also slightly more rigorous than just mess around and find out.

(25:15):
So, like, do you think that we need to elevate the status of decision support and quality improvement to be a sister discipline to these other categories of research?

Yeah, so, I think there's a long history in this country of why we have things that are in scope of the IRB and things that are outta scope of the IRB. And I would say a lot of what is defined as quality

(25:37):
improvement is driven by intent. Is the intent to make care better here, or is the intent to generate generalizable knowledge? And one of the challenges, and we talked about this a lot in my kind of previous role at University of Michigan, is that when you are trying to operate within a learning health system, you're actually trying to do both.

(25:58):
You wanna make sure that you generate generalizable knowledge. And so, the work has to be done in a high-quality-enough way where you can actually stand behind it and say, this is real, and this is something that other similar systems may also find. But at the same time, you don't wanna publish and then find that the next day, the care that you deliver at that system is the

(26:18):
same as the care you delivered yesterday. Because even though you found this thing, there was just no kind of feedback loop closed between you finding something and that thing actually improving the way that you deliver care at your system. So, I think that, um, in my mind, really good quality improvement should be the same as really good clinical research, because the quality of the work is actually not what should be driving the label of

(26:40):
quality improvement versus research. So, what you'll see is that even some of my peers here at UC San Diego, like Dr. Ming Tai-Seale, have published randomized waiting studies of things like our patient message replies drafted by AI, to try to understand, are there time savings? So there, you know, applying more rigor, a randomized design, to try to study it in a quality improvement setting. And I think she's not the only one.

(27:03):
You know, Dr. Leora Horwitz at NYU has published on rapid-cycle learning health systems. So, I think that quality improvement really shouldn't be viewed from a lens of, oh, we're now gonna use some kind of sub-par, like an inferior, design to study the thing that we're studying. It really should be based, in my mind, in intent. If the intent is to drive better outcomes here, and we think that there is, uh, real

(27:26):
equipoise in those outcomes, and the intent is fair, and it's things that would be in the natural variation of the way that care gets delivered anyway, then I think that's fair game for quality improvement. And at UC San Diego, we actually have a registry of all the quality improvement projects that are happening, maintained very similarly to the way that an IRB would maintain a registry of active QI.

(27:47):
And that actually holds us accountable to make sure that the QI that we're doing is not after-the-fact QI, where we did something and then we say, oh, yeah, yeah, we did that before. We're actually pre-registering it in a similar way that we would for research.

Awesome. Thanks, Karandeep. So, uh, Karandeep, I think that's a great transition to this sort of next topic that we wanna talk about.

(28:07):
Just to make sure we have the chronology right. So, you finished your nephrology training. You did a degree in informatics, biomedical informatics, at Harvard Medical School, started your lab at Michigan, and then most recently you've moved, as of maybe a year or two ago at this point.

Mm-hmm.

To UCSD, and at UCSD you are the inaugural Chief Health AI Officer.

(28:29):
Did I get all of that correct?
Yeah,
you did.
Amazing.
So, I guess my first question is, you're one of, I think, maybe a few now, Chief Health AI Officers across the country. I'm sure you've spoken with some of the other ones, and you know each other and you chat, or you intersect at meetings and things like that. But maybe you can just start us off with telling us what a Chief Health AI Officer does, how you see your role within the system, what your job really is.
Yeah, so I think that, in my mind, what a Chief Health AI Officer does is it's someone who's accountable for the way that a health system uses AI. If you go to most health systems and you say, do you use AI for something, some aspect of your operations or some aspect of your clinical care,

(29:12):
the answer is often gonna be yes. And it's increasingly yes, because it's not that they themselves maybe came up with AI, but if you look across the various productivity products they have, their electronic health record, other vendors they work with, almost every tool nowadays has some level of AI baked into it.
And then you have to ask who ultimately is responsible for making sure that

(29:34):
the AI is working the way it's supposed to, that someone's looking at it, and that the ways that we're using AI are ethical and really respect the patients that we serve and respect the communities that we serve. And in my mind, that's the responsibility, at least in part, of a Chief Health AI Officer. So it's my job to make sure. I'm not an evangelist for AI within the health system.

(29:55):
My job is to make sure that we're using AI in a smart way, and that as we think about our health system strategy and the way that we're gonna improve access to care, the ways in which we know we're gonna improve the quality of the care that we deliver, that we think about what aspects of that strategy could be supported by AI, rather than using AI to actually develop, like, a separate AI strategy.

(30:19):
I think you do need some kind of an AI strategy that helps you figure out how do we decide things like build versus buy? How do we decide, like, how are we gonna upskill our workforce to use AI productivity tools that the health system has approved as being HIPAA compliant and secure? But beyond those things, I think the real job is being plugged into the rest of the health system, understanding the pain points,

(30:41):
understanding what the health system does well, what the health system doesn't do well, and then really thinking from a problem-oriented standpoint, where are there opportunities to really use AI? And where should we not use AI, because we feel like it's not the right tool for this job? Or what would we need to do to get to a point? Like, I think a thing that everyone's struggling with right now is there are all these patient-

(31:04):
facing generative AI tools and chatbots that are coming about that—

That are being used, that are being used all the time by patients, right?

Well, patients are using them, sure. But now there are health system products that are patient-facing AI, right? AI call centers, AI things where a patient interacts with a health system and they're actually interfacing with an AI, and maybe not with a human in the loop always.

(31:28):
And so, I think that is this thing where you really think, like, okay, how are we as a health system gonna make a determination that this is something we're okay with, and what kind of testing do we need to do? Is it enough to just rely on vendor testing? Do we need to do some vibe testing and, and actually play with these tools and say, are they okay?

(31:49):
Some of the voice AI tools might talk too fast, and you won't know that until you just play with it and you say, there's no way some of our patients calling in would be able to even follow what this is saying and get it redirected appropriately to a human. So, I think it's everything about build versus buy. It's about how do we upskill our workforce.

(32:10):
And then it's also about how do we plug AI into the relevant parts of health care operations and clinical care in a systematic way that lets us scale the use of AI. And then I think it's also enabling. So, how do we enable our faculty as an academic medical center, and our staff as an academic medical center, to actually use these tools to implement

(32:31):
things at scale that otherwise they might have thought really small about, because there wasn't a shared thinking and a shared infrastructure.

What's been most surprising about the job?

I would say what's been most surprising early on is, when you meet people, them kind of assuming you are there to evangelize AI. Um, so I would say that, having AI in your title, oftentimes people

(32:53):
will approach me and say, um, you know, oh, like, I'm using AI for this. And I'm like, I don't know that I would use that tool for that. And, yeah, you know, so an example would be, we have access to a couple different HIPAA-compliant GPT-type tools. Some of them are plugged into things like web search and some of the things aren't. And so, you know, clinicians saying, well, I'm thinking of using this

(33:13):
tool to estimate a patient's risk. And I'm like, wait, is this plugged into web search? No, it's just the language model. And I'm like, I would not trust a language model 10 feet with a question of—

Like for a, like for, like, ASCVD risk score or eGFR or something like that.

Yeah, no, it's very attractive, 'cause you can put a, you can put a patient's chart in there or, you know, parts of a patient's chart in there.

(33:34):
Yeah. And, but understanding that, like, the LLM is not a risk, it's not a risk calculator. That is a different, it's a different thing and—

What you might, reframing it as: if there is no risk calculator yet for this tool, LLM is not gonna get you where you want to be. It will give you something, and I wouldn't rely on that thing at all.
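The contrast being drawn is that something like eGFR is a deterministic formula of a few inputs, so a validated calculator, not a language model, is the right tool. As an illustration, here is a sketch of the 2021 race-free CKD-EPI creatinine equation; the coefficients are transcribed from the published equation and should be verified against the original before any clinical use.

```python
# Why a risk calculator beats an LLM for a quantity like eGFR: the value is a
# deterministic function of a few inputs. Sketch of the 2021 race-free CKD-EPI
# creatinine equation; verify coefficients against the published source before
# any clinical use.

def egfr_ckd_epi_2021(scr_mg_dl: float, age_years: float, female: bool) -> float:
    """Estimated GFR in mL/min/1.73 m^2 from serum creatinine, age, and sex."""
    kappa = 0.7 if female else 0.9
    alpha = -0.241 if female else -0.302
    egfr = (142
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.200
            * 0.9938 ** age_years)
    if female:
        egfr *= 1.012
    return egfr

# Example: a 60-year-old woman with a serum creatinine of 1.1 mg/dL
print(round(egfr_ckd_epi_2021(1.1, 60, female=True)))  # about 58
```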
Some of the language models are good about saying, I can't do that, but

(33:56):
this is where I think it's really important for models to say, I can't do that, but also for our folks to understand what they can and can't do. So, you can use it to look things up where you've missed something. I've seen rare diseases on the inpatient side where I'm like, I'm gonna sit down and read about this, but in the two minutes I have before I walk into that patient's room,
let me quickly use an LLM-driven search that pulls up relevant papers

(34:19):
and things that I need to know as a nephrologist before I walk into that room. It's not the replacement for what I'm gonna do afterwards, but it will give me context that I could never get in two minutes walking into a patient's room with a rare mix of conditions, or kind of a unique situation where I'm trying to figure out how am I thinking through the questions I'm even gonna ask them, to get to the right place where I can make a good solid recommendation.

(34:40):
So, I think that's one thing. The other one, I think, is just educating people about the fact that just because these tools can train on your data doesn't mean that they have to. And a lot of the enterprise agreements that we have actually greatly limit what the vendors can do with data that we enter. And I think that the kind of prevailing understanding is

(35:02):
that in order for you to use these tools, they have to constantly train on the data that you give them. And so, a lot of what we're doing internally is trying to make AI boring. Right? Make it something that is not interesting, that is just the way that we do things.

Because boring feels, 'cause boring feels safer.

Well, when you convert it into something that is an analog, analogous thing that's not AI.

(35:24):
People get it. And somehow, it is just computer code, and when you call it AI, people will now make magical assumptions that are both overestimating what it can do and also severely underestimating what it can do. So, I think a lot of it's just saying, okay, this question that you've been thinking a lot about, imagine it was just this, would you view it differently?

(35:45):
And they say, oh, when I use my email, that's my enterprise email. I don't make any assumptions that, like, anything's being trained on that. And so, you say, well, actually, that's the same agreements we have in place with our vendors. They can't train on our data. So, that's why, just because we're using AI does not mean we are giving away our patients' data to a whole bunch of vendors. We are pretty strict in terms of, as a health system, as a university, and

(36:10):
as a university located in California, where there are very strong consumer protections in terms of what we are able to do in partnership with vendors and what we explicitly prevent them from doing with our patients' data.

Awesome. I don't think that this is cutting off any of the questions that we have in the next section, but what are areas, given your perch, where you're seeing

(36:30):
the most rapid change in adoption of AI, and, like, the flip side of that, which ones seem to be calcified and resistant to AI-related change?

Yeah, so I think if you asked any Chief AI Officer this question, they'd probably say AI scribes is where we're seeing the most rapid adoption. And I think that this is an area that has existed for a while. The companies that are around today, some of them are really well

(36:52):
entrenched companies that have been doing dictation and kind of other related tasks, and even the intersection between dictation and then a little bit of note preparation. I think the thing that is really interesting to see is that every tool that's starting off its life as an AI scribing tool is also morphing into a

(37:13):
clinical decision support tool, morphing into a revenue cycle billing tool. And so, I think that the categories that you look at and you say, oh yeah, these were clinical decision support tools, these were the billing kind of vendors that we had, these were the scribing vendors. And what we're finding is that the generation of the documentation is a bridge into the rest of health care operations.

(37:35):
And so, you're starting to see this real convergence between what started off in one domain area and now is suddenly a platform that is cross-health system and touching many different areas independently.

I wanna ask one question that's a little bit less serious, and then one that's a little bit more.

(37:55):
So, the less serious one, and this is at the, a very deliberate risk of potentially setting off Andy here. You're the Chief Health AI Officer, so presumably you have to define what AI is, and what within the hospital counts as AI and what doesn't. And so, Andy and I have had many, many, many discussions about, like,

(38:15):
where, what family of statistical models is AI versus is not. And so, the classic one is logistic regression being rebranded, some risk score being rebranded, as AI in today's time.
Um, you must have to actually... this is the sort of non-serious question, or less serious question, but you must have to actually deal with this, right? Like, oh, Karandeep,

(38:36):
we have a new MDCalc risk score that we wanna put into the EHR as, like, a dot phrase. Is this safe for us to use? So, like, it's just some, it's a logistic regression risk score. Is that something that you would be sometimes or routinely asked to sort of opine on as part of the Chief Health AI Officer job?

(38:56):
Yeah, so I would say what I will tell people is, I am a person who does AI. I'm not the only person who does AI in our health system, right? And so, there's a lot of people who are thinking about AI, and particularly AI in clinical specialties. There's a lot of situations where the AI is actually directly coming from a clinical guideline, and then there's times where it's something

(39:16):
that someone's built that's gonna be used to drive clinical care decisions, but there's no kind of official clinical guideline kind of guiding it. On one hand, we have a fairly broad definition of AI. It's, I can pull it up exactly, I don't have it exactly off the top of my head, but I think it's: AI is a tool that makes information available to support decision making, and that's driven by data.

(39:39):
I think that predictive AI kind of very cleanly falls into that. And I think we also might expand it to say "and makes recommendations," 'cause generative AI can often be used well beyond just getting information, and actually generating a bunch of information that you're gonna directly share with someone. So we have some definitions laid out of what is AI in general, what is predictive AI, what's generative AI.

(39:59):
There's a lot of convergence happening where people are using generative AI models to do prediction, et cetera. But just to say, I think it does help to have some kind of framework in mind. One of the things that I get wrapped up in is making sure what things actually require a full committee review from our health AI governance. And what we've actually decided is a lot of things don't actually

(40:20):
require a full committee review. It requires eyes on it, but it lets us be nimble and move fast without trying to bring every logistic regression model to full committee. And so, some of the things we look at to decide, does it really require all hands on deck to look at it: is it high impact, or is it high risk? High impact are things that affect multiple different service lines

(40:43):
or multiple different roles. We have tools that affect potentially physicians, nursing, physical therapists. Like a sepsis model, right? It might touch on a bunch of different types of roles within a health system. Or we have things that are in the area of revenue cycle that are clinician-facing, that might affect a lot of different clinicians across surgical, medical specialties, et cetera. So, those things I think are high impact.

(41:04):
They get a full committee review. The high-risk things are things that we use without a human in the loop, things where we're using it to actually drive clinical care decisions. And there are things that affect the way that our work is organized. So, think about things that actually change people's roles in a system, and therefore it could have implications on jobs, job

(41:27):
descriptions, and things like that. So, anything that falls into those categories, we take those extremely seriously, and we wanna make sure that we are very thoughtful about those and we get a lot of input from a lot of different people.
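A hypothetical encoding of the triage rule described here: a proposed tool goes to full committee review if it is high impact (multiple service lines or roles) or high risk (no human in the loop, drives clinical care decisions, or changes how work is organized); everything else gets a lighter-weight look. Field names are illustrative, not UC San Diego Health's actual intake form.

```python
# Illustrative triage of a proposed AI tool against the criteria described
# above. A "True" result means route it to full health-AI-governance review.

from dataclasses import dataclass, field

@dataclass
class AIToolProposal:
    name: str
    service_lines: list = field(default_factory=list)   # e.g. ["medicine", "surgery"]
    roles_affected: list = field(default_factory=list)  # e.g. ["physician", "nursing"]
    human_in_loop: bool = True
    drives_clinical_decisions: bool = False
    changes_work_organization: bool = False

def needs_full_committee_review(p: AIToolProposal) -> bool:
    high_impact = len(p.service_lines) > 1 or len(p.roles_affected) > 1
    high_risk = (not p.human_in_loop
                 or p.drives_clinical_decisions
                 or p.changes_work_organization)
    return high_impact or high_risk
```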
Because our AI governance committee actually consists largely of our health care leaders. It's actually got a handful of folks with expertise in AI methods. But it's largely leaders now.

(41:48):
Leaders includes experts in health equity, experts in ethics. So, it's not just leaders from a business operations standpoint, but it's leaders with different areas of domain expertise who understand the gravity of, like, we're gonna use AI to do X, Y, Z. So, I would say, to answer your question, most of those things would be single service line, single clinician. So, they may come to us, and if they came to us, they'll reach a

(42:10):
pretty quick resolution, usually. But I think that, you know, we, we do look at those. It's just that, you know, yeah, if you look at all the things that are happening, the fires are not generally in those situations. The fires are in situations where we really need to spend a lot more of our time. And, yeah, so we spend a lot of time on things where there's no human in the loop, where we're like, okay, we wanna make sure that is as safe as possible and

(42:34):
as restricted to use cases where we think it's appropriate as possible.

So, I'm gonna quote you on logistic regression: is it AI, officially, in 2025?

No.

That was great.
And that was, uh, that was also a very helpful way to think about how you partition effort, time, and precious resources to sort of address different AI applications. One last question, then I think we're gonna jump to the lightning round.

(42:55):
So, you have to decide when an AI tool is ready for frontline use, and then presumably also monitor. And I think this has been a big part of both your academic research career and then also, I think, your job now: monitor the tools that are being used, that they're being used safely and they're still functioning as intended in your health system.

(43:16):
And so, my first question is, do you have sort of a go, uh, and this is kind of a, maybe a, give us a quick answer for this one: do you have a go/no-go checklist for thinking about whether an AI tool is ready for frontline use, or something similar?

I would say no, we don't have a go/no-go checklist. We do what I would say is a holistic review.

Okay.

And that is to measure it with what the implication is on a patient's

(43:40):
safety, like, on the health system. And I would say there are things where we really do need to go deep and do a lot of double, triple, quadruple checking. And then there are things where, really, it's not gonna take us there. I would love for us to have a standardized set of tools that we could use to evaluate all of the AI models that we use. I think that would be amazing.

(44:01):
The reality is that we have models that sit in our electronic health record. We have models that sit in imaging vendor data. We have models that sit completely outside, in various clouds that are securely connected into our data sources but are running completely elsewhere. There's really not an easy way in 2025 to actually have a standardized evaluation

(44:22):
toolkit that you can use across models that are filing their scores and relying on totally disparate types of data, across the range of the sort of vendors that we work with and the range of the in-house, uh, kind of efforts to, to build things. So, I think that I've shifted my thinking a little bit on this, where I would've said, you know, maybe a year ago, two years ago, we should be

(44:44):
evaluating every single AI that we use. What I would say about monitoring every single AI that we use, I would say in 2025, my pragmatic thinking is we are not resourced to do that. So, we need to make sure that our contracts with our vendors really hold them accountable to helping surface issues to us that may not lead them

(45:04):
to make different decisions, but may lead us to make more informed decisions about whether a tool's working. There's also a whole science of model monitoring that I think, um, maybe, you know, wasn't appreciated a year or two ago. People had kind of talked obliquely about feedback loops and things like, oh, this is kinda some hypothetical thing. But it's very true.

(45:25):
The moment you start using an AI model to change an outcome that that model was designed to predict, it becomes extremely difficult to actually say that that model is now not working well. Because if your model was predicting an outcome that you are trying to avoid, and you have a really effective intervention, if your model's working, you will have less of that outcome, and

(45:46):
your model's AUC will get worse.
Your performance measures will look worse. And in fact, you might say, it's even mis-calibrated, I need to recalibrate it. If you recalibrate it, you will cause more people to experience the outcome, 'cause fewer people will receive the intervention. So, actually, I would say if you have a really effective intervention, you want the model,

(46:06):
if the model is designed to avoid the outcome, to actually get worse and be less well-calibrated. 'Cause that's a sign of success. On the flip side, if you are driving an outcome that's a process outcome, where when you see that tool, it leads you to do something, well then, if the tool is working or the intervention's working, you better be doing more of that thing in response to that model.

(46:27):
So, you would expect the AUC to get better, you'd expect it to get mis-calibrated in the other direction, and those would be good things and signs of success.
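A toy simulation of the feedback loop being described: a perfectly calibrated model triggers an effective intervention above a threshold, and the same model's observed AUROC and event rate then look worse, which is exactly the sign of success he describes. All numbers are made up for illustration.

```python
# Toy simulation: an effective intervention triggered by the model's own score
# suppresses the outcome among high-risk patients, so the observed AUROC and
# calibration of that same model degrade even though care has improved.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200_000
true_risk = rng.beta(2, 20, size=n)   # each patient's true outcome probability
score = true_risk                     # the model is perfectly calibrated at baseline

def observed_auc_and_rate(intervene: bool, threshold: float = 0.15,
                          relative_risk: float = 0.4):
    """Simulate outcomes with or without an effective intervention above threshold."""
    treated = (score >= threshold) if intervene else np.zeros(n, dtype=bool)
    risk = np.where(treated, true_risk * relative_risk, true_risk)
    outcome = rng.binomial(1, risk)
    return roc_auc_score(outcome, score), outcome.mean()

for label, flag in [("no intervention", False), ("effective intervention", True)]:
    auc, rate = observed_auc_and_rate(flag)
    print(f"{label}: observed AUROC = {auc:.3f}, observed event rate = {rate:.3f} "
          f"(mean predicted = {score.mean():.3f})")
```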
So, I think that, you know, everyone in their heart feels like we need to monitor models, especially predictive models, in this way. Um, but I think the reality is, it's just not that simple. Probably most health systems don't have the expertise to unpack what's

(46:49):
real and what's actual miscalibration as a result of dataset shift. And so, I think that that's why a partnership with vendors is required. Generative AI evaluation is an open book in 2025. I think we've kind of come to terms with how we internally plan to evaluate those tools. But I think that that's something where, um, you ask

(47:10):
different people, and they will give you totally different answers. Some people are extremely metrics-driven, some people are vibe-driven. And I think I probably have shifted more towards being vibe-driven, mainly 'cause if you look at, um, you know, things like, uh, uh, LMSys Arena. Yeah, right. Or LMArena, where you find that, you know, a lot of the evaluations are really

(47:31):
vibe-based, and you, you can actually differentiate models if enough people use them for enough different tasks. Very generally, I think that, uh, a mix of vibe-based evaluation and kind of the, uh—

Did you read Rick Rubin's book?

Rick Rubin's book, The Way of Code? Yeah. That, and I think things like HealthBench actually do help.

(47:55):
Yeah.
Or MedHELM, where you have a specific, um, scenario where you want the LLM to say something in response to something that a patient said, and you can test it to see, does it actually do what I want it to do, and judge it on, like, a rubric. So, I think that the larger answer to the question was, yeah, we have ways to evaluate it, but I think that it's not as simple as

(48:16):
we will do all the evaluation in a standardized way across all the different modalities of AI that we have, and all the different ways that scores get calculated and filed in different places. We are just not resourced, and I, I doubt any health system actually is truly resourced to do that at operational scale. So, I think we have to be smart about how we approach that.

I think you made a lot of subtle points there, and, uh, you also just illustrated

(48:40):
really well, I think, why you're, uh, a Chief Health AI Officer, 'cause it's very thoughtful and subtle points. So, so thank you, Karandeep. And I think, Andy, are we ready for the lightning round?

Yeah, just one quick point and then we'll go to the lightning round. Like, I also, one of my big memories of you is, like, working with Andrew Vickers and advocating for decision-curve analysis, and the fact that Karandeep is now advocating for vibe-based analysis is probably, like, one of the biggest moves—

(49:03):
System two to system one. I don't know.

Yeah, I think it's gen, I think it's for generative AI. I think for predictive AI, I really do think vibe-based is not enough. Um, and that's because for predictive AI, the vibes are always positive.

Got it.

And I think that for generative AI, what I think is actually the smartest thing to do is rubric-based analysis. Give people a, what would you want the language models to do? So, when I say we do vibe-based analysis, what I really mean by that is we look at

(49:26):
does the thing do what it's supposed to?

Mm-hmm.

Does it not do what it's not supposed to do? And can we break it?

Yep.

And those are, like, three of the areas we look at for a generative AI model. But a lot of that is through you typing and, and playing with it. I'd say that you can subtly figure out, like, oh, this is just not gonna work, because even though it meets our rubric, it does it in a weird way.

Cool.

And so that, that's the kind of part where it's that open-endedness that's left.

(49:50):
But I, yeah, I, I don't think vibes are sufficient, but I think that absence of vibes leaves a big gap.
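A minimal sketch of what that rubric-plus-vibes review could look like in practice for a generative AI tool: for each test scenario, graders (human or an LLM judge) record whether it does what it should, avoids what it shouldn't, and survives attempts to break it. The structure is a hypothetical illustration, not the governance committee's actual instrument.

```python
# Illustrative rubric record for reviewing a generative AI tool, mirroring the
# three questions described above.

from dataclasses import dataclass

@dataclass
class RubricResult:
    scenario: str
    does_what_it_should: bool
    avoids_what_it_shouldnt: bool
    survives_break_attempts: bool

    @property
    def passed(self) -> bool:
        return (self.does_what_it_should
                and self.avoids_what_it_shouldnt
                and self.survives_break_attempts)

def summarize(results: list[RubricResult]) -> str:
    passed = sum(r.passed for r in results)
    return f"{passed}/{len(results)} scenarios passed the rubric"

# Example usage with made-up scenarios
results = [
    RubricResult("patient asks about medication dosing", True, True, True),
    RubricResult("prompt-injection attempt in a portal message", True, False, False),
]
print(summarize(results))  # 1/2 scenarios passed the rubric
```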
Got it.
Cool.
Alright, we're gonna hop to the lightning round now. The answers are a mix of serious, non-serious. The goal is, uh, short answers, but we'll ask for elaboration

(50:13):
perhaps where warranted. So, the first question is near and dear to my heart, as a father of two young kids, and I think, having followed you on Twitter for a long time, you've been extremely successful at this. So my question is, what tips do you have for parents to get their kids into coding?

I don't think there's any tips. I think your kids naturally have to be into it, but I think that Scratch is a

(50:33):
great entry point into coding, because I think when people say, oh, I can make, like, this character walk two steps to the right and then do this, if that resonates with a kid, I think that kid will love coding. For some kids that resonates. For some kids it doesn't. And so, I think that you can't forcibly make your kids love it, but I do think that for a kid that gets that, then they're like, wait.

(50:56):
At some point they'll get to, how do I do this without blocks?
Mm-hmm.
And I think that was the moment where we kind of switched away from blocks, but the blocks make the kid understand what you can actually do with it.
And it actually makes it so that for the first time as a parent you can step away and say, why don't you go play with this?
And then really let kids build their creativity in the same way that they would kind of fall in love with something like art.

(51:18):
Cool.
Awesome.
Thanks.
All right.
Our next lightning round question. What is more therapeutic: going on vacation or
spending an afternoon coding in Julia?
Oh man.
So, I don't think you guys know this, but last week I was on clinical service working in the hospital, and I got one day of coverage, and in that one day

(51:41):
I flew from San Diego to Atlanta to Pittsburgh to give a keynote at JuliaCon.
And then flew to Minneapolis, back to San Diego, so that I could go back on service the next day, uh, and take care of patients.
Wow.
Uh, and so I would say that they are both therapeutic.
There's a time when creative energy builds up and you need to code?

(52:02):
I don't know that it has to be Julia.
I think it's whatever problem you're solving and whatever tool it is that's in front of you.
Um, I spent, um, the last couple weeks building a simulation model of our hospital in R using the simmer package.
And so, it's one of these things where, you know, when you have that creative energy built up and you can see the clear way that you would solve it,

(52:22):
and you just haven't had the time to sit and write code, I would much rather be coding than be on vacation.
But I think that once I've done that, then I would be like,
okay, now I go on vacation.
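As an aside, the core of a discrete-event simulation like the one just mentioned is simply patients arriving at random, competing for a fixed pool of beds, and staying for a random length of time. The sketch below is a rough analogue in Python using the simpy library (the model described above was built in R with the simmer package); the bed count, arrival rate, and length of stay are made-up numbers, not figures from any real hospital.

```python
# Rough, hypothetical sketch of a hospital-capacity discrete-event simulation.
import random
import simpy

N_BEDS = 20               # hypothetical inpatient bed count
ARRIVALS_PER_HOUR = 2.0   # hypothetical mean arrival rate
MEAN_LOS_HOURS = 8.0      # hypothetical mean length of stay
SIM_HOURS = 24 * 7        # simulate one week

wait_times = []

def patient(env, beds):
    """A patient waits for a bed, occupies it, then leaves."""
    arrival = env.now
    with beds.request() as req:
        yield req                                 # wait until a bed is free
        wait_times.append(env.now - arrival)      # record time spent waiting
        yield env.timeout(random.expovariate(1.0 / MEAN_LOS_HOURS))  # length of stay

def arrivals(env, beds):
    """Generate new patients with exponentially distributed interarrival times."""
    while True:
        yield env.timeout(random.expovariate(ARRIVALS_PER_HOUR))
        env.process(patient(env, beds))

random.seed(42)
env = simpy.Environment()
beds = simpy.Resource(env, capacity=N_BEDS)
env.process(arrivals(env, beds))
env.run(until=SIM_HOURS)

if wait_times:
    print(f"patients seen: {len(wait_times)}")
    print(f"mean wait for a bed (hours): {sum(wait_times) / len(wait_times):.2f}")
```

Swapping in real arrival and length-of-stay distributions is where a model like this starts to earn its keep.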
Awesome.
Nice.
Cool.
Um, next question.
What was your first job?
I tried to get a job at McDonald's, but they said, you can have this job as long

(52:43):
as you're willing to shave your beard, which at that time was very small.
But as a Sikh, I don't shave, and so I had said,
I would love to work here.
I just can't deal with the one restriction that you have.
So, actually it was not McDonald's, but it would've been, 'cause I think I applied to, um, McDonald's as a job in high school.
So, I think my first actual job, um, and I think I did, like, some paperboy stuff, was

(53:09):
um, believe it or not, being a resident.
Nice.
I would not have had McDonald's on my bingo card for first job.
So, that's a super interesting response.
Alright, our next question.
My son wants to work at Chipotle 'cause he loves Chipotle.
He is like, all the people who work there get free meals.
Um, and so he is like, that's my goal, is to just work there through

(53:30):
college and then just not have to, like, ever order any food.
Amazing.
Alright, I think you mentioned this earlier, uh, so you are a nephrologist, and so, uh, I'm just gonna give our listeners that context and then ask, uh, this question.
Was it a coincidence, uh, or was it destiny that Bud Rose, creator of UpToDate, was also a nephrologist?

(53:51):
I think it was destiny.
I think that a lot of the way that nephrology is both taught and experienced is through a bunch of math and a bunch of mathematical relationships between things like electrolytes, things like kidney function, different types of cells you have throughout the kidneys that really keep you in homeostasis.
'Homeostasis is the product and urine is, like, the pollutant' is, I think,

(54:15):
what Joel Topf, uh, who goes by Kidney Boy online, uh, had kind of created.
So, I think, uh, it's all about homeostasis.
And so, I think, you understand a lot of these mathematical things, and, uh, even now, I mean, there are things where, you know, someone comes in with electrolytes that are just completely off and you're mathematically thinking, what am I gonna correct first?
How am I gonna do it?

(54:35):
How does their clinical presentation relate to their labs?
Because it's not just about fixing labs.
You can't fix labs without knowing what's going on with the person that
created those labs in the first place.
So, there's a real connection there.
And actually, one of the most viewed UpToDate pages, from what I know, is actually the page on low sodium, or hyponatremia.
And that's because that's one of those situations where, you know, when people

(54:57):
come in with extremely low sodium, there's a lot that you have to do to unpack why it's there and how to fix it and to work up why it happened in the first place.
So, it's one of these things where I think math comes together beautifully with kind of human biology and medicine, and that's honestly what probably led me to nephrology myself, before I was really able to channel that energy into informatics.

(55:21):
Cool.
Thanks.
Um, so we mentioned at the top of the episode how you have taught yourself somewhere between 11 and 12 different, uh, programming languages.
So, this question is about skill transfer.
So, how has being a programming polyglot helped you outside of programming?
I think you see problems inherently as solvable, um, because I think, you

(55:45):
know, look, when I set out to make a, uh, Windows app, uh, that could download your email and put it on your iPod, the user interface was written in Visual Basic, which was something that I knew at the time.
The actual interfacing with the web was all written in PHP that was running

(56:07):
locally on your machine, which was something I actually did not know.
And so, I remember being like, when I came across this problem, I didn't assume that just because it wasn't there in Visual Basic that it wasn't doable.
I said, there's gotta be a way to do it.
And so, the first thing I did was I learned PHP.
The next thing I did was, well, how do I run this securely?
I can't run this on the web.

(56:27):
I don't wanna send all my emails to the web, um, and have, like, you know, my entire inbox coming through the web.
So, I figured out how to run it locally, and then I figured out how to get PHP to talk to Visual Basic using standard out and standard in.
So, it's like you would file it here and this would be monitoring that stream.
And it was something that no one I'd ever come across had made these two

(56:47):
languages talk to each other in that way.
But coming and looking at how people had done it for other tools, it made me understand that it was doable.
So, I would say I am someone who is way happier reading documentation than probably all of my non-programming peers.
Um, and I assume that if I can read enough about it, that it's learnable.
And I would say that, you know, many non-programmers will kind of need to

(57:08):
learn it in a lot of different ways.
And for me, if I can read something kind of front to back, I can feel like I can get the beginnings of trying to figure out how it works.
Um, and it's something that I think is visceral, that you just have to have faith in yourself and have faith in what's out there.
And, you know, then that's how I learned Julia: I literally one day just sat and, like, read a book and then read a bunch of the manual,

(57:31):
just like almost cover-to-cover.
And then I sat down to write my first lines of code.
And then of course it was an error.
And then you're like, okay, why is it there?
And then you kind of play with it.
But when you conceptually understand something, you can do the things.
And so that's, I think, a thing that really makes
you different as a programmer.
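For anyone curious what the standard-out/standard-in handshake described a few moments ago can look like, here is a minimal sketch. Both sides are written in Python purely for illustration; the original pairing was a Visual Basic front end driving a local PHP script, neither of which is shown here, and the commands are made up.

```python
# Hypothetical sketch: one process drives another over stdin/stdout pipes.
import subprocess
import sys
import textwrap

# The "worker" child process: it reads commands line by line and replies on stdout,
# the way the local PHP script replied to the Visual Basic front end in the story.
WORKER = textwrap.dedent("""
    import sys
    for line in sys.stdin:
        print("GOT: " + line.strip().upper(), flush=True)
""")

proc = subprocess.Popen(
    [sys.executable, "-c", WORKER],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

for command in ["fetch inbox", "sync to ipod"]:
    proc.stdin.write(command + "\n")          # parent sends a command
    proc.stdin.flush()
    print(proc.stdout.readline().strip())     # and reads the worker's one-line reply

proc.stdin.close()
proc.wait()
```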
Awesome.
Alright.
Our last lightning round question, and this is sticking with, uh, the nephrology

(57:52):
theme, uh, from the last one that I asked.

(57:55):
Um, currently, Cystatin C: underused or overhyped?
Oh man.
I think underused. I think creatinine, as everyone knows, is a breakdown product of creatine, which comes from your muscle.
And I think that we have a growing population of people who are chronically ill, and their muscle mass is, I mean, depleted.
And so I think that any way that we can kind of get at better estimates of how

(58:19):
injured your kidneys are, is helpful.
That said, what do you do with chronic kidney disease that is intermediate?
It's not super severe.
That, I think, is where the real question lies.
And I think if someone has diabetes, we now actually have medicines to treat that and prevent that.
Um, so I would say underused from a standpoint of understanding kidney function, maybe overused from a standpoint of what we can do about it, independent
(58:39):
function maybe overused from a standpointof what we can do about it, independent
of knowing other things about someone that we have medicines to treat now.
Awesome.
So, I think we're gonna, um, zoom out and ask you one or two big-picture questions before we wrap up.
Um, and so, we kind of like to end on a positive note.
And so, the first question I'll ask you is, uh, we're gonna get the

(59:00):
pessimistic stuff out of the way first.
So, outside of health care, things are changing super-fast in AI, as we've discussed; in some areas of health care, they're starting to change.
As you're watching this change happen in the health care system, what gives you the most concern?
What are you most worried about going wrong?
I think the two things that worry me most, one is that I think funding

(59:21):
models are gonna take a long time to catch up to new care models.
So, I think that we can come up with the best way to do remote patient monitoring that actually keeps people outta the hospital.
If we don't have a way to make that revenue neutral or revenue positive, it's just not gonna happen.
And I think that when you combine that with the deep cuts that are likely coming to Medicare and Medicaid, I'm really worried that all the AI in the world

(59:45):
won't be able to fix bad health policy.
And so I think it's one of these things where we need the funding models to really catch up so that we are incentivized to do all we can do to keep
people home and to keep people healthy.
Yeah, I mean, I totally agree.
Like, we pay for treatments; we don't pay for prevention.
It seems like that, coupled with the cuts that you mentioned, could be a perfect storm for, um, lots of bad stuff in health care.

(01:00:11):
And a lot of the AI that's coming is patient-facing AI.
It's things that inherently have the capability potentially to keep people healthy at home or to keep people who are chronically ill getting care at home without having to come into a clinic or to another care environment.
So, I think that really the more we can do to help there,

(01:00:31):
will really help us build the capacity we need to take care of the new generation of people who are really kind of surviving because of the wonders of modern medicine.
Yep.
Totally.
Okay, so now let's turn that around: what are you most excited about?
What gives you the most cause for optimism in what you see happening?
I think integrations, um, between different genres of technology.

(01:00:54):
It's always been stuck in my head that the EHR is just outta bounds and does not connect with other things, and that your productivity tools just don't connect with your other things.
You can work in those playgrounds, but you can't work outside.
And I think that the Model Context Protocol and some of these other kinds of ways that you can plug-and-play things to work completely across genres, I

(01:01:15):
think has the capability to actually enable patients to build things that would be really useful that patients could never have built before.
So, I think we kind of undervalue what our patients and our general populace can do to actually improve their own health.
And I think that if you don't give them API access into things, they can't do anything.
And so, I think what I'm encouraged by is that we're seeing more and more

(01:01:39):
things be able to talk to each other.
I just hope that they're able to talk to each other in an easier and easier way, such that we're able to get new classes of products that don't exist today because those integrations would never have been possible before.
I think that's a great note to wrap on.
Thank you so much for being on AI Grand Rounds, Karandeep Singh.

(01:01:59):
This was a pleasure.
Thanks guys.
Yeah, thanks for coming.
This copyrighted podcast from the Massachusetts Medical Society may not be reproduced, distributed, or used for commercial purposes without prior written permission of the Massachusetts Medical Society.
For information on reusing NEJM Group podcasts, please visit the permissions

(01:02:21):
and licensing page at the NEJM website.