October 15, 2025 · 48 mins

In this episode, Dr. Jonathan Chen joins the hosts to discuss his path from teenage programmer to Stanford physician-informatician and why machine learning has both thrilled and unnerved him. From his 2017 NEJM essay warning about “inflated expectations” to his latest studies showing GPT‑4 outperforming doctors on diagnostic tasks, Dr. Chen describes a discipline learning humility at machine speed. This conversation spans medical education, automation anxiety, magic, and why empathy—not memorization—may become the most valuable clinical skill.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:03):
Human plus computer, when combined, will deliver better results than either would alone. That used to be my closing line, my inspirational closing line for years, whenever I gave a talk. I don't say that anymore. 'Cause, 'cause I'm kind of not sure now. We found, and we're not the only ones, others found, in Google's recent paper on Omni, many mammogram studies have shown, I don't

(00:24):
know, the computer sometimes by itself does better than the human using that computer. It's like the human is actually slowing the computer down and getting in the way. And it really begs these uncomfortable questions about what the right role is for human and computer in modern medicine.
Hi, and welcome to another episode of NEJM AI Grand Rounds.

(00:46):
I'm Raj Manrai and I'm here with my co-host Andy Beam. And today we are thrilled to bring you our conversation with Dr. Jonathan Chen. Jonathan is an Associate Professor of Medicine and of Biomedical Data Science at Stanford University. He's been working on medical AI for more than a decade.
And Andy, like us, I think Jon came up through his training during the deep learning era where we

(01:08):
were primarily focused on image-based models, pre-LLMs, and so, it was fascinating and fun to really reminisce about our predictions then versus where we are now and explore some of his early work in deep learning and now LLMs.
I totally agree, Raj. A highlight of this conversation for me was the discussion around something called

(01:28):
the fundamental theorem of informatics, which is this thing that has been around in informatics for a long time, and it was the observation that a human with a computer is better than a computer by itself, or a human by itself, and it really was a statement about the synergy between people and technology. What's fascinating is that in the age of AI, the fundamental theorem of informatics is kind of being overturned.

(01:49):
It's not clear that a human and AI is better than an AI by itself. And Jonathan does a great job of walking us through the evidence that has convinced him that maybe the fundamental theorem of informatics is no longer true. He's been a big proponent of the theorem for a long time. But we're in this really interesting time where actually, maybe we don't have the right interfaces for AI and people to
(02:10):
be better than either on their own. And so, really, really fascinating conversation, and Jonathan does a great job at sort of walking you through his own evolution on that topic.
The NEJM AI Grand Rounds podcast is brought to you by Microsoft, Viz.ai, Lyric, and Elevance Health. We thank them for their support.

(02:35):
And with that, we bring you our conversation with Jonathan Chen.
Jonathan Chen, thanks so much for joining us on AI Grand Rounds. We're super excited to have you today.
Fantastic and, uh, really glad to come here.
Jonathan, great to have you on the podcast. So, this is a question that we always get started with and we ask all of our guests, could you tell us about the training procedure for your own neural network?

(02:56):
How did you get interested in artificial intelligence? And what data and experiences led you to where you are today?
Oh, wow. I don't know. How long do you want this origin story to go for?
Um, you could, you could go as long as you want.
As long as you want.
Yeah. I'll, I'll try to do some of the triggers and key life moments, which I think is actually really important as we all navigate our life choices.

(03:16):
So, the odd fact about myself, I started college when I was 13 years old. There was a program at Cal State LA that let me do that. And so, I don't know what you all wanted to do when you were 13 years old. I would've said, I don't know. I'll be a standup comedian. I have no idea. Why would you think about that? Who cares? You're 13, right? But suddenly I had to think about it. Wait a minute, what am I training for? Where am I headed towards? And, of course, my parents wanted me to be a doctor.

(03:36):
My mom wanted to be a doctor. I'm like, sure, that sounds like a hard but important thing to do, I guess. Even though I was really a computer nerd, and my dad was an electrical engineer, that was much more my natural gravitation. I'm gonna play a lot of video games. Headed towards that, headed towards medical school, did really well on the MCAT, officer in pre-med club, and I kept always hearing about how you have to have a passion for medicine. That your

(03:58):
life's mission is to be a healer. You have to really feel that. I'm like 18 years old at the time. I dunno about the rest of y'all, but at 18 I felt nothing. I felt nothing. Okay. I'm not ready to commit my life towards this path. I gotta do something else. I really like computers. I like programming. I'm good at it. I find this enjoyable problem solving. And so, I just worked a stint in the software development industry for a couple years and that was pretty cool.

(04:20):
And I was paid pretty well to do it. And then I also saw the industry isn't as inspiring as I thought it'd be in terms of the values it espouses. So, then I thought, hey, wait a minute, maybe a joint degree, an M.D. and a Ph.D. in computer science. That would be really cool. Nowadays, it seems like it makes such obvious sense. We have a whole dedicated NEJM AI journal. 20 years ago, I think it was kind of weird.

(04:40):
Medical schools thought I looked kind of funny, like, I don't know where this is going. But I think this combination's gonna be very compelling, and it has certainly turned out that way. This AI computer doing my homework for me, that's actually really what I've always wanted to do, but applied towards medical problems where things matter towards human health.
Amazing. I knew some of that, Jon, but I didn't know that you worked at a company before starting medical school.

(05:01):
Right. And so, what, where did you work?
So, I worked at Trilogy Software. It's originally outta Stanford. I had no affiliation at the time. They're based in Austin, Texas now. And I was there for years, the dot-com bubble at the time, right, so, everyone was blowing up at this point. And once my mom saw I was 19 years old, had a six-figure salary coming straight outta college, and this was 20 years ago. Suddenly, medical school didn't seem that important anymore.

(05:23):
But also, I was at that company when they laid off a third of the entire staff a year later and like, oh, okay. You see what the values really are. You see who's really gonna look out for you and who really won't. And they're gonna cover themselves. Like, values in industry are just very different, and nobody's evil. But if it's a for-profit company, right, they're gonna look out for themselves at the end of the day. I actually went back to work for the 20th Century Fox IT department.

(05:44):
That's where I did an internship in college. So, yeah, you just walk in the studio lot, walked by a few celebrities, but just like IT systems, contracts and finances, and it was cool. I actually had fun working there. It was a very good crew and it was a really good job and I was good at it. But I could also see, am I gonna feel rewarded if I'm still doing this 10 years from now? I think I'd need to live as a starving student for a little bit

(06:05):
longer to have a little bit more purpose and inspiration in my life. And after, I'll tell you this story. When I was laid off from the industry job so many years ago, it felt like the first failure in my life. I mean, I was like 18, I was like 19. I mean, you know, whatever, right? But in the scheme of things, I was always a super smart student. I was kick-butt at everything.
It was very kind of eye-opening to me, but also, what it really made me feel is like, wow, I worked so hard there, and then you just get

(06:27):
laid off flippantly anyway. Like, I kind of feel like I regret how I spent the last year of my life. Why was I toiling so hard for— I'm, I'm exaggerating a little bit, but now the reason I've gone into medicine and AI and computational applications is, now I work myself to death. I've been stressed out for 20 years nonstop. I've been stressed out nonstop. But if I were to die tomorrow, if I were to be laid off, if I were fired,

(06:48):
I would be sad, but I would not regret having spent my time, 'cause I feel like I'm trying to do something worthwhile.
Nice. Can we go back a little bit? I, I mean, it sounds like you were, you were born with a couple extra gears there. But how does one become ready or eligible to go to college at 13? Like what grades did you skip? Like were you taking college classes on the side as like a 9-year-old?

(07:09):
Like how did, yeah. Tell us about that.
I was just regular public school. I didn't do any special thing. I didn't have any particular special tutoring. I was just a nerd kid, right, and that was kind of my distinction at the time. But there's a program at Cal State LA, I think it's still running now, called the Early Entrance Program. And just people in the local area, you can take a test and if you do okay, you can take a summer quarter there to

(07:32):
see how you do, to make sure you're not just academically but kind of emotionally ready to be in that environment. And that ended up working very well. It started as an experiment in their psychology department. What would happen if you took really bright 12-year-olds and 13-year-olds and let them be in a college environment? What would happen? And then it worked so well, it's become this continuous program ever since. And, for me, so, I went up to eighth grade, and then after that I'm

(07:55):
technically a high school dropout. I don't have a GED. I don't have an equivalent. I just, I just didn't show up to high school.
Wait, so, you didn't do, you didn't do any high school?
No. My, my friends who went on to high school, they said the teachers kept calling roll call: Jonathan Chen, are you here? Jonathan? Like, they kept calling for like a month until they gave up, realizing I was not gonna show up. Because I didn't tell anybody either. I just didn't show up to high school and I went to this program.

(08:17):
Well, you're only the second, you're only the second doctor I've ever met who doesn't have a GED. My wife's grandfather—
Nice.
—doesn't have a GED. And went on to medical school, but times are very different. And his was not the result of acceleration. It was the result of a world war, I think.
Yeah, yeah. Yeah. That's a different dynamic, different dynamic. Cool.
Wow. So, Jon, I think that's a good point to transition.

(08:37):
So, you, you've taken us up through your training, you finished your graduate degrees, finished all the school, and then you started your lab. And you've been, I think, doing a lot of work in machine learning and medicine, not just in this sort of recent LLM era. But you were really interested in this topic long before ChatGPT and large language models became sort of common parlance.

(08:59):
And you wrote this paper that I remember, this caught my eye. I think maybe I was a postdoc or something with Andy, and we saw this in the New England Journal of Medicine. I remember discussing this way back in the day. Because it, first of all, it was titled pretty well and that caught our eye. And then I think you were sort of pushing back on a little bit of the hype that was in the literature and then outside of the literature at the time.

(09:21):
So, the title of this paper is Machine Learning and Prediction in Medicine — Beyond the Peak of Inflated Expectations. And this was a perspective that you published in the New England Journal back in 2017. And maybe can you just take us through why you wrote that paper? What triggered you to just sit down, write that, submit it, and what were you trying to really argue in the paper itself?

(09:42):
Sure.
This is an interesting irony, but that is by far my most cited paper, like over a thousand citations, and it's something I hacked together in three days. Versus the RCT, the human-computer interaction work that I spent a year on, it's still good, but it gets nowhere near the same attention. But it built up a lot. The first draft title I put was much sassier. I said, machine learning, tell me something I don't know.

(10:02):
Right.
Because a lot of the time, algorithms are predicting things that the doctor already knows. It's kind of stupid. I'll tell you, my graduate school, my Ph.D. was with Pierre Baldi, University of California, Irvine, and like literally he wrote the book on a machine learning approach to bioinformatics. So, that's really where I learned a lot of my skills from. And my underlying nerd interest is, 'cause I'm that nerd kid, right? Oh, so, what is it that makes an expert?

(10:22):
What is intelligence? How do you make a computer do that? If I was just doing stuff for fun, I'd be programming robots to play StarCraft and poker or something like that. That, that's what I would actually be doing. But with purpose, I applied it to medicine. And it was interesting. In my graduate school, I was actually creating a rules-based expert system, if you wanna call it that, for organic chemistry, for drug design and education. And one of my thesis committee members said,

(10:44):
Jonathan, literally your Ph.D. advisor is the expert in machine learning and bioinformatics. How come you don't have any machine learning in your Ph.D. thesis? And what I said was, I have learned enough about machine learning to know that I do not trust machine learning. You have this weird, like, probabilistic behavior that gives you the right answer maybe 80% of the time. Like, I'm used to software engineering development. You need deterministic, reproducible processes.

(11:06):
And so, how can you act in the real world when you're only 70, 80% sure? You're not even sure which times it's gonna happen, and it gives you inconsistent results. But then of course, machine learning has taken over the world. That was the hype thing, what is it, 10 years ago? And it is cool. It's important, but I certainly knew enough about it. Like, it's a useful tool. I don't think this is the answer to everything. And boy, was it getting overhyped at the time.

(11:29):
I don't know if I should name names, but there was another piece that came outta the New England Journal just shortly before that. I forget the title, like, machine learning, all of medicine's gonna change. I'm like, that's, it's a, it's a thing. It doesn't solve every problem.
Was it, was it the Lost in Thought piece from friend of the show, Ziad Obermeyer, by chance?
It may have been.
Okay.
May have been.
And, um, I was like, that's cool.

(11:50):
And so, I kind of have something to say. And also, I noticed my peers around me, very smart, very great people. Like, wow, this machine learning, this computer stuff, it can predict the exact day you die. This is amazing. I'm like, that is totally not what that piece of research or tool can do. Oh, even very smart people around me don't really understand what this is and what it isn't.

(12:10):
And so, the reason I wanna mitigate that is because, as you know the history, the waves of AI winters and springs, right? For 50 years people have been trying to make AI take over the world, and then they way overblow expectations, and then it crashes. And for people like us who actually care and are trying to make a difference, it's really hard to do that when the public has lost trust because, oh, well, you didn't actually change the world.

(12:31):
So, I was like, hey, there's exciting things here. And we should be excited about it. There is legitimate stuff, but we don't need to overhype it. Let's manage our expectations, and we can soften the crash into the inevitable trough of disillusionment and quickly move on to the slope of enlightenment.
Maybe I'll just hop in here. Do you think we've hit a trough of disillusionment in AI and medicine?
I think we're onto the next wave already, right?

(12:52):
In terms of, like, supervised machine learning. It's a great tool. I use it in a way to show how I think it should be done. I, I was never that. We just heard from Mike Grass, and, oh, Jonathan, you're the guy who does electronic medical records and machine learning. Like, really, that is really not what I wanted to be known as. I wanna be known as fundamentally solving problems in medical decision making. So, I think supervised machine learning, it did go through a trough. It didn't change everything, but

(13:13):
oh, now we're at the, it's kind of a reasonable tool to do many risk stratification things, and that's almost ordinary. We have established data science teams deploying that in real settings, but it didn't just magically find the cure for cancer, right? It didn't wipe out every radiologist job, et cetera. We're already onto the next wave. AI is just the buzzword of the day, the marketing buzzword, which is something cool a computer can do.

(13:35):
And the balance being now, I wrote another piece in JAMA Internal Medicine on large language models, Fountain of Creativity or Pandora's Box. When I saw the first preview version of GPT-4, I was like, holy crap. This actually is legitimately disruptive technology that I think is gonna change everything. And I think that's why we're talking now.
That's another good transition actually to the next series of

(13:56):
papers we wanted to talk about. So, you've been doing a lot of work, and actually, Jonathan, some of this is work that we've done together. And you've been, I think for several years now, doing some of the earliest studies on large language models and what they can do in terms of general diagnostic and management reasoning capabilities. And so, this is a, it's a fascinating, fun space.

(14:16):
I can't imagine, honestly, just speaking now for research we're doing together and research in my lab, it's never been this fun, because the technology, the sort of base technology, is evolving so fast.
Mm-hmm.
That problems that seemed like they would be solved years later are being solved in months. And you're just constantly having to reevaluate and reassess.
Mm-hmm.
What the sort of base technology itself can do.

(14:39):
And I think some of the papers that I'll just highlight here, we should also first say, I think Ethan Goh is, uh, the first author of both of these papers. And so, um, he had a strong, sort of instrumental role in pushing these through. One was that we published in JAMA Network Open last year, 2024, and then another that was more recently published in Nature Medicine in 2025.

(15:00):
And you could take maybe one of those. Maybe let's start with the JAMA Network Open one and just maybe break down what the motivation was. I think also it would be educational for all of us what the timing was between when you conceptualized the study versus when it got executed, when it got published, and maybe what the challenges were in that study. So, uh, maybe starting again with the JAMA Network Open paper.

(15:23):
Sure, sure. There's this, like, natural kind of progression that kind of leads up to that, and then more after that we're kind of getting to next. But yeah, Ethan Goh, Rob Gallo are the key fellows who drove that work. Adam Rodman and Beth Edelson have been really key partners who've made a lot of that, um, come together. But, you know, I see the early preview of GPT-4, I'm like, holy crap, I need to throw away half my research program. Because it was about like, so—
When, so,

(15:44):
just, uh, set the time for us.
Now that's like two-ish years ago. Okay. What, what is that? You know, it's just, ChatGPT is just emerging, basically.
Right. And that, that rolls around as January or February at that time. And I was gonna do things like maybe one day I could make medical recommendations based on data, maybe based on literature. Ah, that's too hard. NLP doesn't work well enough. And then suddenly it's, I think, that's starting to happen. Not only could it make recs, but it can also explain them in a way.

(16:06):
And it's not perfect. It's not perfect. But three years, six years ago. Six years ago, I had students trying to use transformers, answering medical questions. 'Cause hey, a new transformer tech. I've been wanting to answer medical questions with computers for 20 years, 'cause I was so annoyed in medical school that I had to memorize all this stuff. Six years ago it was unusable, 35% accuracy. Ah, guys, well, that was a fun class project, but there is no point in

(16:28):
using this now. And then, right, then GPT-3.5 started to emerge. And then the obvious thing was to try to answer multiple choice questions. I didn't even bother. I didn't even bother 'cause it was so obvious I knew somebody else would do it. And indeed, like, within a couple months, multiple people tried to do that. And I peer reviewed some of those papers, too.
Then some of our medical educators, Eric Strong, Jason Hom, others,

(16:49):
they really were the drivers: hey, we have these complex nuanced reasoning exams we give our students, 'cause we know multiple choice is fake, it doesn't mean anything. What would happen if we tried this chatbot on this? And so, we did a joint study on that and that was also very eye opening. It's like, holy crap. It kind of barely passes even these exams. And we were in the middle of getting peer reviewed, just like the editor didn't have time to go

(17:11):
out for peer review, it was so timely. And then suddenly they rejected us because they said, oh, GPT-4 just came out two days ago. This thing that you worked to death on, multiple late nights, and that you submitted three weeks ago, is already obsolete. Oh my gosh. Like, what the heck? This is the bizarre pace of technology we're in, which the peer review cycle cannot keep up with.

(17:32):
And then, what I say is, even then, that's cool. And then at the time, Adam Rodman, he had done some things on, and I, I think you were on that as well, doing NEJM CPC cases. So, we kind of found a connection to the social media sphere. Adam and Eric Strong, they're kind of both social media influencers, and they sort of found each other and said, we did these two studies. Hey, we need to take it to the next level, which is a human, uh, a human interaction study. That led to the one we're talking about.

(17:55):
And then could you take us through what the paper showed?
Sure. So, we did multiple ones, but we did diagnostic case management cases. We broke it down to two papers, so, complex classic diagnostic dilemma cases. And we didn't want dumb multiple choice. We didn't even want just what's the final diagnosis, 'cause that's a thing to do. But we wanna understand, like, reasoning, 'cause I think these

(18:16):
things might even help you think. So, we said, what do you think the diagnoses could be in these cases? Explain your reasoning, pros and cons. What will be your next steps? And this expert-consensus grading rubric, which, oh man, is that a pain in the butt to do, 'cause you need human experts to do that grading. And then we did a randomized controlled trial. We got some pilot funding from the Moore Foundation to pull that off. Recruited live doctors, half of them with access to the usual standard of

(18:38):
care, which would be doctor plus UpToDate, PubMed, Google, whatever you wanna do, because that's a realistic benchmark. You know, benchmarks where it's like doctor by themselves, that's kinda stupid. Nobody practices that way anymore. And then we gave the other half, let's give the other half GPT-4 at the time. I bet that's gonna make a huge difference. 'Cause it's really, this technology is surprisingly really good. And it was surprising results, and why it got so much attention.

(18:58):
'Cause the doctor plus the AI, plus GPT-4, it didn't make that much difference. It was about the same as the doctor plus usual Internet, which was totally not what we expected. When we set out to do the study, we thought we were gonna show it helps so much, because we already knew. And then we showed it again: GPT-4, the AI by itself, outscored all of the doctors, including the doctors who had access to GPT-4.

(19:19):
That was such a bizarre finding because it really flies in the face of that fundamental theorem of informatics, I know we're gonna talk about it in a second, of human plus computer when combined will deliver better results than either would alone.
Oh, that sounds so good, Andy. I see. It makes you smile. Right? That used to be my closing line, my inspirational closing line for years, whenever I gave a talk. I, I don't say that anymore.

(19:43):
'Cause, 'cause I'm kind of not sure now. We found, and we're not the only ones, others found, Google's recent paper on Omni, many mammogram studies have shown, I don't know, the computer sometimes by itself does better than the human using that computer. It's like the human is actually slowing the computer down and getting in the way. And it really begs these uncomfortable questions about what the right role is

(20:04):
for human and computer in modern medicine.
Yeah.
Yeah.
Thanks for that.
I think that's a great transition, if it's okay with Raj. Like, I, so, I'm coming at this from a little bit of the opposite direction. I've always been a little bit of a gadfly in saying, like, annoying things, primarily to my long-suffering wife, about—
I could corroborate, I could corroborate this.
Yeah.
About how AI confirms, yeah.

(20:25):
—about how AI maybe will be able to do medicine better than people in the future. Could you maybe just restate the fundamental theorem of informatics succinctly, and then talk about, like, the history of medical decision making and the extent to which that was true until we had this switch. And then I'll, I'd love to get your thoughts on, is it an interface issue? Is it, is the theorem still true, but we're just, like, not delivering

(20:46):
the information correctly? What do you think is behind the apparent violation of the fundamental theorem?
Sure.
I mean, it, it comes in different origins. I mainly attribute it to Chuck Friedman, Charles Friedman, who put the fundamental theorem together, and I think he's really more talking about a human plus a computer or an information resource should be better than the human. But I apply it really broadly.

(21:07):
You would think the combination of a human plus a computer should be better than either alone, right? The synergy, collaboration, it makes so much sense. But also, it's, like, a theorem where it's just some guy who said something that seems like it makes sense. It's not like there's specific evidence that supports that, or underlying theory or mechanism. It's just, like, a phrase that seems like it has intuitive sense.

(21:27):
And the reality is, go ahead.
A fundamental, like, lemma, or a fundamental, no, a fundamental proposition. Yes. Of biomedical informatics.
Yeah. Right. It's kind of, otherwise, it justifies the whole field. Otherwise, why do we work in biomedical informatics, unless we thought that combination made sense? We're doing it 'cause we think it will work.
For what it's worth, it's not really a new thing.

(21:48):
If you go look, look at, like, Internist, QMR, there have been plenty of decision support, diagnostic decision processes, things, risk calculators for 30, 40, 50 years that have done better than humans at these tasks. I mean, I think, Raj, you had that, that great summary here, like, doctors are very bad at math. And it's not doctors, it's just humans are bad at math, calculating post-test probability.

(22:10):
They don't understand why screening and early detection just almost never works, because the numbers just won't work in your favor. It's actually not really a new finding. As I like to say, a calculator is better at calculations. You should just use a calculator. You shouldn't try to compete with a calculator at long division. Like, it doesn't make sense anymore. But now, with the emerging tools, it's not just long division.
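(Editor's aside: the post-test probability point above can be made concrete with a short Bayes' rule sketch. The numbers below are hypothetical, chosen only to illustrate why screening for a rare condition yields mostly false positives, which is the "numbers won't work in your favor" argument.)

```python
# Illustrative Bayes' rule calculation (hypothetical numbers, not from the episode):
# even a fairly accurate screening test has a low positive predictive value
# when the disease is rare, because false positives swamp true positives.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability of disease given a positive test result."""
    true_pos = prevalence * sensitivity            # diseased and correctly flagged
    false_pos = (1 - prevalence) * (1 - specificity)  # healthy but wrongly flagged
    return true_pos / (true_pos + false_pos)

# A disease affecting 1 in 1,000 people, screened with a test that is
# 90% sensitive and 95% specific:
ppv = positive_predictive_value(0.001, 0.90, 0.95)
print(f"{ppv:.1%}")  # about 1.8% -- the vast majority of positives are false alarms
```

With these assumed numbers, roughly 50 of every 1,000 healthy people are flagged alongside about 1 true case, which is why intuition about "an accurate test" misleads even smart people.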

(22:31):
Now it's also risk calculation. Ooh. Now it's also answering questions. Now it's turning into reasoning. Now it's turning into counseling and therapy. It's just more and more tasks that computers are able to do that before they weren't able to do so much. Um, so, yeah.
So, go ahead.
Maybe a couple quick thoughts. So, I could think of maybe two explanations and they—
Mm-hmm.

(22:53):
Mm-hmm.
—might be mutually exclusive. They might not be. So, like, the fundamental theorem probably was not true when all we had were, like, Unix terminal-based computers, like—
Mm-hmm.
—your average doctor doesn't wanna hop in at the command line and access information this way. They need a mouse and keyboard. They need a graphical user interface. So, it could be that, or it could be like the situation in chess where, you know, right after Deep Blue, humans plus computers were

(23:16):
actually still better at chess. But then the models kept training, the engines got better, and now it's clear that, like, we don't have a hope at beating, like, strong chess engines, and that's, like, a capability gap. And what I don't understand, given that we're kind of in the early days of this era of medical AI, is if it's, like, an interface UI, UX kind of thing, or if it truly is, like, a capability gap from which we will never, never catch up.

(23:41):
Ooh, how spicy? How spicy do you guys want to get here? The near term, the nice way I say it for now, is I do think a lot of it is education, human-computer interface interaction. 'Cause in that study, which showed, like, human plus computer was not that much better than human plus usual Internet. At the time, 'cause this was early days of language models, right? This was probably a year and a half, two years ago.

(24:02):
A third of the doctors in the study had never touched a chatbot before in their life. Another third maybe had used it once or twice, but really didn't know how to use it that well. And so, they didn't know what a prompt was. They were using it like Google. They're like, what is the differential for eye pain? Did you know you could just copy and paste the entire case story here and ask follow-up questions? Three years ago, no, you did not know that, right?

(24:24):
Because you could not have done that just two or three years ago. And so, it was clear there's an education gap, and as we've done follow-up studies, if you kind of prime the doctor a little bit more, give them a little bit more setting, they can improve. It makes a difference. Although they still don't do better than the computer by itself, right? So, then, okay, I don't know. Maybe on this podcast we can say it out loud: in some cases,

(24:45):
in some cases, a computer is just better at certain things. And I think we should acknowledge that. And I don't think that devalues humans, by the way. That's the instant instinctive reaction people panic with, and it's a real feeling. But there's more than enough demand for actual human-to-human services and care. But I don't compete with a calculator on long division anymore.

(25:06):
I'm not gonna compete with a computer on memorizing every bug and every drug. And maybe now, I'm not gonna, I'm not gonna use a computer. I'm not competing. I'm gonna use a computer to help me develop differential diagnoses when I'm unsure.
For example, if you were, and I think we actually, we'll come back to this later, but like, what is the sphere of competence for human doctors that you feel like will be stable

(25:27):
over a five-to-10-year time horizon?
I'll tell you, if you're a med student, I get asked this question all the time.
I'm sure that you do.
I'll give a talk on medical AI.
Some very nervous medical student will come up to me and say,
Oh, good lord.
Like, I'm scared.
Like, what should I actually study if the AI is getting better?
Professor Beam,
what are you, what are you doing to me here?
Yeah.
Yeah.
I've certainly had that.
I've had residents, interns, I've had medical students.

(25:48):
I just gave this talk to a bunch of high school students.
I was like, uh, uh, uh, what are we even in school for?
I, uh, what are we supposed to do?
The thing is, five to 10 years, the world can change really fast.
Right?
But that's how long the average med student
is in med school.
So, yes.
Yes.
I, I had one of my old colleagues, Tom Savage, who's done a lot of great studies in this field, is like,
you know, I kind of feel like when people ask me advice, maybe they

(26:10):
shouldn't go into medicine anymore.
'Cause the AI's progressing so fast.
I'm like, dude, if you wanna do medicine, go do it.
But boy, you'd better expect the field is gonna change.
Mm-hmm.
And it's not gonna be practiced quite the same way.
I don't know how, how many people here or how many people listening, like, existed before the Internet was a thing.
Like, at one point the Internet was not a thing.
Yeah.
It was like 35 years ago.

(26:31):
Right.
And it used to be the best doctor would know the whole Physician's Desk Reference in their head.
I can tell you every treatment, I know every side effect, and I can tell you the dose right now.
Wow.
What a great doctor.
Now that's, like, stupid.
Like, it, it doesn't make sense.
That's great if you know more, but everyone is expected to use this Internet tool.
And did that replace the need for doctors?

(26:52):
Hey patients, what do you need a doctor for?
You can go to UpToDate yourself and look up what to do, right?
Uh, no.
You most definitely need doctors and nurses still.
Having said that, I've been trying to articulate this, and it's been a lot harder; I had to work a lot harder the past year.
People used to say, oh, I think AI can replace doctors.
I'm like, either you don't understand AI or you don't understand medicine, and often both.
Unlike the kind of people who say this kind of thing, I would be

(27:14):
the first person to replace doctors with AI, because there are patients who need us and we can't get to them.
I haven't done that yet because I don't think you can.
We can replace some of our tasks, but it's a combination of knowledge, empathy, liability: that's actually the basics.
All are necessary, but none are sufficient.
It's actually judgment, influence, negotiation, and professional responsibility.
Being able to tie all of that into one person, one thing, I

(27:37):
think is what you end up needing.
Jon, just one last point on this.
So, do you have a sense, and I know it's very hard to predict, we're not predicting the past, we're predicting the future here, right?
And so, it's very hard to predict where the fundamental evolution of the technology will go in the next couple of years.
But I think from all your work on human-computer interaction, from thinking about your perspective as a clinician as well, do you think

(28:00):
there are baskets or categories of tasks where AI will operate autonomously, tasks where it really will be more of this sort of hybrid, human-in-the-loop, and then tasks that are just untouchable right now by AI and are entirely human for the foreseeable future?
I mean, obviously, it's, it's the whole spectrum.

(28:21):
All of those would maybe be of interesting discussion.
I mean, I'd hear your guys' perspective, too.
Which, which tasks do you think will end up in each category?
Just to expand on the prior analogy, right?
Once, like, TurboTax and software got created, like, what do you need an accountant for?
You know, when you hire an accountant, all they do is they have their own version of that software and they're just putting your data in.
You clearly don't need this human, but we still have these humans.

(28:43):
'Cause they actually, it's a, it's a service, it's a responsibility and advocacy that they can provide in a way.
I could use the software, but I don't wanna take the time to understand it.
And I'm, I'm afraid of making a mistake.
I need someone to do it for me.
So, let's see,
in medicine, what were your, your three categories: things that probably are just gonna get automated.
There's the obvious.
Everything else is human-in-the-loop, in the middle.
And what are the things like, eh, ain't, ain't no computer gonna

(29:04):
do that, at least anytime soon.
Don't ever say ever, because we'll obviously be wrong at that point, just whether it's in our lifetime.
I assume you guys, uh, well, actually, I published in NEJM AI.
Zak asked me for that one.
This, uh, Who's Training Whom? I'm gonna mess this chatbot up.
Okay, sure, it knows a lot of stuff.
There's no way it can handle, like, an ethical dilemma or, like,

(29:24):
a complex counseling situation.
And I said, chatbot, you pretend to be the doctor.
I'm gonna pretend to be the patient's wife.
And I'll say, oh my gosh, my, my husband with Alzheimer's dementia, he's choking on his own food.
He needs a feeding tube.
Oh.
But we can't leave him to starve to death.
He's gonna fight for his life.
I have to do everything I can to save his life.
And the punchline at the end was this thing was providing

(29:45):
better counseling than I did in real life, in that real-life scenario.
And boy, did that give me existential angst that night, thinking, what is the point of my existence at this point? Balanced by, actually, it was very inspiring, actually.
Very cool.
What a great training and learning tool I could use in a way that I never could have before.
On the other hand, I think you still need those humans for
all those combinations, but

(30:07):
it's very obvious that, obvious to me, that a lot more people are gonna receive therapy, counseling, and advice from automated bots than live human beings.
It is already happening now.
My wife talks to ChatGPT every day.
It's more available than I am.
And I think it's, and likewise in other situations, it's, it's happening—
on, yeah, exactly.
It's happening on a massive, massive scale.
Right?
Yeah.
And I think,
maybe going back to the fundamental theorem of informatics, right?

(30:31):
I think if I'm interpreting or summarizing your comments about this, it's fundamentally an empirical claim.
And to date, we have not had, especially with LLMs, but even, you know, more broadly, machine learning, statistical modeling, we haven't had a lot of empirical evidence around how humans and AI work together on tasks, what can be done autonomously, what is unlikely to be interfered with.

(30:54):
And I think we can speculate as to why. I think there's, there is a lot of feelings, existential angst, other things that get wrapped up in here.
And I think Andy and I can speak to that: we're not clinicians, but we are data scientists, machine learning researchers, and we're watching now this technology eat a lot of things that we thought we would only be able to do in

(31:15):
terms of coding, in terms of— Mm-hmm.
—proving mathematical propositions and— Mm-hmm.
—in terms of, maybe even writing papers in the near future.
And Andy can, uh— You guys are much smarter. —Near future?
I always knew I was playing for obsolescence, so, you got it.
Just, I'm a, I'm a lower version of you two, so. Yeah.
It's been the goal all along.
Yeah.
And so, you know, it's, it's understandable, like, right, where the

(31:38):
idea that, like, oh, we'll always have to work together for everything comes from. But then we had this great episode actually with Judy Gichoya, I dunno, a year or two ago; she's on our editorial board.
And she was like, guys, just actually think about what human-in-the-loop means.
It means that every decision coming outta the model is gonna have to be reviewed by a human.
All of them, or just the hard cases, right?

(31:58):
And it's like an exhausting, dystopian, almost, future, where the human is just the liability shield, the crumple zone, just dealing with everything tough, right?
Mm-hmm.
Mm-hmm.
And not, and just, like, second-guessing the machine, right?
And, like, you know, so, a lot of these things, they sort of, they sound nice, but then you really think about how they operationalize and, um, you realize, A, how

(32:19):
little empirical support we have, but then, B, how potentially dystopian some of these futures are, even if they sound nice now.
Uh, uh, I, I'm with you.
It's, it's all of those combinations make sense.
I wonder if another analogy is helpful, just to deconstruct or reconstruct.
I, and many people who talk to me and talk to you, I'm sure, say a lot of what a doctor does could clearly just be automated by those latest AI systems.

(32:42):
And I, I actually think some subtasks can be.
I don't think that's wrong.
But then, so, does that mean, oh well, very soon we'll not need most doctors?
It's like, well, I mean, actually Bill Gates said this:
very soon, most teachers and doctors will not be needed.
Like, whoa, whoa, okay.
But then you say, well, what do you need a software engineer for?
They just write code.
Clearly, AI can write code.
Now just vibe it.
Somebody said, just vibe it.

(33:03):
Just vibe it.
Just give it the requirements and then off it goes.
Yeah.
Yeah.
So, it's like, what do you need a venture capitalist for?
They just, they're reviewing companies. They're just taking some random bet on something anyway. And of course, the venture capitalists: oh, oh, oh, no, no, no.
What I do is complicated.
That's not automatable, like, how, like—.
Marc, Marc Andreessen from a16z has literally said this before, in an apparent, like, lack of self-awareness, uh, has said something exactly like that.

(33:26):
That probably was, I, I didn't want to name names.
That venture capitalist would probably say, oh, the doctor, obviously AI can replace; they just do, like, the standard reasoning. But what do you think of venture capital?
Oh, oh, no, no, no.
What I do is special, and I actually think what everybody does is special in that way.
But boy, would it be foolish to imagine all of our worlds are not gonna change substantially.
And I, I certainly liken it,

(33:47):
it's, it is like the Internet being invented.
It didn't result in everybody losing their jobs.
It's like, there's, we have unlimited demand.
There's plenty to do.
But if you think your job is gonna be the same as it was five years ago, oh, you're, you're setting up for, for pain.
If, if you were a, I don't know, a travel agent before, or if you were an Encyclopedia Britannica editor, your life kind of got messed up if, if you

(34:09):
weren't able to change with the times.
Cool.
Thanks.
Uh, so, I think that's a great place to hit pause, uh, while people are nice and sweaty from all the anxiety from the conversation we just had, and move on to the lightning round.
Are you ready, Jon?
Uh, sure.
Or, I don't know, I, I'd like to end on a note of optimism.
Well, yeah, go for it.
Go for it.
Give us, give us a note of optimism.

(34:30):
We'll have time at the end, too, where we can bring it back in and end on a more optimistic note.
And who knows where, who knows where the lightning round will take us.
So,
let's do the lightning round, and I'll try again with some optimistic perspective.

(34:52):
Cool.
So, this is a series of silly, random questions where the goal is to answer them concisely, but honestly. So, this first one's a little bit of a funny one.
And just for context for the audience, you are a fantastic magician.
I'm just gonna say that that is a well-known side hobby that you have, that you do a variety of kinds of magic.

(35:13):
So, this one's slightly sarcastic, but how has your experience in magic and sleight of hand prepared you to be a successful academic?
Uh, it was a random side hobby I picked up just around the pandemic time. Other people learned to bake sourdough bread;
I learned how to do a Rubik's Cube magic routine, and at first it was just, like, this silly thing, but hey, it makes me very approachable to graduate students.

(35:34):
Like, hey, I like, I wanna talk about science, but, like, I'm just a guy and we can have fun.
And also, I've actually found there's a lot of nuance in it: to do magic well, you have to have empathy. You have to understand what the other person is thinking.
And that has made me a way better presenter, scientific communicator, um, 'cause it's not just, here's my data.
It's like, if I present this data, what is the audience gonna think?

(35:56):
And that's gonna beg them to ask this question.
So, that has been a very practical way this crosses over.
Yeah.
Like, the first rule of magic has always been to get the audience looking where you want them to look, I guess—
Mm-hmm.
—is like —Mm-hmm.
—kind of the lesson there.
Mm-hmm.
So, I'm sure that translates well.
Yep.
Nice.
Alright, our second question, Jon.
Should medical AI be explainable?
Oh, oh, "should", is this a loaded word here. Deliberately loaded.

(36:19):
If it can be.
If it can be, that's great.
I definitely do not think explanation is required.
I definitely do not.
And the classic analogy I say is, like, do you know how Tylenol works?
Do you know how anesthesia works?
Hey, I don't either.
'Cause nobody does. But we still use it.
Yeah.
'Cause it's proven safe, effective, and reliable. AI systems, they haven't been proven to that standard.

(36:40):
Could you at least explain yourself, since you haven't proven yourself yet?
Andy, I, I know you have no thoughts about this topic.
I agree a hundred percent with what was just said there, so, no notes on that.
Totally agree.
Yeah.
Maybe you already answered this, and so, I'll modify it if you have, but what was your first job?
My first job officially, I, I did the internship at 20th Century Fox, and then my first job outta college,

(37:01):
I was a software developer in Austin, Texas for—. Okay,
I would
not have been surprised if there was some secret career that you started when you were, like, three, where you were also making, like, six figures or something like that, given how prodigious you were as a kid.
So. Uh,
not in that way.
I was just a nerdy kid playing video games most of the time.
Nice.
Alright, Jon, who's your favorite magician?

(37:22):
Oh, that's, that's a very tough and loaded question.
Well, it'll mean nothing to, to nobody, but you should look him up.
Uh, Pop Haydn, who's, he's a very much older guy now.
He performs at the Magic Castle.
I found him very inspiring.
He's just very cheeky, very, this, like, sarcastic type of humor, but also, like, cool magic is happening.
I kind of discovered him on my kick in the past few years.

(37:44):
I thought it was very cool.
Otherwise, I took my wife to go see Mac King recently and she absolutely loved that.
He's, uh, it, he seems like he's just being a rube and kind of a dork, but actually, it's very calculated, and I love that, how calculated, how smart someone's being.
You don't even realize it.
And then, I don't know, recently Shin Lim, certainly, like, an Asian guy who makes magic look cool, was certainly an inspiration recently.

(38:05):
What, maybe if I can ask a follow-on: who's your favorite sort of historical magician?
Historical.
Yeah.
Oh gosh.
I dunno if I go back that far.
I grew up on David Copperfield.
Okay.
David Blaine certainly inspired me a little bit years ago.
'Cause David Blaine, I mean, he does these weird stunts where, I do this bit where I stab myself with a needle, but, like, it's an illusion.
He, he just actually does it.
He, he just stabs himself.
(38:26):
It's not a magic trick.
He's just stabbing himself.
Yeah.
Like, some of his tricks, he, like, has a fistula that he's, like, inserting the needle into.
Like, he actually does it over the course of, like, years, which is wild.
So, I would love to do something like that, but that's a little too crazy.
But also, in his early specials, it was just him doing card tricks for people, and, like, that's just a regular card trick.
Like, I could do that.
That's not that hard.
But it's interesting, something that seems so basic, you can have quite

(38:48):
an impact on how people think and feel.
My father-in-law is a magician. He was an oncologist his whole career, just retired a couple years ago, and now is a magician.
And so, I've learned a lot from him about
the, what is it?
The IBM?
Mm-hmm.
So, there's this International Brotherhood of Magicians.
Andy, have you heard this term?
I know you know the other IBM.
Right.
But yeah.
No, I have not.

(39:08):
Yeah.
So, this is so, and then he, now there's, like, just unlimited doctor-magician puns.
I'm sure, actually, Jon, you guys will probably cross paths at some point, but there's unlimited doctor-magician puns.
And my favorite of his is that when he is, like, presenting or performing for an audience, he's like, I'm a cardiologist.
And then he shows— Nice.
—a card trick. Nice.
Or something like that.

(39:28):
Yeah, it's great.
It's great fun.
And uh, like his grandkids love it too.
So, it's, it's a lot of fun.
If there's a reference or a list, send it to me.
'Cause I, I'm building, like, this Doctor Magic Act right now.
In fact, I'm performing tonight.
Yeah.
It's great work.
Some more.
I'll,
I'll connect you guys, I'll connect you guys.
That'll be fun.
Yeah.
Alright,
Andy.
Uh, I would just like to point out there's a zero chance that our, uh,

(39:49):
listeners' language models would have been able to predict the tokens that just, uh, were autocompleted there.
So, super fine.
The temperature just turned up a little bit.
That means we are humans.
It means we're humans, right?
Yeah.
There you go.
You, you're fast.
Um, okay, next question.
And I actually, I'm reallyinterested in your take on this.
Um, will medical AI be driven more by computer scientists or clinicians?
Oh, oh, that's, that's, that's challenging.

(40:11):
Um, I, I think the computational industry folks have a lot more power and influence in a lot of the, the broader world right now.
Um, and that's why a lot of my mission currently is to communicate and get the medical community engaged.
Like, yo, this is gonna be the EMR again; it's just gonna run all over you and, and you're not gonna be happy with the result.
You better get involved and lead this.
But that's kind of where the locus of power is.
(40:33):
Having said that,
tech companies have fallen on their face over and over again trying to bring tech to health care, 'cause they don't understand the complexity of the actual domain.
So, I think clinicians have a space getting in here, balanced by clinicians often ceding their power to, like, business people in large health care systems as well.
So, I would like to say, clinicians, let's get in here, 'cause we're the ones who really value patient care and have the right values to drive things.

(40:57):
If I had to bet, I actually think big tech is gonna push things.
Um, they're gonna, they have more leverage in the near term, maybe, maybe longer.
Alright.
Last lightning round question.
What was it like to go to, and I, you already addressed a little bit of this. Mm-hmm.
But maybe you could just give us a few, few, uh, few feelings.
What was it like to go to college at the age of 13?

(41:19):
At the, there was that program I was in, right?
So, I wasn't, like, the only 13-year-old kid there.
There were, like, 20, 30 kids, and we, we had our own little room to hang out in.
Oh, that was fine.
You're just going to a program and you have a peer group.
I transferred to UCLA when I was 15 years old.
That was tough.
That was really hard.
I was a 15-year-old living in the dorms on a campus with 30,000 people, and nobody gives two craps about you, 'cause they don't have time to,

(41:39):
there's just too many people there.
I had to grow up real fast there.
There were a lot of, many nights those first few weeks, like, I just went to the, the dorm cafe.
I sat there and ate dinner by myself.
And it was not a great feeling, but also I was like, oh, I'm gonna have to grow up.
I can't just study hard and do well and that's good enough.
I have to figure out how to take control of my life and figure out where I'm going and find opportunities and make the connections.

(42:00):
And that was a very rapid growing-up experience.
You know, here's a joke to plug in there, and also, like, life happens too.
We do our work, we do our hobbies, but also, like, life's supposed to happen.
I say I started college at age 13, first girlfriend at age 21.
So, you can draw your own conclusions.
Nice, nice.
You got there, it sounds like, so.

(42:21):
Well, Jonathan, you did a great job in the lightning round.
Well done.
Alright, Jonathan.
Cool.
So, yeah, maybe go ahead, Raj.
Yeah, we just have a couple, and I think we can make this end on an optimistic note here.
But we have a couple sort of final big-picture questions for you.
And you really do straddle at least two communities.
I know, many more than that.
And, uh, I want to ask you a question that I think is most interesting, most relevant

(42:45):
to the sort of physicians-in-training who are listening to this, and medical students and residents who are interested in artificial intelligence and want to get involved. Whenever we give talks, we have a lot of great conversations with them.
One of the most sort of consistent questions they ask is, okay, I'm going through medical school right now, or residency.
They're busy, they're studying or they're taking care of patients, but
(43:07):
they really want to get involved in AI.
And so, what's your message to them specifically about what the best ways are for physicians-in-training to get involved in artificial intelligence work?
I'm sure you guys get these, too.
A lot of them come out of the woodwork, and sometimes I put 'em to tests, like, hey, if you really wanna get involved, come to our group meeting, try this little pilot project. And most of them flake out and disappear.

(43:30):
It's just a hype, cool, shiny object right now.
And most of 'em don't actually have the earnestness; they want the juicy thing.
Everybody wants the prize.
How many people are willing to do the work?
So, I, that's a thing for people to think about.
On the other hand, I do wanna encourage, inspire them, 'cause it's now a lot more possible.
The reason I think ChatGPT was so groundbreaking, it actually wasn't the language model.
GPT-3 existed, like, a year or two before ChatGPT blew up, right?

(43:53):
What made it blow up was the interface.
If you can chat, if you can text with somebody, you can now use this thing, right?
You don't have to be a programmer, you don't have to be a data scientist.
So, I really encourage 'em to think, hey, just start using these tools and trying them.
And you know what's really ironic?
I'm sure you guys realize this, too.
When people ask me this, I'll give you my answer.
If you just ask the chatbot this question, it would probably give you

(44:13):
a really pretty good answer.
These kinds of general questions, how to get involved.
So, start learning how to use, try these tools yourself, use 'em in your own everyday life, and start asking questions about what you think the next thing should be.
And then go find people like our hosts right here, who can connect you to the next way to develop those resources.
In the meantime,
also, I would say, if they're, like, a medical student or a resident, also go learn to be a good doctor.

(44:34):
Like, that's not a bad thing.
That will eventually be necessary and make you essential, and distinguish what you can do from others, toward how you can contribute to that combination.
If you're a bad, I mean, and, and I know these people, they're really bad doctors or bad students.
They, they don't even care about patients.
They want to glorify themselves.
They wanna use some cool tech, they wanna be a billionaire.
And I really don't have time. I don't make time for these people, because I feel like their

(44:56):
values are really in the wrong place.
It's like, hey, when you wanna do that combination, it's really powerful, compelling. But if you don't have those values, it's gonna lead you to the wrong place.
And what a surprise to me was, I was gonna be one of those people: gimme the MD degree just so I can prescribe drugs and then it helps me do, do something else.
I'm gonna go into industry, I'm gonna do research or something like this.
And it surprised me in clinical care that I liked it a lot more than I thought it'd

(45:19):
be, or, I realized it mattered a lot more and it was a lot more complex and compelling than I thought it was gonna be.
Cool.
So, I think we'll head to the last questions.
The first one will be pessimistic and the last one will be optimistic.
So, given kind of the violation of the fundamental theorem, what are you most worried or concerned about in medical AI?
Do you have a realistic dystopian scenario?

(45:41):
Are there opportunities for patient harm that are keeping you up?
What's on your top-three most-worried list?
Patient care should always be the primary value.
Uh, you, you should have worry, like, harm is gonna happen, although I think, net, a lot of harm does happen when we don't use good tools already.
On the balance, I worry about that, but I, I think, uh, the, the balance is gonna be overall positive.

(46:01):
Um, if you look at prior industrial revolutions, it actually is a really good thing overall for everybody on average, right?
It's good that we don't all have to work on a farm or be hunter-gatherers just barely getting enough food to survive.
Now, 2% of the population with automation technologies makes enough food for everybody. That, that's a good thing.
But in the near term,very disruptive change.

(46:23):
It, it hurts people a lot in very predictable ways.
Jobs will get displaced, right?
And other people will be harmed in, uh, unexpected ways, and with this adolescent technology, that's very foreseeable, unfortunately.
Got it.
Okay.
So, now you can bring us home with something optimistic.
So, uh, why are you doing this?
What are you most excited about specifically with AI in health care?

(46:44):
I dunno, it's, it's kind of my North Star, which I think is a good thing.
It's like, oh, but what about this job?
What about that technology?
This is all in the service of something that is good for everybody.
I was just on an interview, and it's like, well, I'd like to think the humans are gonna win against a computer next time.
I'm like,
you know what, if this goes well, the humans do win, and the humans are the patients, right?
It's actual populations.
Even we in technology, even as doctors.

(47:05):
We're the intermediaries, and the goal of that is to actually improve patient care and help people live better lives.
And now, wow, this has opened up new opportunities for democratization of access.
You're worried about who's providing the best medical decisions? Like, tens of millions in the U.S. alone can't even get to see a doctor at all.
So, what's happening to them?
What's happening is, like, all sorts of compromise and harms, which we've normalized and accepted, which I think AI, very optimistically, is gonna open up

(47:29):
that pathway, make us smarter, better, faster, and let us have more reach and impact, to reach all the patients who need us.
Cool.
So, that's probably a good place to end it.
Um, so, Jon, thanks for coming on, man.
This was great.
That's awesome guys.
We should hang out just for fun next time.
Totally.
Yeah.
I'm overdue for a trip to Stanford, so I'd love to, love to catch up sometime.
Be in touch.
Thanks so much, Jon.
This was great.

(47:51):
This copyrighted podcast from the Massachusetts Medical Society may not be reproduced, distributed, or used for commercial purposes without prior written permission of the Massachusetts Medical Society.
For information on reusing NEJM Group podcasts, please visit the permissions and licensing page at the NEJM website.