January 2, 2024 60 mins

In this episode, Dr. Zak Kohane shares his journey into AI and medicine, reflecting on early influences from science fiction authors and programming experiences in his youth. He discusses his academic path, moving from programming and machine instruction to medical school, driven partly by practical advice and personal ambition. Kohane highlights his realization during medical school that medicine was not as scientifically advanced as he expected, motivating his interest in improving medical decision-making through AI. He recalls his time at MIT, contrasting the intellectual freedom there with today’s academic environment, and reflects on the impact of large language models in medicine, emphasizing their real-world applications and potential to transform medical practice. Kohane also discusses the importance of mentorship, his approach to nurturing talent, and the role of his department at Harvard in advancing the field of biomedical informatics. Finally, he shares insights on the NEJM AI journal, its objectives, and the challenges and opportunities in medical AI today.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:03):
Now we're in a very different realm, and that's because of just empirical success, and not just empirical success in the hands of a few experts, but the fact that we actually have these tools.
Let's get concrete.
The fact that a mom enters the history of her child who has

(00:25):
been having endless headaches, trouble walking, trouble chewing.
No doctor is able to tell her what's going on.
And out of desperation, she types in all the results of the reports and of the histories and physicals into GPT-4.
And she's given a diagnosis, which she then goes to a neurosurgeon, gives

(00:46):
him the imaging studies and says, GPT-4 thinks it's a tethered cord syndrome.
And he looks at the image, looks at the child and says, yep, that's what it is.
Hi, and welcome to another episode of NEJM AI Grand Rounds.
I'm Raj Manrai, and I'm here with my co-host, Andy Beam.

(01:08):
Andy, this episode is a special one.
We have our editor-in-chief, Zak Kohane, as our guest.
We've both known Zak for a long time, but I think even we learned some new things about him during our conversation.
He brought some nice scotch to the recording studio at the NEJM offices for our conversation, and as always, it was tremendously fun and insightful to listen to him.

(01:29):
Yeah, it's hard to sum up Zak succinctly.
I always like to say that he's one of my favorite high-entropy personalities. And by that I mean that it's very hard to predict what he's going to say next, even for those of us who have known him a long time.
So, I think he would break most of the language models, even when they're trying to predict what he would say.

(01:50):
Would it be fair to say that his temperature has turned up?
He's a high-temperature language model for sure.
GPT-Z has a temperature of 1000.
I think the other thing that struck me is just what a huge impact he has had on me, and I'm sure that you would say the same, but also on the field as a whole.
We've been to several events recently that Zak organized, and his progeny

(02:14):
are just both prolific and numerous.
He has, uh, really launched.
And they're all still in touch with him.
So, we talk about it on the episode, you know, the first student he ever mentored and his most recent graduates, they were all there and they were all talking about their experience with him.
And I think, you know, as a young faculty member myself and a scientist

(02:37):
and a father, he sets like an example across all three of those verticals.
And I know I feel super lucky to have gotten to do a postdoc with him.
He's one of the high-resolution simulations that I try and keep in my brain.
Like, how would Zak react to this?
Or what would Zak say?
You know, again, given his level of entropy, that's a difficult thing to do, but it was really great to drill down on what's going on in AI.

(03:03):
But also, what's going on at the journal with him.
And again, uh, he brought some very nice scotch to this conversation.
Uh, you know, Zak doesn't ever cheap out on things like that.
So, uh, this was one of the most fun conversations we've had this year for a variety of reasons.
I totally agree.
And I couldn't agree more that I think he's had such a big impact on the

(03:23):
field and on the lives of his mentees.
And we asked him about this during the episode too, you know, what's your approach to mentorship?
We've seen it firsthand, the two of us, while in his lab and now as members of the journal and as faculty members in his department.
He said it very succinctly: the first and most important thing is to care.

(03:46):
And so, he cares about his students and it shows in the way that he interacts, the way he runs his group, the way he runs the department and the journal, and I think it's a big part, in addition to his, you know, being funny and entropic and everything.
I think it's a big part of why people want to stay in his orbit for a very long time.
Stay around him, learn from him, and be close to him because he

(04:08):
really is a special, special person.
Completely agree. And, you know, my high-resolution simulation of Zak right now probably tells me he's so uncomfortable.
He's so uncomfortable right now.
He's like, let's move on, guys.
Stop.
Stop.
Stop sucking up.
Like, let's, let's keep going.
Yeah.
So, that gives us a natural transition, uh, to talk about another important event.

(04:29):
So, uh, season one, the first year of AIGR is in the books.
We've interviewed some amazing guests.
I have to admit that I never thought that Mark Cuban would be on an AI in medicine podcast.
That was kind of a serendipitous event, and again, I was really impressed with him.
He's a fellow nerd.
I walked away from that conversation really appreciating how deep and

(04:50):
how nerdy he will get on a topic.
He will go deep and actually understands the technical side of health care in a way that I found refreshing and surprising.
Yeah, I totally agree.
It's also extra funny to me because we spent a serious amount of time during our postdoc days in Zak's lab just talking about basketball and Mark Cuban

(05:10):
and the argument he had with physicians at the time, and now in a weird way, it feels like all of that time that many people probably thought we were sort of just wasting time together has prepared us for some very interesting and fun conversations with folks like Mark.
I also think that we interviewed a lot of really amazing clinician scientists.

(05:30):
I'm in the middle of a conversation with a couple of scientists who've really put two sets of both very demanding and difficult skills into their minds at the same time.
And so, they articulated this skillset very well.
I'm thinking about the conversation we had with Ziad where he talks about both the medical knowledge and the sort of technical economics and machine learning
(05:53):
skillset that he developed to be able to do such creative and interesting work.
Lily Peng is another one that comes to mind, one of our early conversations, really amazing clinician scientists.
That was a huge trend for me from this season, you know, the rise of the clinician scientists in AI.
So Ziad, Lily, Atul, Ewan, like so many folks who are bilingual in both medicine

(06:16):
and AI, and that has proven for them to be like a really powerful combination.
I think another big trend: our first episode launched December 15th, 2022.
So, we were still one month, not even a month out from the launch of ChatGPT.
And I think by any measure, the conversations this year were dominated by large language models and ChatGPT.

(06:37):
You know, we were pretty lucky.
What's a large language model?
Uh, well, listen, yeah, I think we all, we all know what that is by now.
I think we were really lucky to actually get to talk to Peter Lee about GPT-4 before it was made public. I felt really privileged to get to hear his thoughts, um, before the madness of GPT-4 had been unleashed, and that momentum carried through the rest of the

(07:01):
conversations that we've had this year.
I totally agree.
All right.
Should we jump into our conversation with Zak?
I think so.
And on to season two next year.
The NEJM AI Grand Rounds podcast is sponsored by Microsoft, viz.ai, and Lyric.
We thank them for their support.

(07:21):
And now we bring you our conversation with Zak Kohane on AI Grand Rounds.
All right, Zak.
Well, welcome to AI Grand Rounds.
We are thrilled to have you here.
I am truly honored.
You two are the most famous part of NEJM AI at this point.
The podcast exceeds the journal.
I hope not for long, but for now, it really does.

(07:44):
Congratulations.
Thank you for that, Zak.
So Zak, this is a question we always like to get started with on AI Grand Rounds.
Could you please tell us about the training procedure for your own neural network?
How did you get interested in AI and what data and experiences led you to where you are today?
Well, first, let me just say, feel free to cut me off because

(08:05):
I love to talk about this topic.
I remember distinctly in high school in Geneva just being flabbergasted that programming a computer actually worked.
And so, I had an HP 25C, and I programmed it, and I could run a

(08:28):
little moon lander on it, and it was, it seemed miraculous to me.
And using retrospective editing, I'm going to claim that it was just the beginning of the ever-moving goalposts for AI.
Having a machine do some intellectual activity, systematically,

(08:49):
again and again, amazing!
And so, that was also informed by my love of science fiction.
Growing up in Geneva, I, I'm not exaggerating, I probably read about 50 science fiction novels a year.
And Arthur C. Clarke, and Asimov, and Stanislaw Lem were three of the writers who

(09:16):
were writing a lot about robotics.
And so that gave me perhaps the long view, although it seemed very long.
So, when I first came to this country at Brown University, we still had mainframes and there was a great social nexus around the different terminal centers around campus.

(09:38):
I went to one such, and I saw for the first time a, what's called a detail terminal, which had one of these IBM Selectric type balls, moving along and typing furiously, and I was looking at it.
It was a lot better than my HP 25, and I asked one of the students who was supervising that center how these things worked, and he explained they were computer terminals linked to the IBM 370.

(10:00):
And, I said, how do you control them?
He said, program.
What's a good programming language?
He said APL, literally, A Programming Language, which turns out to be a really good language for doing linear algebra.
And so, I picked up the book, taught myself how to work in APL, and I said to myself, how in the heck does APL get executed?

(10:23):
So, I asked someone, they said, machine instructions.
So, I took a course in Assembler and built higher-level abstractions out of Assembler.
And I said, that's great.
It's no longer a high-level language, but how does that work?
And so, I took another course at Brown on building electronics.

(10:44):
And they gave us a book, Zen and the Art of Motorcycle Maintenance, as the required text.
And the rest of it was building circuits.
I think the project before the final was a 4-bit CPU built out of flip-flops.
And so, literally every time I would flick a switch, it would be one clock cycle.
And I remember I needed a carry bit to go from one set of 4 bits to another, and

(11:08):
it was getting there too fast, and so I could actually just lengthen the wire, and it would arrive there at the right time.
And so, that was one way of thinking about how these things work.
Then I took a course with, uh, Eugene Charniak, who was an early AI investigator.
And among the many projects we did was something called a Constraint Propagation

(11:29):
Network to do visual recognition of edges.
And that was my first hint of how very simple operations could result in recognition.
And then I did what I thought I was going to do anyway, which is I went to medical school.
So, as an immigrant, you'd say that's a natural.
All immigrants are told by their parents to go to medical school.
That was not the case.

(11:49):
What had happened was, I thought in my ambitious, naive self in high school, I was going to get a Ph.D. in biology.
And when I went with my father to visit colleges in the United States, much against his wishes, I ran into someone.
It was just a two-minute meeting at Yale, and he asked me what I wanted to do.
And I, like a naive young man, I said, I want to get a

(12:11):
Ph.D. in biology.
He said, don't do it!
The doctors will treat you like dirt.
Get an M.D.
Otherwise, they're going to get much more resources than you.
You're not going to be happy.
Get the M.D.
And I turned around, told my dad, and he said, makes a lot of sense to me.
And so, that made me go to medical school.
And, but I was the first in my family to go into medical school.

(12:34):
So, I arrive in medical school, and at first I'm just overwhelmed by the amount of data I'm being force-fed.
And then I realize -- So you're in Boston at this point.
In Boston at this point.
And I tell myself, uh oh.
This is not science.
I'm in deep trouble.
I've always wanted to be a scientist, and instead, I'm in a noble trade.

(12:55):
What am I going to do?
I started thrashing, and looking around, and fortunately, I had some very nice mentors.
And mentorship for me is a very important theme.
And they said, go meet this person, Rob Friedman at BU. And he said, what about this guy, Pete Szolovits, at MIT?

(13:15):
And I met Pete Szolovits, who was and is a professor of computer science at MIT.
Then he was fresh out of Caltech and had just inherited a clinical decision-making group.
And he was taking very seriously the operations of diagnostics in medicine.
And all of a sudden, I realized this was going to be the science I was involved in.

(13:36):
And I was thrilled.
And that's really where I got bit hard by the AI bug.
And frankly, a lot of the lessons that everybody's learning now about AI, I learned back then.
So even though we had, as it's called, an AI winter, a huge

(13:57):
and well-earned disappointment.
A lot of the fundamental lessons about decision making, utilities, probabilities, bias, were all there in the Ph.D. work I did back in the day.
One of my favorite pastimes is to go back and read papers from medical AI of that period.

(14:18):
Because in many ways, obviously we're using different methods, but the messages are still fresh.
You will see in many papers, like, the promise of computer-aided decision support has long been promised, but has never been delivered on.
Uh, what do you think about what's similar now?
Like, what's different?
The era of expert AI has always been so fascinating to me, but like, what's

(14:40):
different now and what's the same?
So, it was very strange being a graduate student in the 1980s because you'd hear some luminaries like Marvin Minsky and they'd paint verbally huge edifices describing how AI was going to emerge.
And it was like religion.
But I never understood how it was actually going to happen.

(15:02):
And it may be that none of us were smart enough to fully understand his vision, or maybe it was a little bit too abstract.
Like Society of Mind stuff, like, yeah.
Exactly.
Society of Mind stuff.
And so, I mean, he did some very concrete stuff in the day, including slowing down neural network advances for a few years by his XOR result.

(15:22):
But now we're in a very different realm, and that's because, frankly, of just empirical success.
And not just empirical success in the hands of a few experts, but the fact that we actually have these tools.
Let's get concrete.

(15:43):
The fact that a mom enters the history of her child who has been having endless headaches, trouble walking, trouble chewing.
No doctor is able to tell her what's going on.
And out of desperation, she types in all the results of the reports and of

(16:03):
the histories and physicals into GPT-4.
And she's given a diagnosis, which she then takes to a neurosurgeon, gives him the imaging studies and says, GPT-4 thinks it's a tethered cord syndrome.
And he looks at the image, looks at the child and says, yep, that's what it is.

(16:23):
So, this is, we're no longer in the theory or promise.
It's the present.
That's why I'm so much more confident.
I think that model, so that example that you just highlighted, also indicates an interesting model for the safe or responsible use of AI in medicine, right?
Which is a way to prompt a new type of conversation between the patient and the doctor.
Um, I'm curious, so going, you know, if we can dig a little more into your

(16:47):
Ph.D. with Pete Szolovits in the 80s, um, can you compare, and maybe contrast, the intellectual climate of MIT in the 80s with today, with how a Ph.D. student would approach this today?
Is there, do they have the same intellectual freedom to pursue AI in medicine today that they, that you did when you were doing your

(17:11):
Ph.D.?
So, clearly you're trying to get me canceled.
Um, so. Intellectual climate.
The realm of intellectuality is broad.
It's broad, or can be defined in ways that are limiting.
And so, um, I can't speak for all of MIT, but certainly in Pete's group, you

(17:31):
were free to do whatever you wanted.
He never told you what project to work on.
For better or worse, by the way, that's become my modus operandi as a mentor as well.
But we were free to think thoughts like, you know, I was thinking about the fact that I was seeing that history, temporal history, and the partial

(17:52):
order of events in a history were just as important as the actual events.
In other words, to diagnose someone when they first have jaundice before the transfusion, as opposed to jaundice immediately after the transfusion, as opposed to jaundice 45 days after the transfusion.
Could be the difference between irrelevant, to transfusion reaction, to hepatitis.

(18:14):
And I decided I was going to focus on representing temporal histories and alternative temporal worlds.
My next-door office mate was working on qualitative physics and how to model physiological relations using the qualitative version of differential equations.
These were very, very broad topics.

(18:34):
But you could see both of that, that range still happening today, right?
In Pete's group, with the right mentor.
With the right mentor, you can.
Yes, you can.
But I think the big difference between MIT then and MIT today is MIT today is kinder and gentler, both to its benefit and to its detriment.

(19:00):
It's still less kind and gentle than Stanford, and I'm told this by graduate students who visit Stanford, but back in the day when you presented in Pete's group, there was always someone who was just very eager to take you down.
And that turned out to be incredibly useful in mental hygiene because it saved you months of sterile, irrelevant explorations.

(19:22):
It was a bit ego-dystonic at the time, but incredibly helpful.
So, I think that the notion that for your Ph.D. you get to explore an entire world and find out where you want to make your particular contribution is a huge gift that's very rarely given to us.
I think we should be doing more of it today.

(19:46):
I also wonder too, like outside of like institutional climates, if the effect of large language models has reduced intellectual diversity of like what we're working on.
Because it's hard to imagine grad students sitting next to each other now.
Like in generative AI, there's this concept of mode collapse, and it's when the influence of the prior is too strong and everything just collapses on like one thing.
Do we have societal mode collapse now?

(20:07):
I think we have intellectual mode collapse within AI, and like just hearing you talk about like the different range of things that were going on in the 90s.
I mean obviously like large language models are having their day and doing a lot, and we'll cover that. But I wonder like what we're missing in the tails of the distribution since we're all focused on the mode right now.
So, it's interesting.
There's both more science and less science, both then and now.

(20:30):
So, every one of us had theories, but they were not well grounded in empiricism about what kinds of methods would work better for the diagnostic problem.
And these theories were more intuitive and based on cognitive psychology and perhaps
(20:51):
pragmatic insights than anything else.
But they were fundamentally different methodologies.
Now, at the base layer, these large language models fit a very well-oiled apparatus.
But at the top level, we're treating them like psychology.
Most of the papers that you read, even in very technical conferences,

(21:15):
are about different effects on the performance and behaviors of these models, and very few of them really try to dissect out how specifics of the model are actually related to that.
It's like behavioral machine psychology.
Yes.
Robot psychology.
Yes.
I've had a discussion with Raj.

(21:36):
If you look at the I, Robot collection by Asimov, it's about the three laws of robotics, but most of the stories are apparent breaks of the law, and the whole story is how, in fact, they're not breaks of the law.
So, what logical pretzel twist got the robot to do what it takes?
But the person who is there throughout, there's multiple robots, the person

(21:59):
who's out there in every single story is a woman, Susan Calvin, who's a robot psychologist.
Uh, this, continuing with the theme of your career, Raj, do you want to go on to the DBMI portion of the questions?
Um, I think it makes more sense.
We'll come back to that.
Can I interrupt as I usually do?
Yes, go for it.

(22:20):
There's another thing that really motivated me about AI in medicine.
It's when I came back to medicine and I could not believe how primitive it was and how it was
a telephone game, someone told someone, someone told someone, and we were, you know, working off of written notes and hearsay and how this is all being turned

(22:44):
into practice on very sick patients.
And I felt very much like there's a Star Trek movie in which McCoy, they have to go back to get the whales, back to the future.
It's a long story, but on the way, they go back in time and McCoy, the doctor, is going across the hospital wards and he sees a patient and

(23:07):
he's saying, why are you here, dear?
She says, I'm in dialysis.
He says, dialysis?
That's incredibly, you know, primitive, barbarian, and he gives her a pill instead.
And I felt that we were in that primitive state and that we were not learning and that we're not using
optimal decisions.
And so that was incredibly motivating.

(23:28):
And let me tell you that anybody who does not get calloused, if you stay in medicine long enough, you realize so many suboptimal decisions are being made, uh, both in the data acquisition, the decision theoretic level, and then at the, uh, action level.
And so that has become incredibly motivating for me, and it's

(23:48):
always, I think, a slight hint of somewhere between urgency and, um, a teensy bit of anger that we're not doing better by our patients.
I'll say like, one of the great intellectual revelations of my life was when I did a postdoc with you.
So, I was dating a med student, married a med student, and I was having the exact same thoughts that you did.

(24:09):
We're like,
these problems have been solved for 30 or 40 years in computer science and other areas, but I would talk to med students and medical people, and they would look at me like I was crazy.
And then I came up here for a postdoc and I was like, oh, okay, there are actually, these are my people.
Like there are actually other people who are thinking the same thoughts and trying to solve these problems.
So, I think that's a credit to you that you've established this like

(24:30):
community of folks who want to improve medicine through quantitative methods, and there's like a real Zakosphere of people who think like that.
Well, thanks for the compliment.
And, uh, but that's in the end why I created my first lab, why I created a center, why I created a department.
I needed to have my people around me.
I needed a community around us.

(24:51):
So, when you go into a fancy academic health care system and you want to have an academic career in the 1990s, probably what you want to be working on is gene knockouts.
I had zero interest in working on gene knockouts.
But, there was no great example of career advancing in an academic mode,

(25:13):
so I wanted to create that community that would allow it to be not an exception, not a psychiatric outlier.
An archetype.
Yeah.
To create the archetype of advancing.
Even without an M.D.
So, I think that's a perfect transition to our next topic, but I want to ask one more question about your career trajectory before that.
(25:35):
So, I'm jumping ahead a little bit because we're going to ask you this as a sort of concluding question as well.
But you did an M.D., you did a Ph.D.
We've talked a lot with our previous guests about the training journey and the skill set that they need to have impact in medical machine learning or in an inherently interdisciplinary discipline like biomedical informatics.

(25:56):
And so Ziad Obermeyer in particular articulated an extremely, I think, cogent case for both skills needing to coexist within the same mind.
So, the technical skills and the domain expertise, or the medical skills.
This is as opposed to what seems like a tempting narrative or tempting approach, which is to take a deep clinical expert and pair them with an AI researcher
(26:17):
who doesn't know much about medicine.
And the problem as Ziad pointed out was that they often don't have much to talk about.
They don't speak the same language.
They can't see the connections.
They can't whittle away.
And they're like effectively lobotomized.
They're lobotomized.
They can't eliminate the bad ideas, right?
What is good research?
It's sort of eliminating lots of bad ideas or inferior ideas, inferior approaches to the same question.
So, you have an M.D., you have a Ph.D.

(26:38):
You were the first person, I think, to convince my mother that I don't need to go to medical school, on my defense day of all days for the Ph.D. Do you believe, uh, so I know that one of the answers, which is the credentials, you know, you don't need to have both degrees to have impact. But do you believe that both of these skills need to coexist within the same person to have impact?

(27:04):
So, the short answer is yes, but you don't have to get it professionally.
Peter Szolovits used to have in his group a challenge that he had done himself, which is read the entire Harrison's textbook of medicine.
And I do think the thousand pages.
The thousand pages.
And he did it seriously. And I can have with Pete as

(27:26):
good a medical conversation as I can have with any doctor.
And so, I think that's what it takes.
It's not necessary, but it's much better.
And it's very simple from the architectural perspective: having a relatively fast interchange in your own head rather than between two individuals

(27:47):
makes a difference. You know, we're talking in minutes.
It's latency.
It's latency.
It's back to latency.
And by the way, for me, latency is a huge issue across all of science.
Can you just say your mantra here about a knowledge processing discipline?
Yes.
So, I do think that medicine is, in fact, fundamentally a

(28:07):
knowledge processing discipline.
I think the understanding and misunderstanding of that relates to both a lot of the opportunity that we see here today and a lot of the despair, dysfunction, and, uh, resentment that we are witnessing today.

(28:28):
I do think that the latency, even between individuals, as it's encompassed in institutions, where we have to get a research project going, there are so many hops, skips, and jumps that have to happen, that shortening them by a factor of 10 would be tremendous.

(28:50):
There's a story, which I'm embarrassed to say I wrote in this book I wrote with Peter Lee and Carey Goldberg. But Peter Lee was smart enough, smarter than I.
He actually cites it. And I was never citing it until I saw him citing my story, and I said, I should be citing my story.
And here's the story.
My first day as a doctor.

(29:12):
I go into the basement of the Brigham and Women's Hospital, I'm in the NICU, the Newborn Intensive Care Unit.
I had never, I didn't know I was going into pediatrics until the last moment, for a variety of reasons.
And this is the first time I ever saw these little translucent creatures.
And that was scary.
They were all on vents, and these kids are so immature that if you don't,

(29:34):
they sometimes forget to breathe.
And then the crazy thing is you flick their toe.
You flick their toe, which is like, you know, I went to medical school and now the nurse is telling me, flick their toe.
I said, what the hell is this?
A little shake.
But then I get my first patient admitted.
A newborn, full-term baby, big, chunky, a baby,

(29:55):
but his lung had collapsed. And what had happened, uh, was a pneumothorax, and they quickly stuck a, a needle in so it reinflated, but it convinced his blood circulation in his lungs that it was back in the womb, because the oxygen saturation went down.
So, the resistance in his lungs was very high.
And so even though we were, um, giving a lot of oxygen via bag,

(30:18):
it was not getting into his blood, 'cause the blood was trying to go to the placenta, which no longer was there.
It was not going through the lungs.
And so, I was hand-bagging this child for about 12 hours, and then I gave back a dead baby to the parents at the end of it.
My then-girlfriend picked me up after my, this first night on call, and I cried.

(30:43):
And it was the only time I ever cried, uh, in residency.
And it was not just out of sadness, it was out of frustration and a feeling of uselessness.
This was compounded when I found out that next door, two months later, a therapy was approved called Extracorporeal Membrane Oxygenation, ECMO.

(31:04):
They had just finished the trials to show that for this disease, it worked.
So, if that trial had just completed two months earlier, that baby would have lived.
Or if you could have enrolled them in the trial.
In the trial.
Yeah.
Actually, it turned out the trial had been stopped, in fact, because of dirty, dirty, dirty laundry.

(31:25):
Yeah.
It's, ECMO is expensive.
The hospital was trying to figure out, before insurance funds it, are they going to do it?
Right.
Because you need to have two techs on it, and so it taught me a lot about medicine. And frankly, I was in fear of running into the parents.
Because I didn't know what I would tell

(31:47):
them if we had that conversation.
But that told me, you know, a week's delay on the IRB.
A piece of equipment that was not debugged correctly.
Getting the right respiratory tech.
All those delays.
If you eliminate them, that baby might be alive today.
We tend to focus only on diagnosis, right?

(32:08):
Right.
On that sort of, the encounter itself, getting the right diagnosis.
But not on latency, on all the sort of slowness and the hurdles and the delays.
Lower the coefficient of diffusion.
Yeah, and that's where it's at.
And it's the asymmetry too, from institution to institution or room to room.
Yeah.
Where the knowledge is not flowing or being processed.
And I can tell you, thousands of Americans die every year because of that.

(32:31):
Yeah.
Who would not have to die otherwise.
And so, that is heartbreaking.
Yeah.
The future is here, it's just not evenly distributed.
It's not evenly distributed, and it could be accelerated.
And I am an unapologetic accelerationist, before the term was ever conceived

(32:51):
by certain crypto traders,
um, because of the understanding thatwe already had a lot of the good ideas,
but they have to be tested, and we haveto be able to engage in science all
the time in the practice of medicine.
It's not, I don't think it'sunderstood deeply enough.
So, I think that's alsoa good transition point.

(33:12):
So Zak, I realize this is an amusing question for me to be asking you as one of your faculty members, but Zak Kohane, why did you start a department at Harvard focused on biomedical informatics?
What is your vision for the department?
So, given that, as we agreed, I have been repeating endlessly that medicine

(33:34):
is a knowledge processing discipline.
I asked myself, how will I get a community that not only understands that, but is able to implement the experiments that will be convincing to industry to make this happen?
And creating a critical mass of such individuals at Harvard seemed important

(33:58):
because by the time I was in a position to have a department, Boston very clearly

(34:05):
was the epicenter of a lot of activities: pharma, biotech, hospitals, academia.
And it seemed like if we could demonstrate that critical mass here, it'd be a great model for other academic centers.
And so, once I'd convinced the dean, then Dean Flier, that this made sense,

(34:31):
Things went relatively rapidly, and it turns out that every single one of the faculty that we've recruited is a star. Even the ones like you, Andy, whom we did not recruit, or tried to recruit, are stars, and are making, you know, huge inroads at this very special time, where you do have to understand

(34:54):
not only that medicine is an information processing business, uh, enterprise, but what works and what does not work.
What are the limitations?
And I remember well the various conversations that we've had. For example, right when IBM, uh, Watson started going into medicine, we were all very impressed

(35:14):
with its performance in Jeopardy. But when they started telling us about medicine, maybe we spent a month or two saying, what secret sauce did they have?
And we said, this is not real.
This is not right.
It took us several years, but I think we have a very good bead on what is real, by virtue of addressing the challenge not as a purely academic challenge, but

(35:38):
how are we going to transform medicine?
And so that is a research mission, but it's also an educational mission, right?
And so you recently announced a new Ph.D. program in artificial intelligence in medicine at the department. Could you tell us a little bit about that Ph.D. program and what you're hoping to achieve?
So fortunately, because of the success of these large language models,

(36:02):
there's even a further increase in the number of qualified undergraduates who have machine learning skills.
What they don't have is an appreciation of what the challenges are in medicine.
What medicine is, and the practicalities of medicine.
The application of these various technologies to medicine.

(36:23):
And so what we hope to do in this Ph.D. program, for which we just got our first set of applications, and for which we'll be matriculating our first group in the fall of 2024, is very much modeled on a much older program that was started between MIT and Harvard called HST, where we take very strong, quantitatively trained individuals.

(36:47):
Ph.D.s in math, computer science, physics, mechanical engineering, and then immerse them in medicine, in medical applications, while deepening their methodological knowledge.
And so the goal is just to create, I hope this doesn't sound overly arrogant, a new set of leaders who can actually lead us in this new world where machines are going to

(37:11):
be at our side in medical decision making.
So, we could spend literally 10 hours talking about all the different things that you've created.
Um, but I now want to talk about something that we haven't actually talked about in the podcast yet, which is NEJM AI.
So, we're all sitting here enjoying some very tasty scotch, uh, courtesy of Zak.
Uh, and if you roll the clock back, cheers, uh, two years ago, uh, we

(37:34):
were sitting in Zak's backyard, also drinking some tasty scotch on a very cold winter night in Boston, talking about creating a new journal called NEJM AI.
So, could you talk to us about the genesis of the journal, what it is, what its mandate is, and why we need it?
So it's an odd thing for me to be even talking about NEJM AI.

(37:55):
If you had told me as a grad student that there would be such a thing, I would have thought it was an elaborate prank. But I'll give credit where it's due.
The current editor-in-chief of the New England Journal of Medicine and the prior editor-in-chief, Eric Rubin and Jeff Drazen, thought this was a good idea about five or six years ago.

(38:16):
And then they approached me about it, and I told them, absolutely not.
There were just not enough good articles.
There were a lot of great AI articles, but not enough good articles that would interest a clinical audience.
So, then they approached me again two years later.
Of course, the field was deeper now in the use of convolutional neural networks,

(38:37):
particularly for image recognition.
So, I felt there was going to be enough interest.
And then, using my other favorite adage, better lucky than smart, uh, large language models erupted on the scene.
Not when they were first written up in 2017, but when they started being released to the public in, uh, 2022.
And all of a sudden, this seemed like perhaps the most momentous occasion

(39:03):
in medicine for many decades.
And so I was very lucky to be able to then recruit you two guys to be among our deputy editors, and an amazing group of members of the editorial board.
But what's our mission?
Our mission, first and foremost, is something very practical, which is: which

(39:23):
of these widgets is clinical grade?
It turns out there's a lot of authority out there that's going to tell us that this AI program is better than the other AI program.
There are some FDA processes, which don't generalize well, or specialize well, to multiple hospitals.

(39:47):
That don't work well with the way medicine changes over time, or with the way practice varies by location.
It doesn't, um, really involve comparing one to another.
So, there really is a huge gap between this promising technology that we're already all using and measures of clinical quality.

(40:11):
And so that, and its embodiment in clinical trials, is the bread and butter of the journal.
But in order to make it a little bit more educational and palatable, in addition to this, we're going to have, and we already are publishing in our advance online issues, perspectives, case histories, and, very importantly, because

(40:36):
data is so important to the performance of these models, benchmark datasets.
And so we want to both advance the field by evaluating these programs, and advance the field by knowledgeable commentary, knowledgeable reviews, but also by creating benchmark datasets

(40:57):
and benchmark questions around those datasets, to allow others to move the field even faster.
Yeah, I think that's awesome.
And when you approached me and Raj about this two years ago, I could not have said yes faster.
It's such an obvious thing to want to do.
And I'm super excited about how the journal has grown. I guess, so

(41:19):
that Charlotte doesn't get mad at us.
Yes.
Say that you're an author out there.
Yes.
And you have a great AI paper.
Yes.
Why should they submit it to NEJM AI over a general interest journal, or a medical specialty journal?
So, it's like the old Army commercial.
We're going to make your paper be the best it can be, because we really

(41:41):
care, in the spirit of our training, that these articles be relevant and safe and useful for our readership.
And that means we're going to put a lot of effort into working with you, the author, to make sure it's maximally relevant.
And so, yes, we love, as much as anybody else, a splashy finding.

(42:02):
But we're going to make sure that the splashy finding, first of all, has substance.
But also that we're clear about its limitations, and we're gonna make sure that the message in the paper is neither smaller nor larger than what the actual data and analysis can show.

(42:22):
We really want this to be usable by patients and doctors and technologists, as a reliable measure of, frankly, the real thing.
I think that's a good answer.
I think, I think we're ready for the lightning round.
What about you, Raj?
Let's do it.
Okay.

(42:44):
Okay, so just, this is a little bit of a Zak deep cut, just because I know you, um, but lightning round question number one.
What is your favorite Radiohead song?
Oh my God.
I'm gonna have a meltdown.
Okay, uh.
What's the one, uh... the end credits one is one that I hear you, uh... it's the one that comes at the end of OK Computer.

(43:07):
Yeah.
I'm having a senior moment.
You bastards.
That's a good one.
I am truly in awe.
That's good.
"Lucky"?
"Paranoid Android"?
"Paranoid Android."
"Karma Police." "Karma Police" is one.
"India Rubber," "The Bends."
Actually, because of this, because of how well it got rendered by... okay.

(43:29):
Hang on.
Radiohead, Social Network.
Oh.
"Creep"?
In social, the, the. Yes.
In the movie?
Yeah.
Well, it's like a different version.
It's like a, it's a different Facebook movie?
It's like a female version?
It's not female.
It's, um. Okay.
It's like choral or something.
Yeah, it's a choral version.
Let's, let's, let's get this right.

(43:49):
Yeah.
Radio... Which is funny, because for like Radiohead nerds, "Creep" is like, not the one.
I know, I know, I know.
But I, but I love this cover.
Cover.
Um.
Can we sample that in the podcast episode?
Yeah, we can be the lead in.
Yeah, alright, we got the thumbs up.
We have Zak Kohane and AI Grand Rounds.
I'm a creep.

(44:13):
Okay.
Yes.
So my favorite, it's not my favorite Radiohead song, but it's my favorite song, because of the cover by Scala and the Kolacny Brothers.
It's the version of "Creep" that appears in The Social Network.
Excellent.
That is a deep Radiohead, uh, cut.
So, that's a good answer.

(44:33):
Zak, if you weren't in medicine, what would you be doing?
Writing science fiction.
Well, that's a very, okay, so that tees us up nicely.
Um, so the next lightning round question is, what is your favorite piece of science fiction, book or movie, about AI?
About AI.

(44:52):
Yeah, 2001: A Space Odyssey.
Yeah, that's a good one.
Yeah.
Yeah.
Alright.
The second best one is, um, a short story by Asimov where this computer is asked to solve a bunch of things.
"The Last Answer," or "The Last Question"?
Yeah.
Yeah, that's a good one.
I'm amused that I get to ask this question today.

(45:13):
How much time do you spend on Twitter/X a day, and, true or false, did your daughter buy you a mug that commemorates your addiction?
I've recently been trying my best to cut down on Twitter, and I'm ratted out by my iPhone, and I truly believe I have cut down.
It's still 1 hour 30 minutes a day.

(45:35):
But I would not be surprised if, before my lean diet, I was doing 3 hours a day.
Alright, so 50% improvement, good.
And yes, my daughter did buy me the mug: "You are a Twitter addict." And it's getting to the point that my friends are beginning to try to shame me, as Carey Goldberg just did, uh, on Twitter, X, to

(45:56):
say that I'm spending too much time there.
This is one of our highlights.
I will say, like, the most reliable way to get an answer from you is to send you a DM on Twitter.
Actually, I think we sent you the prep questions for this over Twitter DMs.
Yes, yes you did.
You're about to get spammed with Twitter DMs.
You better close the DMs.
Here is a good AI comment, but in the end, it's just an excuse.

(46:24):
When ChatGPT came on the scene, LLM progress became so rapid that I truly believed the only way to stay up to date, even with casual conversation with other computer scientists, was to look at the preprints being referenced.
I still believe that.
Yes, so that's convenient.
I wish I could say that all my time on Twitter was spent looking at which are the

(46:47):
right preprints to look up from Twitter.
I'll say it is right now the fastest way to get access.
Uh, so our last two lightning round questions are an attempt to get you into a little bit of trouble.
Okay, go ahead.
Uh, so this, this one I'm ad-libbing.
Yeah.
Um, but 20 years from now, will NEJM AI have a higher impact factor than the New England Journal of Medicine itself?

(47:09):
There will be no New England Journal of Medicine.
NEJM AI will have subsumed the New England Journal of Medicine.
It's going full bore.
Alright, our last question, Zak.
Who will win the 2024 U.S. presidential election?
Oh my God.
And do you have any bets attached to this?
Yes, so. With your deputy editors.
Yes, so I had bet, and I really regret this bet.

(47:33):
Because it was way too optimistic.
I had bet with my deputy editors
that Trump would not win the primary. So unless he decides to go in as a third-party candidate, or something else happens.
This was in 2016?
This was in 2016?
Yep.
No, Trump's first presidency.

(47:54):
It was 2016.
No, no.
But bet against. We lost.
We bet.
On the other side.
Right, right.
I'm talking about the bet right now.
Right now.
That bothers me.
Yeah.
Yes.
The bet that bothers me right now is that I took on a bet with you, after Joe Biden was elected, that Trump would not be the nominee of the Republican Party.

(48:14):
It seemed to me at the time a reasonable bet.
At this point, it no longer seems like a good bet, unless either he runs third party or something else happens.
Or Sam Bankman-Fried pays him 5 billion dollars not to run, or something crazy like that.

(48:35):
I don't think, well, Sam Bankman-Fried, can I just point out, yet again,
another way to pronounce Bankman-Fried's last name is "Bankman fried."
Yeah.
Nomen est omen.
Nomen est omen.
Can you tell our listeners what that means?
That means, it's a Latin saying, which is, your name is your destiny.

(48:55):
And, yes, um, I don't think the 5 billion from Sam would do it.
I think there's an issue of pride.
But at the same time, I truly do not know who's going to win the election.
Because I think it's a truly chaotic process, both in the colloquial sense and in the technical sense, because there are so

(49:18):
many initial conditions that we don't know of that may affect that process.
It could be Trump.
It could be Biden.
It could be someone that we didn't see coming.
Fantastic.
All right.
Well, you survived the lightning round.
Cheers.
Congrats on surviving the lightning round.
And congratulations

(49:39):
on a great year of podcasts.
I was joking about it at the beginning, but I've gotten so many compliments, um, that I've almost convinced myself that these podcasts were my idea.
Congratulations.
The ultimate, the ultimate compliment.
You deserve a lot of the credit because you put me and Raj

(50:00):
next to each other in cubicles for our postdocs.
Well, we were, we were eventually separated, because we were having so much fun and there were multiple noise complaints.
And so actually, we've been training for this for a long time, so you actually do deserve indirect credit.
I think we spent a day of our postdocs discussing the Mark Cuban debate with physicians.
Yeah.
And so, it was perfect training for, uh, for this, Zak.

(50:23):
So, thank you for creating, creating the setting, for disrupting Countway Library, and training for this.
And so, if you want us to mention you in next year's episode, we'll do this again in a year.
We're going to pick out two articles that we want to commemorate after a year.

(50:45):
So, here's my challenge.
Whether it's through drama, science, excellence, or impertinence, submit that article.
That will get you mentioned in the December episode of 2024.
All right.
So Zak, we just have a couple of big picture concluding questions for you.

(51:07):
Uh, I'm going to change this one that I was going to ask, because I think you addressed it in your career arc already.
So instead, I'm going to ask about your mentorship philosophy.
So I have to say, I was at this meeting that you organized in Maine.
The RAISE meeting. A very important, excellent group of individuals that you recruited to the event to discuss safe and responsible use

(51:31):
of AI in health care and medicine.
And I was struck by the agenda, the level of discourse, the investment that everyone had in the goals of the meeting.
But I was almost just as struck by the fact that you had some of your,
yeah, I actually think your first trainee, Dan Nigrin, uh, and your

(51:53):
most recent trainees, and me and Andy somewhere in the middle there, at this event.
And we were all connecting with each other, we were all speaking, we were all on, A, excellent terms with you, and, B, we were all sharing our positive experiences with our time in your lab and in your orbit.
And I think it speaks a lot,

(52:14):
this was a couple of weeks ago, that you're still connected to all of your mentees.
I hope you can maybe distill this, especially for our new PIs who are creating a lab culture.
Which, I think, still includes me and Andy.
We're still building our labs, developing a culture.
We've tried to, I think, emulate aspects of how you run your lab.
Can you give us your philosophy for mentoring, how you

(52:36):
approach running your lab, and nurturing scientists at very critical transitions, in their Ph.D., in their postdoc, in their clinical training, at these points in their career.
Well, I think there's one aspect which you take for granted, but I assure you, you shouldn't.

(52:57):
You two already have it.
But it's actually caring about your mentees.
Turns out it's not that common.
And I think it's something that you pass on. Just like child abuse is passed on generationally, I think good mentorship and bad mentorship are passed on generationally as well.
And I think it doesn't take that much caring.

(53:20):
Just ask yourself, is this person heading in the direction that's going to make them happy?
And you don't have to think about it in extremely altruistic ways.
Just ask yourself dispassionately.
Just by doing that, you're making yourself a three or four standard deviations better mentor than most people.
So, if you have that edge, then the next, uh, comment becomes irrelevant,

(53:41):
which is... That's the first order effect.
That's the first order effect.
The second order effect is let them explore.
Yeah.
And let them fail.
Yeah.
And let, absolutely let them fail.
And, you know, just shrug if they fail.
Don't try to make them feel better about it.
Don't make them feel bad about it.
Just shrug.
It's what happens.
On to the next thing.
Yep.
Alright, so I think we probably have like 10 minutes left.

(54:03):
So, I think this is the December episode, so I would like to look back at the year that was in NEJM AI.
Uh, first, I would love your thoughts, as editor-in-chief, on any big trends that would be worth pointing out to our listeners.
And also maybe, I think, tease the editorial we have on the use of LLMs that will be coming out.

(54:24):
I think that might be an interesting discussion point too.
Right.
So, I think, well, one trend that I just want to note, it's not a trend, it's a flat line: essentially all our submissions involving large language models involved one model.
ChatGPT,

(54:44):
3.5 or 4.
I think, I hope, that's going to change.
And so therefore, there was not much in the way of a comparison of other models.
It was more like, what can this do?
And so there was a lot of, isn't it amazing?

(55:06):
It's like the dog in the opera.
It's not how well it sings.
It's the fact that it can sing at all.
And so we were seeing GPT-4 doing a lot of tasks that, frankly, most of us would have been skeptical that any AI program could do.
And even if there were a lot of problems, as people had pointed out, uh, the fact is it was doing it.

(55:26):
In some semblance of something competent.
In terms of trends, I think what's delightful, I think you just shared with me a clip
where our associate editor, editorial staffer, uh, Morgan Cheatham, was hosting Daphne Koller.
And she was commenting on the fact that we're on the steep exponential.

(55:49):
And just to make it real for our readers: the fact that we went from a GPT that scored lower on the national medical boards than any human, to a GPT or a Med-PaLM that scored better than 90% of those taking the boards.
I mean, you heard me in your office as a postdoc, for years, talking about

(56:14):
trying to get an AI to pass the USMLE.
And I would bang my head against the wall, and we couldn't get higher than 40% on actual USMLE questions.
Turns out that we were just a couple hundred billion parameters short.
And the fact that this went from completely intractable to now almost completely trivial.
Like, we take it for granted, right?
Over a year. Like, every day I wake up, I'm just, I'm shocked.

(56:37):
So that's great.
So that's what Daphne was saying about the exponential.
She also said we don't know what the exponent is.
We don't know the exponent.
And so, therefore, we should be extremely reluctant to make any prediction, except to say, A, whatever problems we have now are not the problems that are going to be relevant next year.

(56:58):
So, I think hallucination and up-to-dateness will be less relevant next year than they are now.
But there'll be other problems.
I think one of the bigger problems will be: what if this thing ends up saying things that are right, and we don't know how to prove that it's right?
I think the bigger problem, from my perspective, is how do we get our

(57:21):
educational system and our medical system to keep up with this crazy increase in capacity?
Because all of a sudden, we actually have the potential now, next year, to give superlative performance.
And it'll become a legitimate question, just like my baby, my first patient,

(57:42):
that baby: why are we not giving that patient, where there's uncertainty about what to do next, the benefit of a second opinion?
And I think that question will not be a ten-year question, it'll be a next-year question.
Yeah.
We also think about equipoise a lot in medicine, and not using AI

(58:03):
will not have equipoise in the very near term, it seems.
Um, so, I think that's it, Zak.
Are there any concluding comments that you would like to leave us with before we wrap up?
Yes.
I think that, uh, you're talking about the future, and I think I referenced this before, which is: what a strange era we're living in.
Doctors, more than ever, are discouraged and stressed, and yet

(58:28):
we have this exciting technology that's going to revolutionize medicine.
And there are other technologies that are going to revolutionize medicine.
But medical doctors have never been more stressed, more depressed.
I think a survey showed that among young doctors in the U.K., somewhere like 50% were looking for careers other than clinical.
In the United States, I think it's about 30%.

(58:50):
At Harvard, I think it may be closer to 50%.
And it's a sign that the best and brightest don't feel that they can change medicine in the standard process.
And so I really do think this is a wake-up call for those of us who seek to be leaders, to say, how can we take all these pieces,

(59:13):
these new biotechnologies, new AI technologies, the current broken ways of paying for health care, and turn it into something very, very different?
I think it's a real challenge. And you have to ask yourself, why is it that, still today, the most recognizable sign of a doctor is the filthy stethoscope that

(59:39):
they're hanging around their neck, that's touched so many other people and carries the virus from one person to another?
One person to the other.
Why is that the symbol, this very old acoustic technology, the symbol of medicine?
Yep.
Well, Zak, on behalf of the entire staff of NEJM AI, it's been a great year, and we look forward to a productive 2024.

(01:00:01):
Thanks for being on AI Grand Rounds.
Hear, hear.
Thanks, Zak.
This was great.