Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:04):
What if we could use Clalit's insanely wide database to create a machine learning algorithm that would identify the highest-risk members of Clalit, and then begin screening those top-risk individuals?
So, we did exactly that, and the results show that when we proactively
(00:26):
screened less than 500 individuals at top risk, among them we found 38 additional HCV patients.
So, 38 out of less than 500 versus 38 out of more than 50,000.
That's a 100-fold improvement.
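As a back-of-the-envelope check of that claim, the arithmetic works out as follows. (The exact cohort sizes aren't given in the episode; 500 and 50,000 are rounded stand-ins for "less than 500" and "more than 50,000".)

```python
# Back-of-the-envelope check of the "100-fold improvement" quoted above.
# 500 and 50,000 are rounded stand-ins for the cohort sizes mentioned.
targeted_hits, targeted_screened = 38, 500
baseline_hits, baseline_screened = 38, 50_000

targeted_yield = targeted_hits / targeted_screened    # 7.6% hit rate
baseline_yield = baseline_hits / baseline_screened    # 0.076% hit rate
improvement = targeted_yield / baseline_yield

print(f"{improvement:.0f}-fold improvement in screening yield")
```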
(00:46):
I haven't done many things in my life that have shown a 100-fold improvement.
And for us, I think this study symbolizes the paradigm shift from classic public health towards predictive care.
So, this is what modern-day public health could and should look like in the age of AI.
(01:15):
Welcome to another episode of NEJM AI Grand Rounds.
I'm Raj Manrai.
I'm here with my co-host, Andy Beam, and we're delighted to bring you our conversation with Drs. Noa Dagan and Ran Balicer.
Noa and Ran are at the Clalit Research Institute in Israel.
Andy, this was a really interesting conversation.
You know, Noa and Ran are both very high-impact researchers, and
(01:37):
they've used this truly unique dataset, which spans, I believe, more than half of the country of Israel, to publish these groundbreaking studies on Covid and many other topics.
But they're also very advanced in implementing predictive models at the point of care to improve care.
They've been doing this for years.
So things we talk about as hypotheticals and as viewpoints, they have hard
(01:57):
data on for more than a decade.
All in all, this was really fun and educational.
I agree, Raj.
And I think this episode hopefully corrected what has been like an overly U.S.-centric bias that we've had on the podcast.
I think seeing their work, not only on model development but, as you said, also in implementation: they're doing the whole clinical translational pipeline.
(02:20):
They have this amazing dataset that few others have.
They did groundbreaking work during Covid and are really, like, I think at the forefront of clinical informatics in a way that few other people are.
So it was a great conversation.
They're both, like, super energetic and great to talk to.
They were even kind enough to humor me with a somewhat tangential conversation on what constitutes AI and what's the distinction between
(02:43):
machine learning and statistics.
And so, all around, an extremely enjoyable conversation, and we're excited to share it with the listeners.
The NEJM AI Grand Rounds podcast is brought to you by Microsoft, Viz.ai, Lyric, and Elevance Health. We thank them for their support.
(03:07):
And with that, we bring you our conversation with Noa Dagan and Ran Balicer.
Noa and Ran, thank you for joining us on NEJM AI Grand Rounds today.
We're excited to have you here.
Great to be here.
Absolutely.
Noa and Ran, this is a question that we always like to get started with on this podcast.
So, could you tell us about the training procedure for your own neural network?
(03:30):
How did you get interested in AI, and what data and experiences led you to where you are today?
And maybe, Ran, we can start with you and then go to Noa.
Sure.
It's really great to be here today, Raj.
I think my professional career was shaped by kind of constant parallel work in different planes.
There were always several streams, progressing and synchronized side by side.
(03:53):
I'm talking about medicine, research, and data.
I think that the best way to describe how my neural net was shaped is through two specific events that are, I think, still fairly vivid in my memory, and I'll share them with you.
The first one, I think, is when I was a medical student.
At that time, the Internet had just become a thing.
(04:16):
I don't know if you remember that time.
In parallel to med school, which was a very humbling experience because I was taught by amazing scholars, I began having a second parallel life as an Internet evangelist.
So, I became an expert in searching medical literature and information on the Web, which was not as intuitive as it is today. But suddenly, I was bringing
(04:39):
something else to the daily rounds of the surgical department, a skill that even the feared head of surgery, and I'm sure you have one in your memory from your own training, even he couldn't match.
So for me, that was truly transformative.
I started making my living as a student from teaching doctors to adapt to the new era.
(05:00):
I wrote my first book, about the Internet and medicine and how those two can actually intermix.
And then I decided to take it one notch further, and I taught myself how to code HTML.
And I created Israel's first Web portal for doctors.
And eventually I sold it to an Internet company.
So that was my first entrepreneurial act.
(05:21):
So, what this pre-training, if you'd like, did to my neural net was the understanding that there's a secret door out of the rat race. That by elevating technology and data, you're no longer bound to your rank on the extremely hierarchical medical ladder.
So, I think that, and the fact that information technology was going to basically change everything, became crystal clear to me.
And from that point on, what I strove for was to always be the earliest adopter, to always take an active role in trying to use tech to change practice.
So this is one core memory.
I was wondering if you could also take us back a little further to your initial
(06:04):
conditions. Like, how did you even get interested in medicine in the first place?
Where did you acquire the background to be able to do this Internet evangelism?
Can you take us back a little further?
Sure.
So, I think going to med school was a decision made when I was too young to make such difficult decisions.
So, I was kind of young and stupid and made decisions based on what seemed exciting.
(06:27):
And medicine seemed the most exciting thing in the university booklet.
So, I took that one.
And as time went by, I think I began understanding better that I was actually fortunate enough to have found my calling.
And it wasn't that early on.
In terms of my understanding of the Internet, it was all self-taught.
(06:47):
You know, when I had to learn HTML, I went to a guidebook on the Internet called the Bare Bones Guide to HTML.
And I kind of memorized it and started working step by step.
This was all self-taught capacity.
I didn't go through formal training, although as a kid I had a computer fairly early on, from the age of five, and I actually started programming before I knew English. So, it was kind of funny, because I did it word by word.
(07:10):
So yeah, these are all kind of older memories.
And when I graduated med school and went into my residency, that's where my second core memory kicks in.
I was a public health resident by accident.
No, I'm serious.
I'm serious.
I was sent to have a job interview, and I accidentally knocked on the wrong door.
(07:35):
And the guy in that room was curious enough to tell me to come in and sit down and start talking.
And the rest is history.
That's how I started my public health residency.
So here I am, a young public health resident.
And as a resident, I was reading about the 1997 Hong Kong avian influenza outbreak.
Don't know if you remember that event.
It was an event in which 18 people got sick with avian flu, and one
(07:57):
third of them actually died from the virus. It was a really bad bug.
I couldn't grasp why no one was talking about the risk of this becoming a pandemic. Nobody was talking about it. It was 2002, I think.
But by then I already had some MPH training, and I went to play with the numbers, and it seemed like it would be a great economic gain if one would stockpile
(08:19):
antiviral drugs for this event, even if a flu pandemic would happen only once every 33 years, which is actually three in a century, which is what you would expect.
So, I showed this Excel sheet to my mentor.
My mentor at that time, and my friend to this day, was Professor Itamar Grotto.
He then became Deputy Director General of the Ministry of Health.
And you know, Itamar said, you know, this is quite convincing.
Let's show it to the Surgeon General.
(08:40):
So, we went to the Surgeon General, and he says, you know what?
That's pretty interesting.
Let's show it to the Ministry of Health, to the Director General.
So the Director General of the Ministry of Health at that time, Professor Avi Yisraeli, a scholar in public health, he says, you know what?
This is serious.
I wanna show this to the treasury.
So then, a few months later, I find myself trembling at a cabinet meeting, the Israeli cabinet meeting, with the Prime Minister.
(09:03):
And the Prime Minister is questioning the minister of health, saying, so why do you wanna buy drugs for 300 million shekels, $100 million?
And they're trying to explain, and they're kind of saying, you know what?
Get up and explain the Excel sheet to him. So I stand up there, young and frightened, and I start talking, and he's cross-examining me, and, yada, yada, yada, 10 minutes later, they make a decision to buy $100 million worth of drugs.
(09:26):
And that's, you know, so if you ask what my take was from that specific event, I think it was that with the right scientific lever, and the right mentor who supports you, and enough perseverance, no matter your rank or your age, you can have a large-scale impact.
And don't be afraid to go all the way with what you believe in.
You know, don't wait for somebody else to do it.
(09:47):
Just be the guy.
So, when I came to Clalit, I had a strong sense of self-efficacy.
I knew you could make a huge systemic change by fusing mathematical modeling, common sense, and what we call here chutzpah.
That's audacity.
So just go ahead and do it.
And most importantly, I think, you need to have someone who believes in you and pushes you forward.
So, I set out to create a team, and that's where our story begins, I think.
(10:13):
Ran, that was amazing.
That was very, very inspiring.
And now I feel like I can kind of go take over everything, and I have no excuses not to realize my dreams.
I totally agree.
I think you said something at the end there, too, which is having a mentor and the right scientific lever to sort of path-break, and to encourage you, and, you know, to help you along the journey, which I think is very, very critical.
(10:34):
And I think multiple of us on this call also have Zak Kohane.
So, we have someone in our corner as well.
Noa, maybe we can turn to you now.
So, I'll just repeat the question here.
So, this is a question that we ask all of the guests, which is, can you tell us about the training procedure for your own neural network?
Maybe you can emphasize this.
You could be broad here.
How did you get interested in AI and predictive modeling?
(10:55):
You know, what led you to the work that you're doing?
What data and experiences brought you to where you are today?
Sure.
So, I really wish I could answer your question with, like, a big strategic focused decision, uh, or a plan, but that was not the case at all.
And to be honest, it was mostly due to chance.
(11:16):
So maybe, in the neural network analogy, it was only the number in the set-seed command at the beginning of the process.
Which was, for me, a good number, because I'm pretty happy with where I am today, but, uh, it was a lot of randomness throughout the process.
So, I think it begins with me being a nerd, uh, but not a computer geek kind of nerd at all.
(11:37):
I was not into gaming or anything like that, but just the kind of nerd that does what she's told.
I think it was in my high school days, where the high school teachers and headmasters really tried to encourage female students to major in computer science and programming, because back then more than 95% of the students in those classes were male.
(11:58):
So, they really, really encouraged female students to pursue these kinds of majors.
And I was just, to this day, I don't know how to say no.
So, I just said yes, and I went into it.
And I kind of liked it.
So, I'm still, to this day, not the computer geek kind of nerd, but I really liked the elegance and aesthetics of writing code, of solving algorithmic
(12:21):
riddles, these kinds of things.
And so, I really had no intention of ever using these skills, but I liked it.
I had fun, and I went through high school doing that, knowing that what I really wanted to do was be a doctor.
I think that med school, later on, was probably the only part of this training process that was not due to chance.
I really, this was my lifelong dream, to be a physician,
(12:45):
ever since I can remember.
And then I was in med school, and then chance intervened again.
I think it was somewhere in the middle of the clinical years, when I was pregnant with my first son. Towards the end of the pregnancy, I was put on bed rest, and the length of that bed rest was
(13:05):
long enough that it translated into me being one year behind the rest of my class, and I had to skip a year in med school.
I didn't want to completely waste that time, that year, so I did what everyone else who for any reason had to wait a year during medical school where I studied did, which was an MPH, a master's in public health.
(13:28):
So, I pursued an MPH in that year, and during that time I actually heard a lecture by a professor visiting from Boston Children's, describing the kind of work he does in predictive modeling.
So, you can probably guess it was Ben Reis, and it was the first time I'd ever met him.
And I was completely hooked.
Like, it was around 2009.
(13:49):
Okay, yeah, you just answered it.
My question was when. I just wanted to index this.
So, this is 2009.
Yeah, so this is, predictive modeling, these kinds of things were not a thing yet.
No one talked about AI yet.
But it sounded like science fiction to me, and it was really, really interesting, so I actually went and asked him to be my advisor for the MPH thesis.
(14:11):
And this is how I got to know him, and we actually did that thesis together.
And then, a few years later, uh, I completed med school, I completed my internship, and I was still sure I wanted to be a physician. But throughout those clinical years, throughout the internship year, I couldn't find a single medical residency that I thought would make
(14:36):
me happy to do on a daily basis for the rest of my professional life.
And I still think to this day that this is the most noble and interesting and beautiful profession, but something about how medicine is practiced wasn't making me as happy as I thought it would.
So, now it was around 2014, and I contacted Ben again.
(14:57):
And I asked him, listen, do you know of somewhere in Israel where I can combine my computer science skills, my programming skills, with my medical knowledge?
And he actually introduced me to Ran.
So that's how I met Ran, and Ran was very impressive. To this day, he's very impressive.
And it was clear that they were doing really interesting research in
(15:17):
what was then the Clalit Research Institute, but something about working in a health organization, in the basement of a clinic that was more than 100 years old. So, you can imagine how it looked. They actually had a pet mouse in that basement.
So, every once in a while, a mouse roamed the corridors, and they actually named him.
And it was basically then that I said, listen, this is really, it looks really,
(15:43):
really important and interesting, but I feel that maybe a medical startup would be, like, a cooler environment to be in.
And I ended up finding this kind of startup.
I actually negotiated the contract.
I was pretty happy.
And, as a courtesy, I called Ran to say, I'm sorry, but I'm taking the other job.
And to this day, I'm not exactly sure what he told me and what went on in that conversation, but
(16:05):
basically, I remember one sentence.
Ran, I don't know if you remember this call, but he said something like, I can't really explain it in a way that will convince you, but if you come, in a few months you'll know I was right.
Again, I don't know what happened there, but I hung up the phone and took the job.
And fast forward 10 years later, I've been working at Clalit
(16:26):
almost all the time ever since.
I completed my public health residency there, and my Ph.D. in computer science.
I also did a postdoc at DBMI, the Department of Biomedical Informatics at Harvard, with Zak Kohane as my mentor, and also Ben Reis.
They were my two advisors and, I think, two true mentors to this day.
(16:49):
And this thing was actually part of a pretty beautiful collaboration between Clalit and DBMI called the Clalit and DBMI Berkowitz Living Lab, made possible by a gift given by the Berkowitz family.
And basically, I think that's how my neural network was trained, by mistake.
And I think I can also admit by now that, Ran, I think you were right
(17:12):
back then, in that conversation.
So, that's the answer for the neural network part of it.
Noa, that was amazing.
And I think that's a great point to actually transition to the next part of the discussion.
We really want to dive into the work at Clalit, the work that you've done for more than a decade now, including the work that you've led on Covid-19 and on many other
(17:34):
topics for the past few years.
So, I am very privileged, actually, in my own sort of position as an academic running a lab at DBMI at Harvard Medical School, that I get to work with both of you.
And so, you know, I've been working with you both very closely, and I've really gotten to appreciate how special Clalit's research environment is, and how really amazing both the methodologists and the clinicians
(17:59):
are, and how special your data is.
And I can't really think of another example anywhere in the world that is quite like what you've accomplished in terms of just assembling both this amazing team and really investing, I think very early on, and then sustaining that investment in creating this very special dataset.
(18:19):
So, I'm probably going to get some of the numbers wrong here.
I know, at least from the projects that we are working on together, that this data spans at least two decades.
And it is longitudinal, individual-level data for millions of patients who represent something like half or more than half of the country of Israel.
And this is a dream
(18:41):
dataset to do many things: to do many epidemiological investigations, but also to ask not only questions about the prevalence and incidence of disease and risk factors, and understand those things temporally in this population, but also to ask nuanced causal inference questions about how effective different drugs or different interventions, like vaccines during Covid-19, were and are.
(19:07):
And so that's really, maybe, where I want to start, and then we can branch off. But what really caught my attention, and really made me aware of some of the work that you all are doing and how special this resource is, is the work that you did on Covid-19.
This is from 2020, all the way through the next several years.
Actually, I'm going to just interrupt myself before we talk about that.
(19:30):
Maybe we can just talk about the circumstances that led you to sort of invest in Clalit.
I think, Ran, you hinted at it.
One of the things I learned from both what you said in your opening remarks and what Noa said is that you're very persuasive, and you're able to accomplish a lot.
So I'm guessing that that has something to do with how you were able to establish this from the beginning. But maybe you can just tell us a little bit
(19:50):
about, you know, before we jump into some of the work that you've done recently, the founding of the Clalit Research Institute. What those sort of initial conditions were, how you got it off the ground, and how you were able to sort of sustain this investment in all the data and the research that you guys are doing.
Sure.
So, to set the ground, I have to say that we cannot be
(20:11):
credited for creating the datasets.
Okay.
We're standing on the shoulders of giants.
Basically, I think Israel had a landmark piece of legislation put forward in 1994, the National Health Insurance Law.
It did a lot of good things.
One of them was to make the four health funds responsible end-to-end for the lives of their patients and for all their medical needs throughout their life,
(20:32):
and paid nationally through capitation.
So this created really a very good, sound, well-motivated system for all of us.
And probably because of that, all of the sick funds decided to go digital pretty quickly.
And digital not only in their administrative systems, but also to have something that was, you know, unimaginable at that time:
(20:52):
electronic health records, in the mid-90s.
And so, they went ahead and did all of this.
And now, when we went into our endeavor, when we're talking about 2010, we already had like 15 years of longitudinal data, end-to-end: primary care, specialty care, hospital care, claims, and provider data.
(21:12):
So, it was truly unparalleled.
But Raj, you asked about investment in data, but I think what was even more important was the decision to invest in data science talent.
And so, at that time in 2010, I had already been at Clalit for a few years in a policy role.
So, I decided to kind of put all of those roles aside and established
(21:33):
the Clalit Research Institute.
And the vision was pretty simple: if we took the best data in the world and managed to couple it with the best talent Israel could offer, we'd have a leverage point to move mountains, probably on a global scale.
And that's probably why I didn't let Noa slip away in that decisive call on a Friday in 2014.
So, you know, within— So you do remember?
(21:55):
Vaguely, but I will say that, you know, it was quite, I think, a pivotal point at the time.
Because in a few years we grew from four people to 70, and this group of amazing young professionals had both the data skills but also kind of an in-depth understanding of the clinical needs.
(22:17):
The kind that you can only get when you're working within a health care organization.
It's really difficult to do that from an academic position, when you're completely detached from the evolving needs of the organization.
We were changing policy and wecould already see measurable impact.
We decided to go and reduce healthdisparities, for instance, so between
(22:39):
affluent and less affluent clinics.
So, within three years of anintervention, large scale, we were
able to show a 60% reduction
in some of the key measures.
What was the intervention?
So, the concept of that intervention was to identify the clinics least successful in achieving their health quality goals, especially those that
(22:59):
were associated with low socioeconomic status, and then go ahead and tell each one of the district heads that they could not just sweep them under the rug.
There was a special focus from the director general on those clinics serving the least affluent populations with the most difficult life conditions. And people say you can't change it, you know, the people don't do what we ask of them, and it's really difficult. And at that point we said no. It's just a
(23:20):
matter of determination, and if you want to increase the averages, don't go for the highest achievers and improve them.
Go for the lowest achievers and bring them closer to the average.
And that kind of approach, within three years and with, you know, CEO-level intervention, a 60% reduction, was absolutely amazing.
On another program, we were able to reduce, for instance, admission
(23:41):
rates for multi-morbid patients by 43%.
So, you know, amazing things could happen, and we started collaborating; we became a collaborating center for the WHO.
So, we had a lot of global interest.
And at that point I was asking myself, you know, what is it that's happening here?
Is it the data or is it the talent?
And I can tell you when the answer to this question became crystal clear to me.
(24:03):
It was in 2017, when the New England Journal of Medicine actually held a global competition called the SPRINT Challenge.
I don't know if you guys remember that.
They didn't allow scientists to bring their own data, just the skills.
The journal brought the data and created an even playing field, and everybody had to compete.
And at that point, Noa led our small team.
And we created a new way of supporting data-driven individual treatment
(24:25):
decisions for, in that case, hypertension, because that's what SPRINT is about.
With lots of innovation in there, we had new ways of calculating individual number needed to treat.
We had new types of patient-involvement interfaces.
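For context on the number-needed-to-treat idea mentioned here: the textbook definition is NNT = 1/ARR, the reciprocal of the absolute risk reduction. The individualized method Clalit's winning entry used isn't detailed in the episode; the sketch below only illustrates the standard formula, with invented risk figures.

```python
# Textbook number-needed-to-treat (NNT): the reciprocal of the absolute
# risk reduction (ARR). The risk figures in the example are invented for
# illustration; this is not Clalit's individualized method.
def nnt(risk_untreated: float, risk_treated: float) -> float:
    arr = risk_untreated - risk_treated  # absolute risk reduction
    if arr <= 0:
        raise ValueError("no risk reduction; NNT is undefined")
    return 1.0 / arr

# e.g., a patient whose event risk drops from 20% to 15% on treatment:
print(nnt(0.20, 0.15))  # 20.0 -- treat 20 such patients to prevent one event
```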
And, to cut a long story short, I couldn't have been more proud when the New England Journal of Medicine chose our solution as their first-place winner.
So, at that point, I said, yes, it's clear.
(24:46):
It's not the data, it's the talent.
First and foremost, that's what makes the difference.
Amazing.
Amazing.
Noa, maybe we can move to some of the work that you're doing in implementing predictive modeling in real clinical care.
And what really strikes me is that everyone is talking about this now with AI, with large language models, with ChatGPT and its cousins as
(25:09):
potential agents to aid in diagnosis.
But I think long before this sort of current LLM era, and even before the, let's say, various kinds of investments and interest from the broader community in AI, even, let's say, dating back to 2012 with ImageNet and convolutional neural nets, I think you all were implementing predictive models in real ways that really were changing care, were changing
(25:32):
outreach, were changing the way that you delivered information to providers to reach the patients most in need.
And I think Ran just hinted at some of that, with the sort of district-level interventions and reaching out to the less affluent clinics and areas.
But maybe you can tell us a little bit about how you see the work evolving, and what has led to success in implementing predictive modeling
(25:54):
in real clinical care, long before the current AI wave that we're in now.
Sure.
So, I think, first of all, Ran and the team, even before I joined, created the first prediction models, maybe even in 2010.
And I think it was more, like, research focused then, and at some
(26:15):
point, you asked what evolved us.
I think it was the decision at some point to transform from research to actual implementation at scale.
And the decision that we really wanted to make an impact not only through the publications, but through actually transforming care.
So, we were actually convinced that it was possible.
(26:35):
We were, I think, convinced that data was the key to doing that.
But we still had limited resources.
I think the transformation point was around 2018, when we made the decision to transform from a research institute to an innovation center.
We had hard choices to make, because the resources were still very limited, so we still sat in the same basement.
(26:58):
As Ran said, with really great people.
I actually think the basement was, like, a good selection bias for really good people, because people who agreed to come and sit in that basement really, really believed in the cause.
So, uh, we.
Mission driven, mission driven.
Exactly.
Today we have much nicer offices, but back then it was like there was a
(27:20):
camaraderie, like, where we are, what we do.
And we really tried to make something that would be influential.
So, I think the first major decision that we made was to focus on community care, because we had 14 hospitals within the Clalit network.
We could do a lot of innovation work in the hospitals, but maybe
(27:42):
it's kind of a surprising choice, because most of the innovation that we know today comes out of hospitals.
But for us, we thought that the true clinical impact would come from outpatient care, because this is where the real medicine should happen.
If you want to keep patients healthy, this is where you want to intervene.
Before they go to the hospitals. For, let's say, chronic
(28:05):
conditions, every admission should be considered as some sort of system failure that is preventable.
So, if you want to use prediction models at scale, go to that setting where you have millions of patients, and you need to choose from those patients who are the patients at risk that you want to intervene with, whose disease course you really want to change.
And our first major decision, I think, was to focus on the community care setting.
(28:30):
And I think the second major decision back then, which made the difference compared to what we did before, was to do things in a way that would avoid reinventing the wheel every time, for every deployment project.
So, like, implementing things, deploying things, is hard as it is in medicine.
There are so many things that make it hard.
(28:52):
So, you never want to do the same thing twice.
And we made this decision that we wanted to really try to use every project, and to make sure that every advancement for a specific deployment really paves the way for deploying the next solution.
And it's not an obvious decision, because sometimes you want to have quick successes and you want to make things happen quickly.
(29:14):
But if you really want to do it at scale, you have to do it systematically.
And those two decisions, to focus on community care and to do things systematically, led to the decision, or to the understanding, that what we really needed was to create a systematic process to inject data-driven insights into the point of care, specifically primary care, in a way that would be
(29:36):
agnostic to the specific medical domain, that would work whether it's osteoporosis or cardiovascular disease or hepatitis C or whatever.
And it would be agnostic to the type of data, whether it's structured data, or images, or text, or monitoring data, and to the type of model.
So, machine learning, LLMs, whatever.
(29:57):
We didn't think about LLMs back then, but the strategic decision was to basically create something that is agnostic to where the insight comes from and how it was created.
We will have a way to inject it into the point of care.
And this is where the true change happened.
And this is how a platform we call C-Pi was born.
So, C-Pi is a platform; it stands for Clalit Proactive-Preventive Interventions.
(30:20):
And it has two major roles.
One is to be the house for all ofthe prediction models we can imagine.
So, it has a proactive roleof identifying who are those
patients we want to focus onfor preventive care.
I don't know if you know this article that came out a couple of years ago, that really neatly showed what we knew for years is the truth, but it really
(30:41):
quantified it in a really nice manner. It says that if primary care physicians want to do everything by the book, including chronic care, acute care, and preventive care, they need to work only 26.7 hours per day.
And so of course it's not possible, and it's unavoidable that people will go unnoticed for some of the care that they should get.
(31:03):
So, these prediction models basically create a really smart spotlight on the right patients for the right conditions.
And now we have a place to put them, to put them all, and to create this population-based view of who you should focus on for each clinical domain.
And the second thing that C-Pi does is, after we focus on a specific patient, because we identify that this patient is at an especially high risk, we
(31:27):
create a decision support of what to do with this patient, in order to provide that patient with the best care.
And again, it comes to solve a problem where I think we all now understand that medicine is just becoming super complex, and decision making is becoming really, really hard.
In the 90s, let's say, we had fewer than 10 drugs to treat diabetes.
(31:49):
Now we have dozens.
And for each one of these drugs, youneed to know what is the right patient.
What is the relative indicationsand relative contraindications.
And it's hard.
And it's hard not to make anymistakes and not to miss anyone.
So, the second thing we do with C-Piis basically to create a very detailed,
(32:10):
very individualized, very actionabledecision support as to what do we
need to do with these patients fornumerous clinical domains to the level
of an expert clinician consultation.
So, imagine that every nightwe have experts, automatic
experts running through the HRsof all the patients in Clalit.
Identifying all the gaps incare and how we should fix them.
(32:33):
And this is, I don't know how toconvey the, the complexity of it,
but maybe, maybe numbers will help.
So, it is to the level of hundreds ofclinical features that go into each
such clinical pathway and the number of recommendations that
are being created by these pathways arehundreds to thousands of permutations of
(32:56):
what is the most accurate recommendationsto give to that specific patient.
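The nightly "automatic expert" idea described above can be sketched in a few lines. This is a toy illustration only; the features, thresholds, drug names, and function names below are hypothetical examples for a single diabetes pathway, not Clalit's actual C-Pi rules:

```python
# Toy sketch of a nightly pathway run: per-patient clinical features go in,
# individualized care-gap recommendations come out. All features, thresholds,
# and drug names are illustrative assumptions, not C-Pi's actual logic.

def diabetes_pathway(p):
    """Turn one patient's clinical features into care-gap recommendations."""
    recs = []
    uncontrolled = p["hba1c"] >= 7.0
    if uncontrolled and "metformin" not in p["meds"]:
        if p["egfr"] >= 30:  # check a relative contraindication first
            recs.append("consider starting metformin")
        else:
            recs.append("metformin relatively contraindicated (low eGFR)")
    if uncontrolled and p["heart_failure"] and "sglt2_inhibitor" not in p["meds"]:
        recs.append("consider an SGLT2 inhibitor (heart-failure benefit)")
    return recs

def nightly_run(patients):
    """Run the pathway over every patient record, as in a nightly batch."""
    return {pid: diabetes_pathway(rec) for pid, rec in patients.items()}
```

A real system would combine hundreds of features across many such pathways; the point is only that the per-patient output is an individualized, actionable list of recommendations.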
So, this platform has been developed in house by Clalit since then, that strategic decision, plus the amount of years that it took to convince everyone in the system that we need to put money and effort into creating that.
And I think we really succeeded in creating that vision of an
(33:17):
interface that is agnostic to anything.
Whatever clinical insight you bring, we'll have somewhere to put it and to deploy it into care.
And now we have many domains in that system.
For numerous chronic clinical conditions, also infectious agents.
Different things that were chosen to be at a high degree of priority
(33:38):
for us to treat our patients better, and it's currently deployed for, I think today, more than 1,500 PCPs.
So, I think it is really the way to do things at scale. Not to reinvent the wheel for any kind of insight or any kind of clinical domain that you're interested in, but thinking systematically, even if it takes longer,
(34:00):
the effort will pay itself back later on.
That's really great, and I think that's a great summary too, and a way to think about how to have impact here.
So, we want to switch topics and switch gears a little bit, from integrating predictive modeling into clinical care to some of the research that you've published, the academic research that you've published over the past few years.
And so, I think Andy's going to talk about some of your more recent work in AI and
(34:24):
machine learning, but I want to spend just a few minutes talking about the Covid-19 work that you published from 2020 onwards.
And so Ran said something a few moments ago that really, I think, stuck with me.
So, both it's the team that you had recruited, and you're very well positioned, I think, by the time that Covid-19 hits.
(34:44):
And we're in 2020, and everyone is confused about what works, what doesn't work, what we should be doing.
So you're already well positioned with this stellar team that you have.
You're very lucky you have Noa there and many others, and you're winning SPRINT competitions and doing things like that.
So you have a great team.
You have this amazing dataset, and then you have this other, or, you know, related ingredient, which doesn't get talked about enough, but I think,
(35:07):
Ran, you just mentioned it, which is an understanding of the data.
And so that understanding of the data, every time it sort of hops one away from the data creators, or what Andy and I like to talk about on the show, which is the data generating process, you know, what's sort of baked into the data. Where is it coming from? What could be confounding?
What could be issues with where different institutions record things
(35:28):
differently? Things like that, that really plague a lot of studies, a lot of observational studies, you are very, very close to the data.
So, you know the warts, you know the problems, you know how to deal with them.
And to me, that is what really stuck out in those Covid papers, just that you were very careful.
You basically did the best you could with non-randomized data, with identifying
(35:48):
those sort of potential issues and then doing either careful sensitivity analyses or careful adjustments to remove the potential confounders and issues with understanding the efficacy of the vaccine, in the general population in some of the early New England Journal of Medicine papers, and then in certain subpopulations and other groups with some of the later work.
(36:09):
So that's the way I saw it.
I'm wondering if you can just, you know, maybe spend a few minutes telling us about, like, what it was like when you were trying to publish that first paper.
I feel like Noa's told me a little bit about this here and there, and I could probably simulate some of it, but maybe I want to hear directly from you. Like, I mean, you just went on this tear of producing very,
(36:30):
very high impact papers very, very fast.
So maybe tell us about the stress levels, the environment, the amount of work, and, you know, what you saw as the sort of key ingredients.
Maybe we could start with Ran and then go to Noa.
So, you know, I think that Covid was truly our make or break.
It was like everything we've done until now had just set us up for this moment in time and place.
(36:51):
And this was like combining all of our capacities, capabilities over the years, and making them work at the right time.
The thing is that Israel at that time, as you've said, was one of the first countries to vaccinate and was vaccinating faster than anyone else.
You know, 2 million people in about three weeks, and suddenly was the first place in the world where this data was available.
So, with this data, we began working. But then we were reading very
(37:14):
carefully Miguel Hernan's books.
So we knew better than to just jump right in and do simple associations.
We knew we had to take causal inference very seriously.
We needed to emulate the target trial that we were trying to do.
And that's not trivial to do so quickly and with such complex data.
So, this is where, you know, good friends kicked in.
(37:35):
Noa talked about the Berkowitz Living Lab collaboration with Harvard DBMI.
And so Zak, Zak Kohane again, I think, as we know, one of the most prominent figures in this domain in the U.S., and myself were trying to create an ongoing work between the two teams.
And at that point, that's where it kicked in.
Because we had the best world-class leaders to support us.
(37:56):
In causal inference, we had Professor Miguel Hernan to work on the paper.
And in epidemiology of infectious diseases, we had Professor Marc Lipsitch.
What else can you ask for?
And in informatics, we had Professor Ben Reis.
So, you know, this was really important in giving us the kind of relief that we were in good hands and that we knew what we were doing. Because
(38:19):
making the wrong decisions here would have had a truly detrimental impact.
And so, what we actually did in terms of the science is we did something new for us.
It was to create digital twins, if you'd like.
Every vaccinee got somebody who was absolutely identical to them, except that they weren't vaccinated at the time, and we followed them like it was a trial.
(38:40):
And what we did is, from those couplets, if the unvaccinated guy got vaccinated, then we decoupled them and had the newly vaccinated guy get a new twin.
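The matching-and-rematching scheme Ran describes can be sketched as follows. This is a simplified, hypothetical illustration (exact matching on a single covariate tuple, one pass per vaccination day), not the actual study code, which matched on many characteristics and handled follow-up and censoring in more detail:

```python
# Simplified sketch of the "digital twin" matching described above:
# each newly vaccinated person is paired with a still-unvaccinated person
# with identical covariates; if a matched control is later vaccinated,
# the pair is dissolved and the control, now a vaccinee, gets a twin.
# The data layout and matching criteria here are illustrative assumptions.

def match_pairs(people):
    """people: dict id -> {"covariates": tuple, "vax_day": int or None}.
    Returns dict vaccinee_id -> control_id after processing all days."""
    pairs = {}               # vaccinee -> matched control
    used_controls = set()
    days = sorted({r["vax_day"] for r in people.values() if r["vax_day"] is not None})
    for day in days:
        # dissolve pairs whose control got vaccinated today
        for vac, ctl in list(pairs.items()):
            if people[ctl]["vax_day"] == day:
                del pairs[vac]
                used_controls.discard(ctl)
        # pair today's new vaccinees with identical, still-unvaccinated people
        for pid, rec in people.items():
            if rec["vax_day"] != day or pid in pairs:
                continue
            for cid, crec in people.items():
                still_unvax = crec["vax_day"] is None or crec["vax_day"] > day
                if (cid != pid and still_unvax and cid not in used_controls
                        and crec["covariates"] == rec["covariates"]):
                    pairs[pid] = cid
                    used_controls.add(cid)
                    break
    return pairs
```

The key design point is the rematching step: a control who crosses over stops serving as a comparator and immediately becomes eligible for a twin of their own, which keeps the comparison aligned with the target trial being emulated.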
So, this was really computationally heavy, but, you know, we had by February 2021 the world's first very accurate estimate of vaccine effectiveness.
(39:04):
And we had that first publication in the New England, and a short while later we had data from a follow-up swab study that we'd done in our hospitals to suggest that everything we did in that study was absolutely correct, although it was retrospective.
So, we ended up, indeed, as you suggested, repeating this methodology several times during the pandemic to provide decision makers with real-time data.
(39:27):
And we, you know, we ate our own dog food.
You know, at that time, I was appointed as the government's chair of the national advisory team on pandemic response for Israel.
So, my nights were in the cabinet and my days were at Clalit.
So, it was kind of interesting.
So, by eating our own dog food, I mean that our own data was used for national
(39:48):
decision making, and a lot of critical policy decisions were made based on real-time data, based on this data.
So, it was exciting times, best of times, worst of times.
And Noa, can I turn to you about your experience with those first few papers?
Yeah, sure.
I think Ran really described it accurately.
It was really working under pressure and understanding that
(40:13):
the world is waiting for this information.
And we really tried to do our best work, to provide the information that the world was waiting for.
And it was stressful.
We didn't sleep for, I think, many nights, or slept a very small number of hours per night back then.
It was, I think, to this day, one of the major things that we did.
(40:34):
And I also have to say that I think the focus was on our vaccine effectiveness studies and our safety studies, other studies regarding those vaccines.
But I think the pandemic was also transformative for us in the scale of using data science and prediction tools, regardless of the fact that we had
(40:58):
vaccines and were doing high impact studies.
I think it was the first time that our prediction models worked at scale, to the level of reporting every day to the highest management in Clalit what was going on with them.
So, I think even though the world cared about the vaccine effectiveness studies, for us, I think, one of the transformative things was actually the work that we did
(41:22):
with prediction models back then, at a scale that we'd never thought possible.
The first example was in March 2020, really the first days of the pandemic, and policy makers came to us, for the first time, to ask for prediction models.
So, I think until then we had to preach what prediction models are
(41:42):
and convince them to use them.
This was the first time where I think we all felt that we were dealing with the unknown. And they came to us asking for help and saying that they think that if we have a prediction model that helps to stratify who are those patients expected to have a severe condition if they contract the virus,
(42:03):
it will really help them.
And we said, this is great.
And we're really happy that for the first time you're coming to us.
Since then, it has happened many times, but back then I think it was the first time.
We'd really like to help, but please wait a few months and then we'll be glad to create such a prediction model, because right now we have all the features and all the data, but we're missing one crucial ingredient, which is the outcome of interest.
(42:23):
Because we didn't have Covid-19 patients in Israel back then yet. We had maybe a very small number, but definitely not ones that had experienced this severe outcome that we want to predict.
So, we said, wait a few months, we'll get back to you.
And they really insisted.
They said, we really need this thing in order to make decisions now, in order to inform the public now as to who should be careful not to contract this virus.
(42:48):
So we ended up actually using a flu prediction model that we had developed several years before that, to prioritize flu vaccines, to make sure that the highest risk patients would not go unvaccinated when the winter comes, and we ended up using that model. But we really wanted to integrate some Covid-level data into it, because we knew something about this disease, right?
(43:12):
We knew the case fatality rates that came out of China back then.
We knew, for example, what age groups were more prone to fatality.
What groups, sex groups, chronic condition groups.
We really wanted to integrate that epidemiological-level data back into the flu prediction model, which we felt captured some biological tendency to experience a severe condition, but we wanted to integrate
(43:35):
that into the model as well.
And the way we ended up doing that was to actually use a fairness algorithm that we had used in various studies where we wanted to make sure that our predictions were calibrated towards numerous protected groups.
So, imagine you have these protected variables, you have hundreds or
(43:56):
thousands of subgroups, and you want your algorithm to be simultaneously calibrated towards all of these subgroups to make sure that it's fair.
We actually knew this algorithm from our work with scientists like Guy Rothblum from the Weizmann Institute and Omer Reingold from Stanford, if you know these names.
And they really created a fantastic multi-calibration algorithm for fairness.
(44:16):
But we ended up using that algorithm to adjust the flu predictions to those case fatality rates that came out of China back then, to make sure that we could actually create something that is logical for Covid.
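The kind of subgroup recalibration Noa describes can be sketched as a simple post-processing loop. This is an illustrative simplification of multicalibration-style adjustment (the real algorithm handles many overlapping subgroups with formal convergence guarantees); the group names, target rates, and function name below are made up for the example:

```python
# Illustrative sketch: rescale an existing risk score so that each subgroup's
# average prediction matches an external target rate (e.g. published case
# fatality rates). A simplified, assumption-laden take on multicalibration-
# style post-processing, not the actual algorithm used at Clalit.

def calibrate_to_targets(scores, groups, targets, rounds=25):
    """scores: list of risk scores in (0, 1].
    groups: dict name -> boolean membership list (same length as scores).
    targets: dict name -> desired mean score within that group."""
    s = list(scores)
    for _ in range(rounds):  # iterate so overlapping groups settle jointly
        for name, members in groups.items():
            idx = [i for i, m in enumerate(members) if m]
            if not idx:
                continue
            mean = sum(s[i] for i in idx) / len(idx)
            if mean > 0:
                ratio = targets[name] / mean
                for i in idx:
                    s[i] = min(1.0, s[i] * ratio)
    return s
```

With disjoint groups a single pass suffices; the iteration matters when the same person belongs to several subgroups (an age band, a sex, a chronic-condition group) that all need to be calibrated simultaneously.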
So, we ended up using this weird kind of model, and within two weeks, 200,000 patients in Clalit received personal phone calls from their clinics, telling
(44:39):
them that they were at predicted high risk of experiencing this new, unknown disease in a severe manner.
So keep your social distancing, and please know what social distancing is, because no one knew back then.
And if you need to contact the clinic, this is how you should do it, because we don't want you to come in physically.
And a few weeks later,
when we actually had enough Covid patients in Israel, we validated the model,
(45:04):
and we actually were pretty surprised to find out that it worked really well.
So, for us, it was one of many prediction models that did some really heavy lifting throughout the pandemic, for many decisions, from national instructions on how to lift the lockdowns, to other prediction models that were used to ensure that high-risk patients got vaccinated in time.
(45:25):
And I think the VE, the vaccine effectiveness, studies were really important, but for us it was also very pivotal in the scale at which we used prediction models, I think for the first time at a scale where the entire management was focused on these prediction models and how to make care decisions according to them.
(45:48):
Got it.
That's great, Noa, incredibly impressive.
And this became also one of the other high impact papers too, right?
There's a Nature Communications paper, I think, that you published with the model.
And I'm just going to make a note that we should probably discuss the thread between this and the projects that we're working on together on kidney function and heart disease, because there's probably some interesting parallels.
Okay.
I want to hand it over to Andy.
(46:09):
I think we want to move on to some of the AI work and then, and then some other topics.
I think that's a natural segue to talk about some of your recent non-Covid work.
So, I'd like to talk about the paper that we published in NEJM AI, that you both worked on, called "Prospective evaluation of machine learning for public health screening: identifying unknown hepatitis C carriers."
(46:25):
So maybe Noa, could you tee us up on that?
What was the motivation for the study?
What did you do and what did you find?
Sure.
I think we can ask it again and address it to Ran.
Is that okay?
Yeah, that's fine.
Yeah.
How about Ran, could you tee that up for us, give us the
(46:48):
motivation and what you found?
Sure.
I think most of the people who hear this podcast might be aware of hepatitis C.
For those who don't, it's a viral infection that tends to become a chronic infection and gradually messes with your liver.
And after a few decades of silently carrying the virus, one in three will end up with cirrhosis, which is not good.
(47:09):
So, now we have a super effective treatment that could prevent all of this.
Okay.
It's 98.8% effective.
That's according to one of our own past studies.
So it's a near-perfect treatment.
The only thing is that 50% of our patients will not be cured by this drug, because we don't know that they're carrying the virus, and therefore it will not be there for them.
(47:31):
So, every year, Clalit follows the international recommendations and tests 50,000 previously untested people.
And among them, we generally find 38 patients in this group, okay?
So, it's great for those 38 patients, that's really great, but it's completely inadequate if we want to reach elimination of this disease
(47:53):
by 2030, as the WHO set as a goal.
So, what if we could use Clalit's insanely wide database to create a machine learning algorithm that would identify the highest risk members of Clalit, and we would begin screening those top risk individuals?
So, we did exactly that, and the results show that when we proactively
(48:19):
screened less than 500 individuals at top risk, among them we found 38 additional HCV patients.
So, 38 out of less than 500 versus 38 out of more than 50,000.
That's a 100-fold improvement.
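The arithmetic behind that claim is straightforward; using the rounded figures stated on air (38 cases found per roughly 50,000 untargeted tests versus 38 cases per roughly 500 model-prioritized tests):

```python
# Screening yield comparison, using the rounded numbers from the episode
# (actual counts were "less than 500" and "more than 50,000").
untargeted_yield = 38 / 50_000   # cases found per person screened, status quo
targeted_yield = 38 / 500        # cases found per person screened, ML-prioritized
improvement = targeted_yield / untargeted_yield
print(round(improvement))        # roughly a 100-fold improvement in yield
```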
(48:40):
I haven't done many things in my life that have shown a 100-fold improvement.
And for us, I think this study symbolizes the paradigm shift, you know, from classic public health towards predictive care. So, this is what modern day public health could and should look like in the age of AI.
(49:02):
Noa, did you want to add anything on top of that?
I'll just add,
I think it's also one of the more satisfying prediction models to develop, because usually in medicine we have these longer outcomes.
We predict things 10 years into the future, and it's hard to know whether we got it right or wrong.
And this was one of these satisfying cases where each day we could come in, rerun
(49:25):
the extraction, and see how many patients we got right last night.
How many newly identified cases there were, because we sent them to be screened.
And as a clinician who chose not to be a practicing clinician, it was really feeling the impact of, wow, we've saved those people.
We made sure that they will get treatment that could potentially save their lives.
(49:46):
So I think that's my favorite part about that project.
Awesome.
So I do want to use this paper as a jumping off point to talk about future applications of AI in public health.
But I think actually I'd like to first, uh, sort of pull back the curtain a little bit here.
So I was the handling editor on this paper.
So I, you know, I talked with Noa about it.
I talked with, uh, you're shocked, I know.
(50:08):
And I think there was unanimous agreement that this was an important, high-quality study.
What it sparked off for us internally was: is this AI?
So, I think that this was an interesting test case for us to try and actually, for our own purposes, try and nail down a definition of what we mean when we say artificial intelligence.
I mean, like, super high-quality study, A-plus team.
(50:31):
I think everyone was all excited about that.
It did spark some internal debate about what we should call AI, what we should call machine learning, and what the taxonomy of this looks like.
So, I'm very curious to hear your thoughts on this.
I know that trying to define AI is always a hazardous thing to do, but I'd love to hear your thoughts on how you think about that.
Noa, I think you have very strong opinions on this one.
(50:51):
So please begin.
Yeah, I actually do.
And it's interesting to hear the behind the scenes.
I didn't know that part of the story.
So I do have strong opinions about that, because I think, the AI, uh, what is AI?
It's a definition that is, I think, well accepted, right?
AI is any technique that basically enables computers to mimic human intelligence.
(51:13):
Or maybe today we need to also say surpass human intelligence.
But I think that people forget that AI is a toolbox, and it's a wide set of tools, from, for me, simple logistic regressions, to more sophisticated ML models, through foundation models and, if you specifically think about text, now LLMs.
(51:34):
But I think that people who are active in this domain really need to develop skills in all of those tools.
And not to become fanatics of a specific technology, because at the end of the day, I think that the real thing to do is to start from the clinical need.
(51:54):
To understand the real clinical need.
And then to take the most appropriate tool from your toolbox and use it.
So, we actually make sure that we are well versed in all of these tools.
We create things from logistic regressions, and even simpler things than that, even points-based models, to foundation models that we actually develop with great postdocs.
(52:16):
For example, from Marinka Zitnik's lab and Zak's lab.
So, we believe in the full spectrum of these tools, but we, I think, really believe in using the right tool for the right task, and not going with the hype of the technology. Because if you offer me two different models for the same clinical need, and one is simpler and does the work just as well, I will always opt for the
(52:40):
simpler model, for thousands of reasons of why we should deploy things that we can understand and trust and monitor. But if that task can only be made possible by a more complex model, we'll use that.
But all of the spectrum is AI, as long as you use data to do better care.
Ran, I wonder if you agree.
(53:01):
I think that's exactly the point.
I mean, we feel sometimes people are so excited about the technology that they are not thinking about the real clinical need.
They have a tool, and they're looking for the clinical justification.
Trying to find the nail for the large language model hammer is not a good way to do what we do.
So, what we truly aspire to do is to start from the real clinical need
(53:23):
and then ask ourselves, what is the best, most simple thing to do?
Hint, hint: not an LLM, in 99% of the cases.
So, whenever you can, good old machine learning is the real answer that you should go for.
And don't get me wrong.
I mean, we do projects, amazing projects, with AI engines and LLMs, but that's not
(53:44):
my current knee-jerk model of choice, let's put it this way.
I think that's a very pragmatic answer.
And I 100% agree with the solutions-oriented approach: use the most complex model you need, but make it no more complex than it needs to be.
I think I have a particular brand of PTSD when it comes to language around models.
(54:04):
So, I'm trained both as a statistician and a computer scientist.
And I've just been in the middle of all of these language wars: is logistic regression machine learning?
Is it statistics?
And so honestly, like, depending on what side of the bed I get up on on a given day, I may come down on one side of that argument or the other.
So again, I think this paper for me specifically was a very interesting
(54:24):
test case about what the right way to describe the models was.
I know you and I went back and worked on, we came to consensus on the language and stuff like that.
But I think, yeah, it did spark a lot of good discussion amongst the editors about how papers should be described and things like that.
Again, complete consensus on the quality of the paper and the work.
A little bit of a pedantry exercise on our side, I think, though,
(54:47):
about exactly how to describe it.
And I see Raj grinning.
So I was just going to say, I was like, I thought AI was just what we haven't done yet.
So, you know, yeah, what we're not capable of doing.
But, yeah, I totally agree.
I also, Noa and Ran, I really liked the emphasis on simplicity, and keeping it as simple as possible while
(55:09):
sort of achieving the goal that you're trying to achieve.
And I think there is, especially now, a lot of energy around sort of technology-first approaches, as opposed to the needs or the sort of clinical public health goals, which I think are really clear here.
It's really clear that your mission is to change care and to do very high quality research, and that dictates the technology that you use.
(55:31):
Completely agree.
So, having said that, I often get asked too, as someone who works in public health, how will AI affect public health?
And I think, unfortunately, my knee-jerk reaction is usually: negatively.
Where my brain goes is, like, misinformation.
There are just, like, waves and waves and waves of, or huge potential for LLMs to create
(55:52):
misinformation machines.
But what I think I really like about the work that you guys do is that it actually points to a positive use case for this technology in public health.
So, could you continue to be the ray of light that you've been so far and tell me how you think that AI will impact public health going forward?
I'm actually really surprised that you said that you think it has a negative impact on public health, because I really think the exact opposite.
(56:15):
I'm really sure that this is the way public health will be revolutionized. Because I think we can all agree that public health is the best health we can provide, and not only because Ran and myself are public health physicians, but I think it's the best type of medicine, right?
We get more health, more healthy years, for less money, fewer side effects.
(56:38):
It's one of these rare cases of a win-win situation all around.
So now we have these automatic tools, and we can identify, basically, who those patients at risk are and what we want to do to provide better care for them.
And, again, no matter what the tool is, whether it's an LLM that went through your file and identified the care gaps, or a prediction model
(57:01):
that identified that you have a high likelihood of suffering an MI in the next 10 years, now we have all of this automatic information produced for everyone in the population.
Or everyone we have data for.
And now we can really stop being reactive and start being proactive and preventive about it.
And I'm not only talking about, like, the classic primary prevention of
(57:23):
really preventing an event before it occurs. Like, predicting someone will have an osteoporotic fracture, treating that patient, and then preventing that fracture from ever happening, so that now we have this counterfactual reality where a person who might have sustained a fracture, and maybe died from complications in the next year,
(57:43):
now goes on to live for 20 more years of good health.
So that's one kind of public health impact that we could have with these kinds of tools.
But it's also secondary prevention, of early detection, like the example of hepatitis C that Ran just gave.
It's also tertiary prevention, for example, for patients we already know
(58:03):
suffer from some conditions, say diabetes.
We gave that example earlier, of patients who are already suffering from a condition, but we want to make sure that they are being treated with the most appropriate drug.
So now you have automatic ways to identify all of those high-risk patients and all of those gaps in care.
And we really have the opportunity, I think, to influence
(58:25):
patients' lives at scale.
And I have to say, it's not instead of regular medicine, and it's not instead of AI for a specific patient, which is a lot of the AI we do. It's kind of creating a safety net for those patients, in addition to that wide-cast net that will capture all patients and the wide screening criteria that
(58:45):
have potentially high sensitivity.
Now we have all of these tools to shed spotlights on the right patients, to make sure that they get the right care.
I think it's the new age of public health, and it's more health that we can provide at scale, compared to any individual specific intervention that we make.
It's really changing patients' lives and really buying them
(59:10):
dozens of healthy years.
So, Andy, I'm really surprised by your answer. But this is my answer to "will public health be changed by AI."
So, I'm willing,
I think I will now revise my answer going forward and say that we are in a golden age of public health enabled by AI.
I think that those are very compelling reasons for optimism.
(59:30):
So, thanks for that, Noa.
I think we're going to keep moving along to the lightning round now, if you're both ready to answer some lightning round questions.
Let's do it.
Alright.
Yup.
So, the rules here are that these are quick, kind of rapid-fire questions.
So just give us short answers.
(59:50):
We'll ask each question to both of you.
We can maybe start with Noa for each of the questions, and we'll take turns, and we'll kick us off with this first question.
So Noa, if you weren't in medicine, what job would you be doing?
Oh, that's a challenging one.
So, I think I'm doing the alternative job.
I thought I should have been in medicine.
I'm actually doing the alternative job.
(01:00:12):
It's hard for me to imagine a counterfactual reality where I'm doing something else, because I think I'm really lucky to have my day job as my hobby, and I wouldn't really change that.
But if I really have to choose something, I imagine something to do with maybe being, like, a graphic designer or architect, anything that deals with something that has, like, aesthetics and neatness to how it's done.
(01:00:33):
But it's also somewhat like writing a beautiful piece of code or a well-structured paper.
I see the aesthetics in all of these things, but I really like creating things graphically.
I like to create presentations for that reason.
So maybe that, but honestly, I wouldn't change it.
That's why the figures are so good in the high impact papers.
It all makes sense now.
Alright, Ran, same question.
(01:00:55):
So, I'll keep it short.
I think I would have been somewhere in the entrepreneurial world.
I would have probably started my own thing, and I have no idea in what domain it would be.
And I think there would be a lot of serendipity in choosing where it would be, in education or any other domain.
But one thing is sure: I would try to maximize the impact using technology.
But again, any day of the week, any year, I would choose again and again
(01:01:18):
the path that I've taken and nothing else.
Nice.
Noa, if you were given one of these superpowers for a day, which one would you rather have: invisibility or the ability to fly?
Oh, wow.
Fly.
I think every day.
I don't have a good reasoning why, but that's the answer.
(01:01:40):
Okay.
Nice.
Ran.
I couldn't imagine being invisible.
I would choose flight.
I think invisibility for me would be a punishment.
So, I always feel like this is a good test to see if you're secretly an evil, like, supervillain. Because, like, flight is something that we can do now, but invisibility is something that we can't, and you could use it for nefarious purposes.
(01:02:00):
So, I think you both passed the
"Are you secretly a supervillain?" test.
Alright, Noa, which typeof medical specialty could be
most easily replaced by an LLM?
Oh, wow.
I, you can't answer that one.
But to be honest, I think that's, Ithink LLMs can potentially change a
(01:02:22):
lot of things in the way we do stuff.
I don't see them really replacingany medical profession soon enough.
I'll risk it and say that I thinkthat image analysis models will have
a higher impact on specific clinicaldomains before a lens will have.
Ran?
No, I would say the same thing.
These could be ancillary toolsfor many professions in trying
(01:02:44):
to make physicians job easier.
I think they have the potential ofrestoring some of the joy of work that
we've lost in becoming data entry clerks.
Which is exceedingly enjoyable by manyphysicians now spending most of the
time doing that so take that away andpeople would be happier in any profession.
But I don't see LLMs even remotely begin tomake the difference. And again, yes, image
(01:03:07):
recognition would take away some of thetechnical aspects of radiology, pathology,
etc., but they have other things they needto do now that their time is away from the
technical aspect of looking at pictures.
And I think that these will be beautiful professions.
Good answers.
Good answers.
I agree.
To avoid order bias, I'm going to go back to Ran first, and
then Noa can answer this next question.
(01:03:29):
If you could have dinner with one person, dead or alive, who would it be?
You know what?
I'll choose someone alive, and any day of the week I'll choose to
have dinner with my parents.
They are the people who influenced me the most in the world.
I think everything that I am is because of what they have put in me, and I'm
(01:03:52):
still to this day trying to satisfy their expectations to some extent,
sometimes succeeding, sometimes less.
So, I enjoy my time with them, and I'm trying to do this as much as I can.
So that would be my choice.
At least now I had some time to think while Ran talked.
My answer is a bit similar, but also very different, because I would choose,
(01:04:15):
I think, my grandparents, who are not alive.
They were partisans in World War II, and they died many years ago.
And it was before I was mature enough to really understand or fully
comprehend what their life story was about, and I think that I would go back
and try to make sure I know every piece of detail about it. And also, to tell
(01:04:38):
them how much I admire them for what they went through. And also, my
grandfather was a professor of chemistry, and he actually worked
his entire life on developing drugs.
So, it would have been really interesting for me to have a professional
conversation with him about what I do today, which I never got to have
(01:04:58):
because I only knew him as a child.
So I think those are the people who I really regret not having
more mature conversations with.
Alright, Noa, this is our last lightning round question, and this
one I do want to start with Noa, and Noa will understand why in a moment.
Noa, what is your favorite piece in chess?
Wow, so, I'm not good at chess.
(01:05:23):
I'm really bad at chess, I have to admit that.
So, the only thing I can answer about chess is that I wish I had been a
much better player, because first of all, I think I would have liked it.
And I would have enjoyed it, if I was good at it.
And the second thing is, I think I would have shared an interest
(01:05:44):
with the rest of my household that I currently don't share.
And I think this is the reason why you're asking that.
And yeah, so I should say queen or something, right?
But the honest answer is I'm just not good enough at it
to really choose something meaningful.
Alright, fine.
We'll let it pass.
(01:06:05):
Ran?
I think the answer would be knight.
I think that's the kind of piece that makes the game interesting.
And, you know, the thought about how you use your knights well is
what makes a real player a pro.
I love it.
I love it.
So the backstory here is that, Ran and Andy, Noa's sons
actually came over to our house.
(01:06:25):
Noa's family was over at our house, and they're very interested in chess, and they
had a portable chess board with them.
And our daughters became fascinated with what they were doing.
They're pretty young, but they learned that day and have
not stopped playing chess since meeting Noa's two sons.
So, we'll let that one pass though.
I was expecting, I was hoping for queen or something like that, but
(01:06:45):
all good with that answer.
Alright.
So, I apologize, but I'm going to have to hop off at two, Raj. You're
in good hands with Raj, and he can take you home, but I have a pretty
hard stop at two, apologies for that.
So, Noa and Ran, you passed the lightning round.
That was fantastic.
We really just have one last kind of big picture question for both of you.
(01:07:06):
And so, this is, you know, we've talked a lot about the investment, the recruitment,
I think, of very high-quality and talented people at Clalit.
And this is a question about thinking about how we can emulate
that in other places, right?
So for other countries and health care systems
that are looking to emulate Clalit, to produce both the infrastructure
(01:07:29):
that has led to integrating predictive modeling very early on and using it to
improve care, but also to do very, very high-quality research with the special data
and special teams that you have assembled.
What advice can you give? You know, is there a day that we
could do this in the United States?
Could we do it in a country like India?
Maybe just reflect on that.
(01:07:50):
What do we need to do to really emulate what you've done?
So, I think the short answer is yes, you could do this in India, definitely.
And the longer answer would be, there are things that are really hard to change.
Okay.
You can't change the structure, the embedded incentives, the bureaucracy,
the risk appetite, the abundance or availability of existing data; they all
(01:08:12):
really differ from country to country. But there are things you can do
that are feasible stepping stones and can definitely be true game changers.
So, one such example for us, I think, is our digital health residency program, which
is aimed at training the next-gen physicians, those who can become the driving force
of transforming our health care systems.
(01:08:34):
So, the bottom line, I think, is that if you really want to change medicine
with AI, you need people who speak three languages: AI, medicine, and
public health and epidemiology.
And any country can do that.
I can add that I think I was the first resident in this
(01:08:54):
program, one of the first two.
And I think I really shaped it to be something that provides a unique set of
skills, which is what's needed in order to make a change in this domain.
So, you need to know your medicine, and I think that often people wonder
whether you should start with physicians and teach them the other
stuff, or whether you need to start with computer scientists and teach them some medicine.
(01:09:18):
I think it's really hard to teach medicine.
I think that first and foremost, and it's also related to that answer about what
AI is, you need to start with the needs, and in order to really understand the needs
thoroughly, you need to spend enough time in the settings where it actually happens.
And then you need to really understand data.
But if you understand medicine, you also understand how the data is formed.
(01:09:41):
And you need to create or acquire some skills like programming, and to
understand your toolbox that we've talked about,
including the full spectrum of AI tools.
And you need some product management training, because you want to create
things that will truly be useful.
And you need some epi and proper research skills, because you really want to be able
(01:10:04):
to measure the impact of what you do.
So, I think today we have one of the biggest public health residency programs
in Israel, and we were able to recruit truly fantastic residents
because I think they understand the potential and the impact and everything
that this domain has to offer.
And they're really superstars in their domain.
(01:10:25):
And it's really good advice because it's both very feasible
and also a true game changer.
I think for us, these people were a game changer.
And with a small number of people with the really right set of skills,
you can really make a difference.
Awesome.
Thanks.
Well, Noa and Ran, thank you so much for being on AI Grand Rounds
(01:10:48):
today. We loved talking to you.
Thank you so much for having us.
It was really fun.
Thank you.
This was a great talk.
Thank you.
That was awesome.
This copyrighted podcast from the Massachusetts Medical Society may
not be reproduced, distributed, or used for commercial purposes
without prior written permission of the Massachusetts Medical Society.
For information on reusing NEJM Group podcasts, please visit the permissions
(01:11:12):
and licensing page at the NEJM website.