Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:02):
We got an email from James Donovan from the strategic partnership team at OpenAI who really connected the dots in terms of our idea and this new emerging technology in the company, and said to us simply: Do you know any patients who could benefit from this?
And that really kickstarted Fatima and me talking about this project
(00:25):
and getting it off the ground.
Yeah, and I remember it being kind of a whirlwind week, because we got access to the technology, and so I played around with it over the weekend.
I took a look at how you could upload a sample and how it could reproduce my own voice.
And then we talked about what kind of patient and what background of
(00:47):
a patient would make sense for trialing this technology.
And that's when we came upon Lexi.
Lexi,
yeah.
So,
this is a patient who, very young, only 21, had a vascular brain tumor compressing the back of her brain stem and had to undergo a 10-hour surgery to have it all removed.
(01:07):
One thing that was very striking about this patient was when she was in the pediatric ICU and she was intubated.
She was actually texting at the time.
I just vividly remember that on rounds, seeing her, like nothing would stop her from communicating with those around her.
And so we talked about a few patients, I think.
But Fatima, you were like, this is definitely the one we should approach
(01:30):
and see if she'd be interested in trialing this technology.
Hi, welcome to another episode of NEJM AI Grand Rounds.
I'm Raj Manrai, and we're delighted today to bring you our episode with Rohaid Ali and Fatima Mirza.
Rohaid is a neurosurgery resident at Brown University.
(01:51):
And Fatima is a dermatology resident, also at Brown University.
This was really an episode of several firsts for us at NEJM AI Grand Rounds.
This is the first wife-and-husband duo that we've had on the show.
This is also the first episode where we've had two residents join us together as guests on the episode.
I was really struck during our conversation by the strength
(02:13):
of their collaboration.
We talked about several of their influential papers, including their study on surgical consent forms, where they used ChatGPT to simplify consent forms in the state of Rhode Island, and another project where they were able to help a young patient who had a brain tumor and lost her voice.
Using an AI model, they created a custom voice that could be used with a
(02:34):
text-to-speech model that the patient now uses.
Also, this was the first episode where we were able to play a version of the newlywed game with our guests, which was a lot of fun.
All in all, this was a very insightful and really, really fun and illuminating conversation.
The NEJM AI Grand Rounds podcast is brought to you by Microsoft,
(02:56):
Viz.ai, Lyric, and Elevance Health. We thank them for their support.
And now, our conversation with Rohaid Ali and Fatima Mirza.
Well, Fatima and Rohaid, it's great to have you on AI Grand Rounds today.
Thanks so much for having us.
(03:16):
Thank you, we're really excited to be here.
This is a question that we always like to get started with, and maybe we can go Rohaid first and then Fatima afterwards.
But this is a question that we ask all of our guests, which is: could you tell us about the training procedure for your own neural network?
How did you get interested in AI, and what data and experiences led you to where you are today?
(03:37):
Thanks so much, Raj, happy to be on the show.
I was born and raised in Oklahoma.
I went to college at Penn and got my biology degree there.
Then I went to Stanford for medical school and came over here to Brown for residency.
I would say, of all the things that probably trained my neural net the most, it's probably been marriage.
And having the privilege of being married to Fatima for the
(04:00):
past seven years; we've known each other since 2015, when my roommate in medical school introduced me to her.
And so, you know, it's been phenomenal.
We've been getting to work together on all these projects and bouncing ideas off of each other.
So that's how I got to where I am today.
Oh, and I'm the Chief Neurosurgery Resident here at Brown.
And next year I'll be starting as the inaugural Spine Fellow at Mass General
(04:24):
Brigham at Harvard Medical School.
So before we move on, we like to go back closer to initial conditions.
So why medicine?
How'd you get interested in medicine?
What led you to doing medical AI?
Could you take us through that trajectory?
Yeah, of course.
So, my father is a professor of public health at the University of Oklahoma, where I grew up. And for a long time I was interested in health on a
(04:49):
population scale as a result of that.
In fact, one of my more formative experiences was during college, when I was an intern for the U.S. Surgeon General, where I was involved in everything from ghostwriting articles that would be sent out for the public in the public domain.
Is this Vivek Murthy at the time?
It was actually Obama's first surgeon general, Dr.
(05:10):
Regina Benjamin.
And so that was a really great experience.
But honestly, what was sort of missing for me in that experience was the ability to connect directly with patients.
And so it tilted me then toward looking to medical school.
When I was in medical school, one of the most enjoyable times I had was spending time in the neurosurgery service, rotating there, and the neurosciences in general,
(05:34):
I think, just very much excited me for all the potential that it had within it.
In terms of how I got involved with AI, I mean, I think Fatima and I, it's fair to say, are relative newcomers to the space.
We're just in awe of everyone who's come before us, including all the amazing work that you two have done.
I think how we got into it was pretty organic.
So when ChatGPT was launched in November of 2022, it just so happened to be
(06:01):
around the same time that Fatima and I were studying for our board exams.
And so what we found ourselves doing was, while we were studying for the board exams, we were progressively using it more and more to actually study.
Like just looking up quick facts; rather than looking up keywords in a textbook, we were asking whole questions and found it to be very useful.
(06:21):
And in the process of doing so, I think that's what started getting us interested in AI more generally.
We started to wonder, hey, how well do these tools work on our specialty exams?
And so that's when we started doing our first studies in AI.
Perfect, thanks.
(06:42):
Now, Fatima.
Yeah, absolutely.
So, hi, I'm Fatima, I'm currently the Chief Resident at Brown for dermatology.
I started my academic career at Harvard, where I studied biochemistry and biophysics, and then I got my master's in public health in England.
And then I did my medical school at Yale, and now I'm here with Rohaid at Brown.
(07:05):
And in terms of my interest in medicine, I knew from a young age that I wanted to be a doctor, that I wanted to help people, the same clichés that most people have.
But I also had a passion for medical communications and for empowering patients in their care and their health by really understanding what can sometimes be very complex health messages.
(07:27):
And so, actually, that's part of what drew my interest to do my master's.
Beyond that, when I was actually in England, I worked as a medical production assistant for the BBC, and I got to see how they took really complex medical thinking and were able to disseminate it to a larger audience.
And so, I've always wanted to work somewhere at the intersection of
(07:49):
taking care of patients clinically on the individual level, because I find that incredibly rewarding, but also being able to communicate things at a larger scale, to be able to make an impact from a public health setting.
Like Rohaid was mentioning in terms of our foray into AI, one of the things that I've learned throughout my time is, as new technologies emerge, you can either
(08:11):
get on board or you can be left behind.
And we realized that it was so useful as we were studying.
So Rohaid was studying for his neurosurgery in-service exams, and I was studying for my dermatology ones.
And it was so good at explaining these complex questions that, when I would go to our textbooks, I would still be kind of struggling.
And so, I said, wow, this is really good at explaining really complex concepts
(08:35):
to doctors or doctors-in-training.
Can it do the same thing for patients?
And so that's really where all of this started, and that's how we started our first project together.
And so, backing up to when you were both finishing medical school: clearly you both matched at Brown, so you have solved the intractable
(08:55):
two-body problem for medical training.
Could you tell us how you did that?
Tractable.
Tractable two-body problem.
Uh, sometimes, sometimes tractable.
There may not exist a preference set that satisfies both bodies, so it can be intractable.
Fair enough.
Fair enough.
So.
I think what I'm most proud of, about how we solved this two-body problem,
(09:16):
is we solved it on multiple levels.
So not only are we at the same institution, but we're in two different specialties that have different lengths of training, and somehow we managed to wing it where we're both graduating the same year.
So we're perfectly aligned in that sense.
I don't know, in terms of how it happened, I think we both just got very lucky.
Yeah.
And we have incredibly supportive programs.
(09:36):
Did you both graduate medical school at the same time?
No, so actually I graduated a few years before her.
In college, she was just one year behind where I was.
And then she had gone off to do her master's and then had a year where she was doing research.
I was one of those that just went straight through the whole time, did high school
(09:59):
in three years, college, med school, all the way straight through, which is, I think, unusual in neurosurgery.
But, fortunately, it made it such that our timelines synced up pretty well.
Nice.
But so, the couples match option wasn't even available to you, I take it.
Got it.
No.
Unfortunately not.
But actually, when I ended up taking my research year in medical school, part of what pushed me to it, too, was, hey, if we ended up matching
(10:22):
at the same place, then we'll be able to graduate at the same time too.
So, it all worked out.
Awesome.
Amazing.
Alright, great.
So, I think one of the things that Fatima said in her initial remarks was the importance of simplifying messaging for patients, and really communicating with patients, as one of the main motivations for getting into medicine
(10:43):
and then getting into the research that you both are doing together now.
And so, I think that's a great transition point to what we want to talk about for this next part of the conversation: the work that you both are doing using ChatGPT and other AI models to simplify complex messaging as, I guess, one of the major themes, amongst others,
(11:04):
in the work that you're doing together.
So, you published this paper. And actually, just to give a little bit more context about the podcast, too: we started this podcast, I think, almost a year before we published the first papers. But the original vision, and this is maybe one of the first, if not the first, conversations where we've actually realized this, was to have conversations like this with authors of
(11:26):
papers that are published in NEJM AI.
And so, the two of you: I think, Fatima, you're the first author on this, and Rohaid, you're the last author on this.
So, you guys worked that out, who's first, who's last on the paper.
And it was this really interesting, very, very intriguing analysis of using ChatGPT to facilitate informed consent
(11:47):
in the context of surgery.
So patients who are undergoing surgery at one of the largest, if not, I think, the largest health system in the state of Rhode Island, which is the teaching hospital affiliated with Brown. And the title of the paper, "Using ChatGPT to facilitate truly" – I think that's probably the key word there, truly – "informed medical consent," was one of the first papers that we published as a case study in NEJM AI
(12:11):
back in the early part of this year.
So, this caught our attention both when we were discussing and evaluating the paper, and then it caught our attention again when Greg Brockman, who's the CTO of OpenAI, tweeted it out before we published it.
And I imagine you two were a little bit nervous about the publication timing there, but we thought it was cool.
(12:31):
And so, we retweeted it, and we were happy to see that it was getting some attention and that there was interest from several different types of folks in the work that you were doing.
So maybe we could start with that paper, and you could tell us about the background, how you got interested in tackling that particular problem, what you actually did, and what were the key challenges that you had to overcome
(12:52):
in conducting that research study.
Yeah, absolutely.
You know, as residents, we often are at the front line of interacting with patients, particularly when it comes to the consent process, whether that be in neurosurgery when they're doing neurosurgical procedures, or when we're doing bedside procedures for biopsies and other things in dermatology.
(13:13):
And what I realized was, so much of the consent process is really built on trust.
And so, so much of it is actually being able to discuss verbally: here are the risks, here are the benefits, and all of that.
But what I realized when I actually was consenting a family, and particularly a
(13:34):
pediatric patient, for a procedure was: while mom understood everything that I was explaining to her in terms of the procedure itself, when she went to go actually sign the form, she said, I don't really understand what's written in front of me. And she actually shared with me, you know, this is a really overwhelming time for our family. I'm going to have
(13:55):
a lot of questions, and obviously we're always open to those questions. But when I'm here late at night and I've had this sheet of paper in front of me, am I really going to be able to understand what we talked about and look at it?
And so that's part of what really inspired me, because I was thinking, this is the same consent form that we use across the hospital for all sorts of procedures.
If this one patient and family is having this issue, then
(14:17):
I'm sure it's more universal.
We talked about that experience collectively between the two of us.
We've had these consent conversations with hundreds of patients in our residency.
And one of the ways that I thought about it happened to be because, growing up in Oklahoma, we were one of the first states to adopt this AR Reading Club,
(14:37):
Accelerated Reading Club, that looked at grade levels of books and made sure that every student was reading books at their appropriate grade level.
So, there were stickers on every book in our libraries and in school saying what grade level it was appropriate for.
And so, as a student, you had to read books that were appropriate to your grade level and then hand in essays that were appropriate to your grade level.
(14:58):
And so, to try to put some structure around this problem, we started asking ourselves, okay, what grade level are these consents written at?
And what we found was that, looking at consent forms from 15 different academic medical centers across the country, the average grade level was that of a college sophomore.
And then to make matters worse, looking into this problem further, we saw that
(15:22):
over 40 years ago, this problem had already been highlighted in the New England Journal of Medicine, which means that despite decades of awareness, this problem hasn't gotten better.
In fact, perhaps it's even gotten worse.
And so, at the time, as we had mentioned earlier, we had been using ChatGPT to help study for the boards.
(15:43):
So, we asked a flip question: could ChatGPT help patients better understand their medical information?
And I think that's how the project started.
So, we took the consents into ChatGPT and asked it the simple question: while preserving content and meaning, convert this consent to the average American reading level.
(16:05):
Yeah.
So, your prompt was: preserve content and meaning and convert this to an average American reading level.
Yes.
Yeah.
And it did so beautifully.
Not only did it convert it to the average American reading level, but it also simplified the consent in a way where it actually shortened the length of the consent and was able to, I think, make it more approachable for patients.
(16:28):
Did you have to play with a lot of prompts to get that?
Articulated?
No.
It was the first one.
And, uh, yeah.
Did you specify what the average American reading level was?
I'm just wondering if there was an opportunity for ChatGPT to be super judgmental about the average American reading level.
You know, it's incredible.
I think it's actually a hard number to exactly pin down; as you can imagine, it
(16:51):
varies quite significantly, county to county, state to state. But in general, I think it's widely acknowledged that over half of Americans don't read above a middle school reading level.
And so, when we applied the tool, to test whether it was at the target reading level we used something called the Flesch-Kincaid score.
(17:11):
Looking at this further, it's pretty interesting the way this first came about: the Flesch-Kincaid rubric was initially designed by the U.S. military, because of what they found during the Vietnam War – A lot of psychometrics and this sort of evaluation of humans have military origins, right?
And yeah, it's not surprising that this one does as well.
So, with Flesch-Kincaid, that was the primary way that you assessed the sort
(17:35):
of existing reading level for the current forms, and then also the ChatGPT-modified version of the form as well.
And did you look into other ways to assess the reading level, or other potential rubrics?
I think that is the standard one, but I'm curious what your thoughts are on how appropriate it is, and what even that rubric, even if it's standardized
(17:55):
as it is, might be missing for communicating things, making them accessible to diverse groups of patients.
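As context for listeners, the Flesch-Kincaid grade level the guests mention is a published formula over average sentence length and syllables per word: grade = 0.39 * (words/sentence) + 11.8 * (syllables/word) - 15.59. A minimal sketch in Python is below; the vowel-group syllable counter is a crude stand-in for the syllable dictionaries real readability tools use, so treat it as illustrative only.

```python
import re

def count_syllables(word: str) -> int:
    """Very rough syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Short, common words score at an early grade level, while long, polysyllabic medical language scores much higher, which is exactly the gap the guests measured in consent forms.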
Yeah, so I think Fatima can talk about how we engaged stakeholders, lawyers.
Yeah, absolutely.
So, when we first had the output, right, we thought, okay, well, this makes a lot of sense to us as the ones who are
(18:15):
typically consenting these patients, but part of what was really important, particularly in rolling this out in our health care institution, was making sure that there was buy-in and also expert oversight from many different aspects.
And so really part and parcel of what we did was making sure that there was an interdisciplinary team that really evaluated whether or not this
(18:36):
consent form really stood the test.
And so, part of that involved including people like patient advocates, including the leadership in the hospital, the surgical executive committee. Yeah, it was really like everyone in the hospital. But I think in particular, being able to tell the surgeons that we ran it by a medical malpractice attorney,
(18:58):
and saying that they felt that the legal content was the same, was a big turning point, because that was a source of concern early on, as you can imagine.
Yeah, you had plural humans in the loop here evaluating the form that ChatGPT produced.
And I think this is what we're seeing over and over again, right?
It's a way of suggesting, accelerating a potentially different way of doing
(19:20):
things, but the importance of having a human, or in your instance, I think, and this is what was attractive to us in looking at the case study too, you had multiple humans in the loop here.
And then I think it actually got approved, right?
So, everyone came to consensus.
They approved; you said one prompt.
Was this also the first model output?
Or did you have to run it a bunch of times and find one that was like
(19:40):
the one that you liked, and then that was the one that moved forward?
Or was this literally the first output of the first prompt for ChatGPT?
Yeah, so I think when I presented this to the surgical executive committee, one thing I was concerned about was that ChatGPT relabeled anesthesiology to sleep medicine.
And so, ahead of the meeting, I went to our lead anesthesiologist and asked
(20:03):
him, hey, this said sleep medicine.
Don't worry, I changed it back to anesthesiology.
He said, no, no, keep sleep medicine in there.
Because when I introduce myself to patients before surgery and say I'm an anesthesiologist, half the time I get a confused look on their face.
But when I mention I'm a sleep medicine doctor, that's when they get it.
And I was like, oh, wow.
Okay.
So, in reality, we really didn't have to change much of it to ensure that
(20:26):
it was compliant with the information that we wanted to get across to the patient in that conversation.
So, you got approval, and then this became the form, right?
You rolled this out in, I think, September of last year.
And so maybe you could give us an update on where Lifespan is at.
Is this still the form that they're using?
(20:47):
Have you updated it, iterated some more?
Where are we today?
Yeah, so this form has been in continuous use since fall of 2023 and is used for more than 40,000 procedures that are done throughout our health care system annually.
And actually, it's really inspired other changes within how our forms function and work, like chemotherapy forms and other patient information forms.
(21:11):
And so, it's kind of taken off like wildfire, and it's really been exciting to see that.
And I think one of the things that was really exciting for us throughout the whole process was we kept thinking, can we do this, right?
If we've known about this issue for 40 years, why has it not been changed yet?
And what I realized when we were meeting with all these people is,
(21:31):
when you put the patient at the heart of the issue and you say, we're doing this to make things better for patients, everyone gets on board.
And I think it's really rung true in the fact that not only did we first roll out this procedural consent form, but now everyone wants to incorporate this.
And it's been really exciting that it's taken on a life of its own.
(21:51):
Yeah, every intake form that every patient getting admitted to the hospital signs is now simplified.
Every oncology treatment form, radiation treatment form; if you're getting high-risk medications, you're signing a form that's been simplified by this process.
Also via ChatGPT, also with the humans in the loop, sort of the template that you built in this first case study.
And I imagine other health care systems are also interested in doing this, too.
(22:14):
That was the really exciting part, that it wasn't just our health care system.
For us, we really saw it as a proof of concept, and once this was rolled out in the largest health care system in Rhode Island, there were a lot of other institutions that reached out to us and wanted to follow suit.
And so, for us, that's really exciting, because not only do we get to make a difference at the
(22:35):
individual patient level, but it really does help us make a difference at the national and international level as well.
Can I ask a question about why you think this was so successful?
As you mentioned, the level at which consent forms are written has been a problem for 40 years.
Presumably, we could have rewritten them by hand and come up
(22:56):
Was it the fact that this wasthe product of an AI that made
people excited about this?
Whereas previously, this would havebeen a grunt work kind of task or like,
what was the combination of ingredientsthat actually made this change happen?
So, this is a thing that's, I think, unique to the culture of medicine inside of a hospital.
Maybe not unique, but it's very formalized, in that you get
(23:17):
consults; you consult other services for pretty much every issue.
So as a neurosurgical service, we're consulting with our colleagues in endocrinology, psychiatry, neurology, medicine, geriatrics, and other surgical services to take care of our patients.
And so,
I think part of what kept this problem unresolved for many years, and there's
(23:39):
still plenty of work to do to fix it, don't get me wrong, is that as clinicians, we're almost brought up in a system where we're taught to stay in our area of expertise and specialty.
So, I'm not a lawyer, I'm not a risk analyst, I'm not a patient advocate, I'm not a finance person.
And so, do I really have the authority to make these
(24:01):
changes to a document myself?
Do I have the wherewithal to get all these subject matter experts together in the same room?
But then when you have something like ChatGPT create this for you, it becomes like a template that everyone can then have a conversation around.
Rather than saying, oh, it's the doctor who brought this up, or, oh, it's the lawyer who brought this up, it takes that part out of the equation.
(24:24):
Interesting.
Yeah, and something that I think in the past would have taken a much longer period of time to do.
You know, you put in a prompt and it gives you an answer, and yeah, you may need to change the prompt.
Luckily, we got it on the first shot, but at the end of the day, compared to having a team of people initially trying to draft this, it decreases the barrier to entry, and I think that was really important.
(24:47):
And then it was really easy to have the discussion once we had at least some sort of draft in front of us.
One other question before we move on.
I'm curious about your conversation with the lawyers.
So, it seemed like, from a legal perspective, it was roughly isomorphic to the original document.
Was there a conversation about, if there was something overlooked and harm was done,
(25:09):
who was liable for that?
I'm sure the hospital was, but that must be something that the lawyers went nuts over: that if we do this new consent form and it comes from ChatGPT, we're still liable.
So, what were those conversations like?
So that's actually a great question and something that we learned throughout the process.
Because actually, when the malpractice lawyers came to the table, they said, when you have these really complex consent forms and, you know, typically, if something
(25:34):
goes to trial, right, you have a jury, and they read out these consent forms.
A lot of the people in the jury say, I don't know what that means. Even if I sign that, what does that actually mean for me?
And so actually simplifying it is more important, because it allows them to be able to say, this is actually understandable and something that is easily understood by patients.
(25:56):
Yeah, I think people saw it as a protective thing rather than as a harm, because, quite frankly, these are so verbose in how they're written that many people unfortunately end up not reading the consent forms, right?
But I think it serves two purposes very well.
One, when a doctor sees that consent form written in plain English right
(26:17):
before they speak with the patient, I think to a certain degree that primes them to speak in a similar manner and in those terms.
But then, yeah, secondarily, when a layperson on a jury is reading the consent form, I think it's a better defense if you just plainly state what the risks and benefits of a procedure are, and then they can make a determination of whether you explained that information adequately.
(26:37):
That's interesting.
I hadn't considered that there might be a legal benefit to clear language.
I'd always thought protection by obfuscation was, um, how most lawyers like to operate when judged by a jury.
Well, uh, one last question on this one, and then we want to transition to one of your other projects.
Has the human- or humans-in-the-loop side of this protocol, if you will, for this
(27:00):
initial consent form, has this changed at all since you first conducted this study?
Like, are there more people involved who were not involved before, or are there fewer people involved?
You know, if you have, like, the chemotherapy consent form, or, I can imagine, even the handouts that go to the patients, right, to inform them about their condition.
Maybe those are already a little bit more accessible than
(27:20):
some of these consent forms.
But for these other parts of the hospital where there are also informed consent forms that you'd like to move into truly informed consent, has your sort of human side of the protocol adjusted or changed at all since you did this about a year ago?
I think for each form that you approach, obviously you want to make sure you have the key stakeholders there.
(27:41):
But I think the core team of people has stayed the same, because they have the same questions and the same issues that come up time and time again.
But one of the really important things, for example, that you brought up is making sure that there's an expert particularly in that area.
So, for example, with chemotherapy, you would want to make sure there's, like, a hematologist-oncologist who's actually looking over it.
(28:02):
But for the most part, the team that we built has actually stayed the same.
Yeah.
And if anything, I think with the social proof of that initial form being simplified, in a certain sense, yes, more people are involved, because specialists in the certain area that needs to be simplified are involved.
But the initial consent form took
(28:24):
dozens of people to cosign, essentially, in terms of being okay with it.
Once everyone saw that this change was being implemented, I think it emboldened people, and I think the level of scrutiny applied to these additional changes, though it's still there, is not to the same degree.
And so, I think the social proof of actually implementing it, writing about it, publishing it, serves as a key vehicle to having this change happen in a much
(28:48):
easier fashion later on down the road.
Got it.
Okay, great.
So, we just want to spend maybe a couple of minutes talking about another one of the projects that you both are working on.
And this is the voice recovery project with OpenAI.
And so, I saw, I think it was a blog post from OpenAI that talked about this a little bit, and some tweets.
(29:09):
Rohaid, I think we've spoken about this just a little bit, and so, as I understand it, you're now experimenting and piloting some initial studies with patients who have lost their voice.
And so there was, on the blog post from OpenAI, a young patient, I think with a brain tumor, who lost her ability to speak.
And you used a small sample of her audio that was actually previously recorded
From a school project or classproject, along with OpenAI's voice
engine to essentially create a customvoice that could be used alongside
OpenAI's text to speech model.
This is a 15-second excerpt from avideo from prior to the surgery where
the patient was making a pasta salad.
(29:54):
When you have all of your ingredientstogether, you are going to put the chopped
broccoli and chopped banana peppers insidethe bowl, and now this is very adverse.
You can use anything that you would like.
If you want to use cucumbers, you canchop up cucumbers and put this in here.
This is her current voice after the surgery.
(30:16):
Hi everyone, this is what my voice sounds like using OpenAI's new text-to-speech model called Voice Engine.
I was able to use just 15 seconds of a video I made for a class project
(30:38):
to be a reference audio source for the voice you hear right now.
What do you think?
This is what the text-to-speech model was able to produce.
Hi everyone, this is what my voice sounds like using OpenAI's new text-to-speech model called Voice Engine.
I was able to use just 15 seconds of a video I made for a class project
(31:02):
to be the reference audio source for the voice you hear right now.
What do you think?
This is the text to speech modelcreating an output that allows
her to order at a drive thru.
Can I please have a number one withlarge fries and with a strawberry shake?
I thought that demonstration was prettypowerful, honestly pretty amazing.
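As described above, the custom voice was built from just a 15-second reference clip taken from a longer recording. VoiceEngine is not publicly available, so the snippet below is only a minimal, hypothetical sketch of the clip-extraction step; the function name and signature are illustrative assumptions, not OpenAI's API.

```python
def extract_reference_clip(
    samples: list[float], sample_rate: int, seconds: float = 15.0
) -> list[float]:
    """Return the first `seconds` of audio as a reference clip.

    If the recording is shorter than the requested length,
    the whole clip is returned. (Illustrative sketch only;
    not part of any OpenAI API.)
    """
    n = int(sample_rate * seconds)
    return samples[:n]
```

In practice, the resulting clip would then be uploaded as the reference audio for the custom voice, and the text-to-speech model would handle everything else.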
(31:23):
I was wondering, maybe you could tell us again, as with the other paper, the background for this, how you see this as physicians taking care of patients, where your interest in this area got started, and maybe also update us on the status of that project and how you see this moving forward.
Yeah, I think just on a practical level, how this started was we had done this
(31:43):
initial work with consent forms and ChatGPT, and that caught the attention of OpenAI. As you mentioned, Greg Brockman tweeted about it, and it just sort of spurred into some natural, organic conversations between us and OpenAI.
At one point, we came to them and said, hey, we have some ideas for projects that could be accomplished, maybe not with existing models, but we
(32:07):
can imagine here are some models that you could very likely be developing that this would be well suited for.
And one of the projects that we'd come to them with was this notion of recovering a patient's voice.
I don't know a single doctor who doesn't know a patient in their practice who has some voice issues.
There's almost 20 million patients in the United States who have voice
(32:28):
issues, according to the NIH, and when we initially spoke with them, they had a smile on their face and they said, we think we may have something for that.
So, at this time, you didn't know about VoiceEngine, right?
Because I think VoiceEngine wasn't public; it's still not publicly available.
That's correct.
I think they're being very cautious; we're in an election year, thinking about
(32:50):
how to deploy this technology to generate synthetic voices at scale.
But okay, so you had these early conversations, and so you were actually just exploring potential different areas. And there happened to be a technology that they were building that aligned pretty perfectly with one of the major areas that you highlighted.
That's right.
We got an email from James Donovan from the strategic partnership team
(33:11):
at OpenAI, who really connected the dots in terms of our idea and this new emerging technology in the company, and said to us simply: Do you know any patients who could benefit from this?
And that really kickstarted Fatima and me talking about this project and getting it off the ground.
(33:32):
Yeah, and I remember it being kind of a whirlwind week, because we got access to the technology, and so I played around with it over the weekend.
I took a look at how you could upload a sample and how it could reproduce my own voice.
And then we talked about what kind of patient and what background of a patient would make sense for trialing this technology.
(33:55):
And that's when we came upon Lexi.
Lexi, yeah.
So
she's publicly identified herself and wants to share this, her experience of using this, with the world community, because I think she really gets and understands, you know, how her leadership in this regard could help normalize the technology for others and really advance it for social good.
(34:16):
So, this was a patient, very young, only 21, who had a vascular brain tumor compressing on the back of her brainstem and had to undergo a 10-hour surgery to have it all removed.
Due to the nature of the tumor, it left her with speech deficits due to damage to lower cranial nerves, um, and she also had some issues for a period
(34:38):
of time with swallowing as well.
And so, one thing that was very striking about this patient was, when she was in the pediatric ICU and she was intubated, she was actually texting at the time.
I just vividly remember that on rounds, seeing her, like nothing would stop her from communicating with those around her.
And so, we talked about a few patients, I think, um, but Fatima, you were
(35:02):
like, this is definitely the one we should approach and see if she'd be interested in trying this technology.
Yeah, and so we, um, asked her for a sample of audio, and she happened to have a school project from prior to the surgery where she was making a pasta salad.
So, we got that video, and then we took 15 seconds of audio from that, and I was able to
(35:24):
upload it into the model. And I remember I was doing this in between patients, and we got on a call together, and I said, you've got to hear this output.
I mean, this sounds amazing. And we knew that there was something here, and it was really exciting, because then we got to send it over to Lexi, and she was so excited about it as well. But at that point, you know, it was just on the computer.
(35:52):
And so, we really wanted to make sure that there was a way to bring this technology into the hands of patients.
One of the things that Lexi shared with us was she wanted to use it in a drive-thru, like a fast-food setting.
And you really can't take a MacBook with you through a drive-thru, right?
And so, we approached OpenAI and said, hey, I think to make it more practical for our patients and patients in the future, you should turn this into a mobile app.
(36:14):
And thankfully, they got together a team and were able to make a bespoke app for her that just has her voice, and she's able to type into it and speak using that voice.
I think that with this technology, there's a broader concern about its potential for abuse, like you mentioned, this being an election year.
(36:34):
And I think, somehow through this process, we have found a way of potentially responsibly deploying this technology.
In a sense, almost having doctors prescribing these technologies to patients, and confining it to bespoke models, bespoke versions of the technology, such that they could utilize it in a secure fashion.
(36:55):
And so, for example, for her bespoke app, when she types in and it produces speech, it's only tied to her voice signature.
You can't put another voice signature on there.
And so, it was really exciting, because I think a lot of the work that we've been doing in AI is thinking about how to responsibly deploy
(37:16):
these technologies and what guardrails need to be in place.
And it's been really exciting to be able to work with patients and with everyone else to figure out exactly how we can, while understanding the risks, not make that be the reason why patients can't benefit.
That's fantastic.
And I think that's amazing context, too, on where this came from, and
(37:40):
even how it's evolved, and how the team has built a bespoke model that she can actually use.
Just combining the two threads here, right?
Informed consent, ChatGPT, this new technology.
Could you tell us about what type of approval, either at the hospital or human review process, you had to go through to even trial this, get this started, get this off the ground?
(38:01):
Because I think you're really solving a lot of these incredibly important problems around how to do this responsibly, how to do this safely.
And it's problems that I think otherwise turn a lot of people who want to do things like this away, because they don't know how to solve it.
They don't know how to even approach it or think about it.
So, like with ChatGPT, you have the humans in the loop, the multiple levels
(38:23):
of approval, consensus, iteration.
What is the sort of equivalent here?
How do you think about this, right?
Like, where's the data going, privacy, security, whether this is okay by the hospital?
How did you navigate that?
And just as a reminder, you both are residents, right?
So, you're navigating this while also being full-time residents and working within the teams at the hospital, all the dynamics there.
(38:45):
So maybe you could just tell us about that side of this.
Yeah, certainly, uh, all those concerns cross our mind.
It's part of our ongoing study of this to ensure that there are no off-target effects of any of this technology, right?
That patients aren't having their recovery from a speech deficit be inhibited by the usage of this technology.
Ensuring that the data is private and secure.
(39:06):
I think, first and foremost, the person who has to agree to use this technology is the patient.
And in this case, very enthusiastic, very willing.
And so, the question becomes: you have a patient who is not able to express themselves as they would like, which is inhibiting their ability to interact with people at work, in their social life, and also in their family life.
(39:28):
And the question is, if you have a technology that could help ameliorate that problem, how can we get this technology to that patient in the fastest way possible and make sure that they could benefit from something that appears to be a social good?
And so, we had the approval, I think, of the highest levels of the hospital in doing this and moving forward with this, a multidisciplinary review.
(39:51):
And I think the advice I would give to anyone who's trying to do this moving forward is: think about the reasons you're doing this, right?
There's no money to be had with simplifying a consent form.
I mean, maybe there is, and that would be nice, but what we're ultimately trying to do is really enable patients to be as autonomous as possible, to have
(40:12):
the most fulfilling life as possible.
And I think as long as you use that as your guiding principle, we have found in our experience that people are incredibly responsive to that.
Okay, so I think we're gonna move on to the next segment of the episode.
I think, as listeners, you both know we like to play lightning rounds with our guests.
(40:34):
We thought we would do a unique twist on this, given the special partnership that you have.
So, we're gonna play a variant of the newlywed game.
And so, the way that this works is I'm gonna ask one of you a question
(40:56):
and you have to answer in the way that your partner would answer.
Um, and so, we'll see how complicated this gets; I'll also ask the partner who you're answering about to pre-register their answer by texting me.
So, if you're responding on the mic, uh, your partner will be texting me what their actual answer is, and then we'll compare the two.
(41:16):
How does that sound?
Sounds good.
Uh oh.
Dangerous territory.
Yeah, you didn't know that you were gonna have a secret, uh, relationship test on AI Grand Rounds today, did you?
Yeah, yeah.
I love it.
Okay, so the first one is for Fatima.
Okay.
Uh, if Rohaid wasn't in medicine, what job do you think he would be doing?
(41:38):
I'll ask you to think about it; he'll be texting me, and when he has texted me, I'll give you the thumbs up and you can answer.
That's a good question.
I think, if Rohaid – You can answer.
Oh, can I go ahead and answer?
Okay, ooh.
So, I think if Rohaid wasn't in medicine, what would you be doing?
(42:01):
I feel like I can only see him as a neurosurgeon, because that's all he's ever wanted to be.
But if he wasn't in medicine, I think he would probably do something in business.
Maybe not far off.
He said policy, but he also expressed doubt; he wasn't sure.
Perfect.
(42:21):
Alright, so our next question is for Rohaid.
What is a frivolous thing or hobby that Fatima enjoys doing just for fun?
So, Fatima, text Andy, and Rohaid, you have some time to think about it.
I'm watching where your eyes are now, Rohaid.
No cribbing.
(42:42):
Okay, we have an answer.
Frivolous hobby.
Uh, geez.
Frivolous hobby.
Um, that's dangerous, Andy.
There's nothing that's frivolous with, uh.
It could be just for fun.
Whether or not frivolity is the correct adjective.
Um, I'll give you a hint.
(43:04):
This is a very common thing to do on the Internet.
Okay, it just texted you the
So, uh, you can say it on the mic.
Uh, she's already texted me.
Watching like Instagram reels.
We'll give you partial credit.
She said watching cat videos.
So, we'll go.
Um, Fatima, what is Rohaid's all-time favorite book or movie?
(43:28):
Actually, let's just narrow it.
Let's say favorite book, so that you don't have to pick the category.
Oh, I know this one.
Um, When Breath Becomes Air.
Yeah.
Oh, I have to text, I have to text him.
Oh, okay.
Um, was that correct?
Yeah.
Yeah.
I really enjoyed it.
That's a famous neurosurgery book.
(43:50):
That's an excellent guess.
Yeah.
That's a cop out.
Alright.
So this question, all right, now back to me, back to me on the hot seat.
Yes.
This question is for Rohaid.
What would Fatima say?
Okay, so Andy scripted these questions, and he gave me all the ones that are funny and a little bit awkward.
So, Rohaid, what would Fatima say is the most annoying thing you do?
(44:17):
Oh, I'm just going to text you what I think she's going to text you.
Well, so, you are answering this.
So, Fatima will text me what the actual, real most annoying thing that you do is.
And once she's texted me, you can say.
Oh, what I would say is the most annoying thing is.
Okay.
So, she, let me make sure that I'm not crossed up here.
(44:41):
Wait, the thing that, the most annoying thing I do, right?
What is the most annoying thing?
Okay.
So, I, uh, I think I texted you.
Yeah.
And what I think is the most, oh, I mean, he's lovely, right?
Like what could be annoying?
Um, no, so I have to say, you texted him what I think.
(45:01):
Well, so, I apologize for the confusing rule set here; let me clarify.
Uh, Rohaid, this is your question, so you will answer on the mic.
Fatima will text me the answer, and we will compare the two.
Oh, okay.
This is a new format here on AIGR, so.
Love it.
Okay, so.
So, Fatima is texting Andy: what is the most annoying thing that Rohaid does?
(45:23):
And Rohaid will guess.
After Andy gives the green light.
Yes.
This is going to be great audio, guys.
Okay, you may answer.
I think she says I go on too many tangents sometimes.
That is essentially correct.
She says, not focusing on the thing that we're talking about.
(45:50):
Alright.
So, this one is for Fatima.
So, she will answer, and Rohaid will text me what he thinks.
Does Rohaid think that AI will eventually replace doctors?
That's a good question.
I don't know.
I don't think so.
Um, I think we've always talked about how exciting it is for AI to help,
(46:16):
but that doctors really fill a really important role, and that it's so fulfilling, and it would be a shame.
We love, we love working with patients.
So, I want to say no.
Uh, that is excellent.
His answer was an unequivocal no.
Okay.
Perfect.
Well done.
Alright, so this next one is a question for Rohaid.
(46:37):
So, Rohaid, you will answer, and Fatima, you will text Andy.
Um, what do you think Fatima's biggest concern is, Rohaid, with respect to AI and medicine?
You know, I think she's probably most concerned about biases.
Um, she's training to be a Mohs micrographic surgeon.
We recently did a paper looking at bias in text-to-image generators and found
(47:00):
that almost all frontier text-to-image models don't represent females or minorities as being members of the surgical workforce.
And I think that she's concerned about how it affects patients and people in the profession in general.
In case you can't tell from the adoring look that Fatima just gave you, you got that spot on.
(47:22):
Excellent.
I love this format.
Yeah.
I love it.
It's really exciting.
So, so you can take a deep breath.
That was the last newlywed game question.
You both did excellent.
You're obviously, uh, very familiar with your significant other and, uh, are totally in sync.
Wait.
So, who won?
Hey, now.
Whoa, whoa, whoa, whoa.
You, you almost had it.
So close.
So close.
Well, no, he knows who won.
(47:43):
Who won?
Yeah, you did.
Yeah.
Yeah.
Yeah.
There you go.
Um, I think that's a natural transition to some of the big-picture stuff that we wanted to talk about.
So, um, I'm going to break the fourth wall here a little bit and say that I also work with my spouse a lot.
Kristen Beam is a neonatologist here at Harvard.
(48:05):
She and I have also written a lot of papers together, so I see a lot of myself in you both.
And because I see that, I also know that having a very porous boundary between the personal and professional can be challenging at times.
And so, I'm wondering, um, how you navigate this kind of duality, both in your personal and professional life, where, I mean, we have argued about
(48:26):
authorship before, uh, that is, something about who's first, who's last, who's listed last, things like that.
How do you keep that part of your relationship a feature and not a bug?
I think one thing that, you know, as we've grown to know each other, what we've come to realize is that there are certain gut instincts that she has, that she ends up always being right
(48:46):
on, and I think it's fair to say that you think I have certain gut instincts, too.
And so, in that sense, we almost compartmentalize the work.
So, I think that's one thing that we have learned as sort of a mechanism to work as collaboratively as possible.
But I think rather than being a challenge, it helps that we
(49:07):
come at this from different perspectives, you know; she's a dermatologist, I'm a neurosurgeon.
We're also just very busy.
We're both chief residents.
We have a baby on the way.
Congrats. Thank you. Thank you.
So that's all to say that we don't have a ton of time to do a lot of different projects.
And so, the litmus test that we almost always have with each other is: if an idea is compelling enough to be relevant for patients she's taking care of in dermatology
(49:32):
and in neurosurgery, then that idea is probably compelling enough just to pursue overall, because it means that it touches on some core aspect of medicine.
And so, with that sort of guiding philosophy, I think it's led us to kind of approach, you know, big problems.
Yeah.
And it's really been lovely to be able to work together, because we've
(49:54):
actually really been able to verbalize what each other's strengths are.
And so, like Rohaid said, when it comes to particular aspects of when we execute projects, parts of it will just automatically get delegated to me or him, without us really even realizing it, because we know that this person can really do this part well and that person can do that part really well.
(50:17):
So, it's actually been really fun.
Yeah, so, for, I mean, the VoiceEngine project, to take one example: the patient had a brain tumor, so, you know, no shock in terms of who probably saw the patient first, right?
But then, she's great at coding.
And so the initial application that was on the Mac terminal, she was doing that part of it.
And then she also had the judgment of who to pick first.
(50:38):
Because we had a number of possible patients.
And so, I think that when we compartmentalize like that and play to each other's strengths, that's how we work out what would otherwise potentially be contentious.
And there's always a moment in a project, though, where one coauthor needs something from the other coauthor. I'm usually the deadbeat, I'll admit, and I'll get these texts. If I get a text from her, I know it hasn't
(51:02):
escalated; if I get an email saying, could you please read and revise this paper, then I know that I'm really behind.
So, there are still some things like that, even if there's a natural division of labor, uh, that I think you have to navigate.
Oh, 100%.
We were just working on some revisions over the weekend, and it was very real.
We almost have a rule that we can't be writing a manuscript in the same
(51:26):
room at the same time, because it just gets too contentious.
So, we actually compartmentalize.
I'll go in a different room, write up some stuff, then she'll review it.
And that's how we keep things happy.
Absolutely.
So, like, you don't do the pair-programming version of writing a paper where you're both sitting down collaboratively; it's kind of like a revise-and-edit, completely separate format, it sounds like.
(51:48):
Yeah.
It works for us.
It works for us, yeah.
Because we also have different schedules, too, you know, um, in terms of when we can work on things, so.
Yeah, I guess, if I was trying to come up with two areas of medicine that have diametrically opposed schedules, neurosurgery and dermatology might be the two that I would pick.
You're absolutely correct.
(52:09):
Is it going to become more aligned during your fellowship, Rohaid, next year, or not necessarily?
Well, you know, we have a baby on the way, and so there's no telling what's going to be aligned.
I'm sure a lot of things will be misaligned, but thankfully, like, the further we've gone along, the more we've grown closer to each other, and I think it's been really satisfying, honestly, just having a team member.
(52:30):
And to be clear, it's not just the two of us.
It's a lot of people here.
As I mentioned, like James Donovan from OpenAI.
We have a close collaborator here, uh, Haiyao, one of my co-residents.
Cheryl, one of her friends.
And James Xu, whom we've collaborated with, from Stanford.
And of course, our respective departments have been very supportive of this work.
(52:51):
Yeah, like Dr. Gokaslan, who is the chair of neurosurgery, and Dr. Qureshi, our chair of dermatology.
Yeah, but so, it's not just that we work well together; I think we're lucky to be part of a broader team, uh, of supporters.
Awesome.
Awesome.
So, um, I think you've touched on this next question, uh, already, but I think
(53:11):
it's worth just, uh, giving us some kind of big-picture, concluding remarks here.
So, there's a lot of folks who listen to this podcast who are at various stages of their training journey, residents, fellows, med students, grad students, and, you know, you both as residents are already having an impact on medicine and on medical AI.
And the question is, what advice would you have?
(53:34):
Parting wisdom.
What advice would you have for other early-career doctors who are interested in AI?
Yeah, that's a great question.
I think the best advice I can give is to seek out people who think very much like you and people who think very much the opposite, and
(53:55):
that will really help spark really interesting questions and conversations.
And I know, particularly at the residency level, what's been really important is having fantastic mentors behind us, because they have worlds of experience more than we do, and so, though we may come up with an innovative idea, they
(54:16):
can be incredibly helpful in actually being able to execute some of those.
And so that has been really important.
Surround yourself with really smart people, some who think like you, some who think not like you, and that's just going to be a recipe for something amazing.
Yeah, I think so.
We've been fortunate to have fantastic collaborators.
(54:36):
I think, don't make any assumptions about things.
If something feels off to you, trust that instinct.
With the consent project, once we started seeing consents being written at too high of a reading level, it's just, like, hard to unsee it, right?
Every time you update your terms and conditions, every time she goes to her perinatal visits for, uh, pregnancy, I mean, you just
(54:57):
see this often, and sometimes you can just get blinded by the normalcy of it.
And the key is to look at these problems and look at them in a new light, because truth be told, we now have new technologies that allow us to look at them in a new light.
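The reading-level observation behind the consent project can be made concrete with a standard readability metric. Below is a rough sketch of the Flesch-Kincaid grade-level formula with a heuristic syllable counter; this is an illustration of the general idea, not the validated tooling used in their study.

```python
def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels.
    vowels = "aeiouy"
    word = word.lower().strip(".,!?;:\"'()")
    groups, prev = 0, False
    for ch in word:
        is_vowel = ch in vowels
        if is_vowel and not prev:
            groups += 1
        prev = is_vowel
    # A trailing silent 'e' usually doesn't add a syllable.
    if word.endswith("e") and groups > 1:
        groups -= 1
    return max(groups, 1)


def flesch_kincaid_grade(text: str) -> float:
    """Approximate U.S. school grade level needed to read `text`.

    FK grade = 0.39 * (words/sentences) + 11.8 * (syllables/word) - 15.59
    """
    sentences = max(text.count(".") + text.count("!") + text.count("?"), 1)
    words = text.split()
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59
```

A short plain sentence scores at a very low grade, while dense consent-form language scores far above the sixth-to-eighth-grade level usually recommended for patient materials, which is exactly the kind of gap that becomes hard to unsee.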
And then, of course, do what you're passionate about, and, you know, just don't do it to check a box.
I think what our story hopefully will show is that two people who didn't have
(55:21):
a great deal of subject-matter expertise in AI just used it naturally and kind of just fell into a groove; you know, hopefully that can inspire others to, you know, proceed into this field.
Yeah, and I think you're following, you know, there's some folks who really have been kind of leading the way and teaching us about how we might work with AI in the future.
(55:43):
And one of the folks is a professor at Wharton at Penn, Ethan Mollick, and he tweets a lot, and he's written a book about this.
And I think one of the pieces of advice that he gives, which I think you're totally a great example of, is that you're seeing how AI can get integrated into your existing workflow, and you're using your expertise, and you're using your hospital and the existing processes that are in place there and
(56:08):
your real domain subject-matter expertise to integrate AI safely and responsibly into that workflow.
And I think it's not accidental.
I think your success here is because you're able to evaluate and appraise; like, A, get things off the ground that other people can't, because they're not situated in the same position, but then, B, you're able to evaluate sort of where
(56:29):
these things can go awry and really put that critical human in the loop again, or those multiple humans in the loop, to get it integrated and to get it off the ground.
And I really do think that activation energy, that kind of being embedded within a system and using AI within that organization or within that context where you have so much expertise built up, is where we've seen a lot of success.
So great to hear.
(56:50):
I guess one follow-on question is, are there any materials that you guys yourselves look at to learn about AI?
As clinicians, as residents, what is helpful?
I mean, this is such a big space.
What do you like to read to just stay up to date about what is happening in AI and really understand kind of what's underneath some of these models that you're using?
(57:12):
Um, everyone should subscribe to the NEJM AI Grand Rounds podcast.
Um, thank you.
Excellent.
Great interview.
Now, um, and follow you guys on Twitter.
Uh, honestly, I would say X, or just following on, like, LinkedIn and X, I think is really helpful, because the algorithm just naturally curates to, uh, you know, what you're interested in,
(57:33):
and so, um, seeing that come up also is great for amplifying your work.
You know, you mentioned Ethan Mollick.
He was actually the first prominent person to really tweet about our work, and it gained a lot of traction.
Uh, we did some of the early work on medical hallucinations, the hallucination rate of these models, and that was widely disseminated and shared.
So, I think as much as possible, I mean, look, collectively, between
(57:55):
the two of us, we spend over 150 hours in the hospital each week, right?
And so we don't have a ton of free time.
So, I think that's good in a certain sense, in that we know really fundamentally what are the core issues that are the pain points for patients on a daily basis, and for providers.
So, it's good, it gives us, you know, an avenue to find problems to solve.
(58:16):
But, you know, as much as anything, I think, um, integrating, you know, your AI reading with your social media is not a bad way to go.
Yeah, absolutely.
And I think part of what's been so important, too, is just using AI on a daily basis.
Like, you're writing an email, doing something else, you know; you start to get an understanding of what the limits are, of what the model can do and what it can't, when you just kind of integrate it into your daily life.
(58:39):
And so, I think that's also been really important.
Awesome.
So, I think, um, I'd like to just ask one more question and close this out.
On the show, we talk to, like, tech luminaries, we talk to AI researchers, we talk to, like, very senior physicians, and we like to ask them, like, what do you think the next, like, five years of AI and medicine looks like?
Um, but it strikes me that they, uh, probably over- or underestimate what
(59:02):
the next five years will likely be.
But given that you are both early-career physicians, and you've done some leading work in AI, I'm very curious to hear your perspective on what the next five years looks like.
What are you excited about?
What are you fearful about?
And, yeah, we would just love to hear that.
I mean, I think from our point of view, the question really comes down
(59:23):
to who is going to be able to get access to these tools and what that will mean, kind of, on a society level.
And what I mean by that is, I think as you've published in the journal recently, there don't actually exist a lot of, for example, CPT codes for many of these AI tools, right?
And so, what you don't want happening is all these cool technical advances
(59:45):
taking place, but it's not adopted at large, because most insurances won't pay for it, for example.
And so, then you get a scenario where people are paying out of pocket.
And so, I think we just have to be mindful of that moving forward as a society, that we're ensuring that we're aligning incentives and interests, uh, such that this technology can be widely deployed to people at large.
(01:00:07):
I think both Fatima and I, and Fatima, you can speak to this as well, are optimistic about what this technology will do for us.
I mean, both in obvious use cases and in non-obvious use cases as well.
Part of what's going to be important for doctors to do is to thoughtfully and methodically implement these tools, study them, ensure that they're
(01:00:29):
evaluating all safety outcomes, and then share it academically and put it up to peer review.
And I think that's been the really important part, right?
Like, AI is here, and it's here to stay.
And it's either going to have the input of the people who really care about the patients, or it's not.
And so, what's been really exciting is that we get to be part of that conversation.
(01:00:51):
And honestly, what is it going to look like in 10 years?
I think it really depends on how we shape that future.
And I'm just really excited that we get to be part of that conversation, and that we get to help bring patients to the table.
Awesome, thanks.
That was a great answer.
Thanks.
Alright, well, that was fantastic.
Thank you both so much for joining us, and thank you for being on AI Grand Rounds.
(01:01:14):
Thank you for having us.
That was awesome.
Thanks so much.
Thank you all.
Thank you, thank you.