Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
We're exploring AI and healthcare today on CXO Talk
#874 with three experts shaping this future.
Doctor David Bray is distinguished chair of the
accelerator at the Stimson Center, Teresa Carlson is
president of the General Catalyst Institute, and Doctor
(00:23):
David Rich is chief clinical officer of Mount Sinai Health
System and president of the Mount Sinai Hospital in New
York. We're going to explore what's
real, what's hype, and how AI impacts decisions, data and
patient outcomes. So, thoughts on solutions and
(00:45):
how AI can support the overall healthcare system?
The answer is to really analyze from the perspective of the
providers what is a true return on investment.
I have many people in the startup world who are often
approaching and presenting me with the problem that they can
solve, but I don't actually have that problem.
(01:07):
And so they're trying to convince me I do.
And there are many problems I have that they don't really work on. And the other thing, of course, is that people in the early stages want to work with
the health system. And they say, well, we're with you for free. Well, free isn't free, because my friends in information technology are, you know, very,
(01:27):
you know, upset with me as a clinician. I come to them and I say, look at this bright shiny object, and
they try to explain to me the amount of effort that would go
into integrating it into the IT workflow, let alone the clinical
workflow. So I'll stop there for a moment.
I want to amplify what David was saying. It's bringing back a lot of flashbacks from a past life. I was with the Bioterrorism Preparedness and Response Program.
(01:49):
This was back in the early 2000s to 2005, and we were trying to plug into hospital systems, state systems and things like that to basically monitor. And as David so well put it, the irony of electronic health records is they've actually created more human labor to complete.
And I think, also to what David was saying, any good
solution in this space has to fit into the workflow that's
(02:12):
already being done by clinicians.
Now there's a bit of a catch-22, because technology also changes what's possible. And so you really have to involve clinicians. And I'll give a nod to what Teresa's talking about: technology is changing what's possible so quickly. That doesn't mean it's being adopted, but it's changing what's possible so quickly that policy makers almost have to
(02:34):
iteratively learn also from what's even possible in the
technology, both to remove barriers, but also to put in
place incentives. If we're going to move forward
and think about third and fourth order effects almost in real
time, which makes this a really hard space to find solutions,
it's doable, but it's a very hard space.
David is so right: don't bring a solution to an area where there's no
(02:54):
problem. And I think that should be the number one focus of the companies that are coming in. What problem are you actually solving?
And there's got to be an ROI because you can't add more cost.
Our job should be to make our clinicians' lives easier, to enable our physicians to serve more of these patients faster and more effectively, and to provide them the data that they need to
(03:17):
get their job done more rapidly. And for us, I would tell you it's really very much about ensuring that we're doing value-based care and outcomes-based care.
And I'll just share one example. We have a couple of companies that we were with yesterday that we've invested in that are
really thinking about patients that are really underserved
(03:38):
populations. I'm from Kentucky, hence the accent. I'm from a very rural part of the United States, and one of our founders is focused on rural-based care and ensuring that we're not only just training and
teaching, but we're actually using technology to force
multiply the capabilities that are there.
(04:00):
Because a lot of times you don't have the specialist, you don't have the access to education or advocacy for these individuals. And they have individual and unique needs in their healthcare, and they are usually the last to get it sometimes. And so she's thinking about a problem. And there's a lot of America that is in rural America, not cities or urban areas. And then the second part is for
(04:23):
a company that we have called Cityblock that's taking care to the most underserved. There's a Medicare-Medicaid overlap with what they do, and they literally do not get paid until the patient gets better. So it's a value- and outcomes-based model, and both of these companies, Homeward and Cityblock, are doing this model that's very unusual.
(04:45):
And when we talked to the legislators on Capitol Hill about that yesterday, they're like, what? I didn't even know this existed. So, you know, one of the physicians from Cityblock, Toyin, who started this company, she's a physician herself. And she said, look, I could see
a lot of patients during the day.
I could give them a prescription and hope that they leave my
(05:05):
office and go fill that prescription and take it. But I can't ensure they're doing that. But I get paid for that visit regardless.
And their model is much more about ensuring that there are
outcomes from what they're actually prescribing and doing.
And that's what I call a shared responsibility model between
that physician and the patient. So we're really looking for
(05:26):
creative ways to ensure we're getting healthier outcomes, not just charging the system more. David Rich, your thoughts on
this kind of relationship between innovators and
hospitals? I mean, you're president of a major hospital. So I'm assuming you're dealing
(05:47):
with these practical issues all the time.
One of the key things that we're working on, Michael, is a seamless pipeline for digesting the multiple groups that are coming to us with fantastic ideas. General Catalyst is, we know, one of the top groups that we pay attention to. We regard anything that comes through an organization like
(06:09):
General Catalyst as being highly thought about; it comes to us, how should we say, curated in a way that means we should pay close attention. And as we do this, we try to
break them down into the different buckets, you know, for
(06:29):
example: is it something that would improve patient experience, something that would improve workflow, something that's focused on a patient outcome? That way we can look at what our ROI is related to this particular idea. And ROI isn't always
strictly financial because in our world we have penalties for
(06:51):
failing to meet certain criteria.
And so sometimes penalty avoidance is an ROI, sometimes
better outcomes and shorter length of stay is an ROI because
we can backfill with other patients.
So it all comes down to whether the idea integrates into a sort
of a category of ideas that make sense for us and is something
(07:14):
where we can rationalize an ROI, and sometimes partner. And since we're an organization that likes to think of ourselves as entrepreneurial, we potentially work with startups and other groups and put in, in our case, what is often the sweat equity in helping develop and make a product
commercially viable. Check out cxotalk.com, Subscribe
(07:38):
to our newsletter, Join our community.
David Bray, I'm hearing that a core issue here is alignment between groups that have somewhat overlapping agendas and goals, but that are also distinct. Is that a correct way of looking
at it? And how do we bridge that gap?
Yes, I think this would be the definition of what we would call
(08:00):
a wicked problem where you have not only different folks aiming
for different goals that sometimes overlap, sometimes
don't, but the very goals and how you're going to pursue it
are changing at the same time too.
That's why, you know, everyone's like, there's a magic wand that will magically fix healthcare and solve it by the next hour. I think what we're all trying to say is that what you really can do is get the stakeholders together.
(08:22):
And I think because things are moving so quickly and, and
things are shifting, do three things, which is think about
first and foremost, how does this improve the practice?
And David Rich was very good at pointing out: think
about ROI. And ROI is not just in terms of
financial return, but giving back time to the clinicians,
healthier outcomes for the patient in terms of getting them
(08:42):
released earlier, and then also penalty avoidance.
And so practice first and foremost.
Then on top of that, think about the new program and having some curation, so it's not just every shiny idea that gets tried out, because you just can't absorb that. The organization would be stressed, and it actually might disrupt the organization more than help.
And then finally the policy layer.
And that's where Teresa was saying, because we have
(09:03):
policies. You know, our laws don't have expiration dates usually. Sometimes they do, but usually there's no expiration date. And so there are things still on the books from the 1940s, 1950s and 1960s that may have made sense then that don't necessarily fit now.
And I think what I would submit is one area in particular that I'm personally passionate about, coming from a background with the People-Centered Internet coalition and what we're
(09:24):
doing at the Stimson Center now: data, in my opinion, is a form of human voice. And I think, instead of data being something that's opaque to both the patient and the clinician, about where it came from, how it's informing things and everything like that, we've got to think about how it gives both more visibility and stakeholderism to the clinicians as well as the patient, because this is
(09:48):
going to be helping to influence outcomes. And if you're in a rural state where maybe things aren't even digitized yet, they're not even accessible yet to help inform a clinician with your care, that's an unfair thing that we have to figure out how to address. But at the same time, we don't
want patients and clinicians to lose the ability to choose when
and where that data is used. If we want to take advantage of
these most transformative technology companies that we
(10:10):
hope will help David in his practice and our patients.
The one thing I will share is that we also can't have a patchwork, a regulatory environment that doesn't enable this. And a good example, I think, is healthcare. If you think about it, there could be a federal AI model and then fifty state models.
(10:32):
And then if these companies work outside the US, they have another model that they have to adhere to. That is, if we're not careful, an existential threat to these really transformative technologies. And we want the physicians and the nurses and the clinicians to have the best technology for what they're trying to
(10:52):
achieve. But what we actually believe is that healthcare is a really good example of where you can allow the industry to take control of how they want that AI
to be enabled. And I'll just give one quick
example. The FDA has been doing their job
a long time. Let's let them actually work
toward what the right policy is for their environment for
(11:13):
healthcare. And they're actually kind of already doing it. We have companies working with the FDA. Aidoc was with us. They work directly with the physician, looking for capabilities to allow the physician to see if there are medical errors and to help them with that. And we know that medical errors are one of the largest costs. They're moving fast.
(11:34):
They need technology to support their efforts. If we can let those industries kind of operate, then we can take this AI capability industry by industry with the policies, and I think it will allow us to move a lot faster and let the experts
who know that industry work on the right policies.
There's an organization called the Coalition for Health
(11:55):
AI. Mount Sinai's involved, along with all of the major health systems.
And one of the aspects that we're all concerned about is that we can be regulated out of existence before we exist. And it isn't just at the
federal level and the state level.
We also have Joint Commission, other certifications, we have
(12:16):
local health boards as well. And so what we've often done is try to come together as a group of healthcare organizations and bring all of the three-letter federal groups together, at least as a start, with the concept that we might develop assurance labs, such that as the AI tools are
(12:39):
developed, we have a way of assuring their quality and assuring that they fit into the culture as well as all the different matrixed elements of working in complex health systems.
And I think, you know, one of the ways we get around it
(12:59):
right now is that a lot of the tools we've implemented, especially ones we've developed internally for decision support, we try to think of as bringing the right team to the right patient
at the right time. We don't actually make the
diagnosis, but we try to identify the patients who most
likely have that diagnosis and then bring the clinicians there.
We'd like to move beyond that at some point.
(13:20):
But that's going to take a little help.
We need to have a real partnership between government,
industry and healthcare providers in order to be
successful at that. We've had John Halamka as a
guest several times on CXO Talk, who's one of the founders of CHAI. It looks like an interesting
organization. We have some questions that are
coming in from LinkedIn and Twitter, so why don't we jump
(13:42):
there? And this is a question from
Arsalan Khan, and I'll toss it out to the three of you, whoever
wants to grab this. Has anyone mapped the entire set
of healthcare workflows? And what about having AI
recommend which workflows are archaic?
(14:03):
This is actually a needed space and there are different groups
talking about how do you translate what is business
process modeling in the healthcare domain into some way
that can then be analyzed rigorously across different
systems. There are different vendors that
have tools for business process modeling in healthcare space,
(14:24):
but they're each in some respects proprietary.
And so if we can get to that and actually have almost a universal
way of expressing business process modeling in healthcare
space, we can actually do another thing too, which is we
can go to the journals. And this is where I'll give a nod to my fellow colleague David.
Sometimes the journals actually have peer reviewed articles that
(14:45):
recommend different ways of achieving outcomes that might
actually be different. But because it's right now in
text as opposed to business process models that can do
comparison, you actually can't compare and say which of these
leads to better efficacy for the patient.
So this is actually an area where with AI, if we can
actually get to some way of not being locked into proprietary
(15:06):
vendor systems, but instead have a way of having interoperability to express business process modeling across both hospital systems and IT systems, but then also translate out of the text of the journals what the peer-reviewed gold standard of care is, then you can actually start to do some very interesting things and actually achieve that.
However, first we've got to figure out what's the place
(15:26):
that encourages people to think more broadly than their own specific system and instead think holistically across systems of systems.
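(A minimal sketch of what machine-comparable workflow models could look like under the approach David Bray describes: two hypothetical care workflows expressed as directed edge sets and diffed step by step. The workflow names and steps are invented for illustration; a real effort would need a shared, vendor-neutral vocabulary.)

```python
# Two hypothetical clinical workflows as (step -> next step) edge sets so they
# can be compared programmatically. Step names are illustrative only.

workflow_guideline = {          # e.g., a published guideline rendered as edges
    ("admit", "risk_screen"),
    ("risk_screen", "order_labs"),
    ("order_labs", "clinician_review"),
    ("clinician_review", "discharge_plan"),
}

workflow_local = {              # e.g., one hospital's current practice
    ("admit", "order_labs"),
    ("order_labs", "clinician_review"),
    ("clinician_review", "risk_screen"),
    ("risk_screen", "discharge_plan"),
}

def compare(a: set, b: set) -> dict:
    """Report which handoffs two workflows share and where they diverge."""
    return {
        "shared": sorted(a & b),
        "only_in_guideline": sorted(a - b),
        "only_in_local": sorted(b - a),
    }

if __name__ == "__main__":
    for key, edges in compare(workflow_guideline, workflow_local).items():
        print(key, edges)
```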
I'd like to address the word archaic. One of the jokes in medicine is that the way you learn is: if you've done one case, you say in my experience; if you've done two cases, you say in my series; and if you've done three cases, you
(15:48):
say case after case after case.
So one of the problems that we have in medicine is this culture where experience leads to best practices that are not always evidence based. And as we develop the evidence base, getting the physicians to change, getting the nurses to
(16:09):
change, getting the transporters to change, everyone who's involved in care in a healthcare setting requires some sort of
persuasion. The common term now is the nudge
unit. And the nudge unit is intended to identify and to gently, or maybe not so gently, correct
(16:30):
behaviors that are headed in the wrong direction.
Often those are things that are wasteful, either clinically or, you know, sometimes financially. A perfect example was when the University of Pennsylvania managed to switch prescribing behaviors to less expensive drugs and were very successful, and we see similar things at
(16:53):
Mount Sinai, where we've had some success.
So I think, going back to archaic, the most important thing I take into account is that we have to think about behavioral change, and sometimes that'll be based upon behavioral economics. And so behavioral economics is perhaps the answer that I would give to starting to look
(17:16):
at ways of overcoming archaic processes.
We have another question that has come in on a related topic
on the topic of interoperability.
And this is on Twitter from Chris Peterson, who says we're
30 plus years into healthcare interoperability standards, but
we're still not there in many respects.
(17:40):
And so how does this lack of standards affect the advancement
of AI in healthcare? We do hear a lot about this from our companies, because they are so creative. I mean these entrepreneurs, I can't stress it enough, they are trying to
(18:00):
solve a problem, not just look at a problem and jump over it or step in it. I mean, they're really trying to solve a problem. But I would tell you that they do say interoperability is a top challenge for sure.
But I think what we're also seeing, Michael, are creative
ways that they are trying to work around it and look for new
(18:20):
ideas, even starting from scratch on some things, which I
know nobody wants to do, but they're like, OK, we got to
solve it. Not just keep talking about the
problem, but look for creative solutions.
And I will tell you, with AI, because the Internet, these data sets, these large language models are getting so good, you're
(18:41):
almost now beginning to create new systems of record without even knowing they're being created, right? So you're seeing these ideas. And also now they're beginning to access systems that are there in a way that's creating new ideas to look at that data and look for solutions.
(19:03):
So, we talked about freeing the data a lot, and I think we need to free that data. Data needs to be free. You know, our physician David needs to be
able to take advantage of that in every aspect of his practice.
He should never not have access to the information that he needs
at his fingertips to treat a patient.
(19:24):
On the more research side, for Doctor David Bray, we want our researchers and the entrepreneurs creating new
technologies to have those capabilities that they can bring
back to that clinical work. So I would say whatever we do in
this country and around the world, we've got to make sure
that we have open and accessible data.
And just by the way, not just for the clinician, but for me,
(19:48):
the patient, I want 100% access. I want to make decisions about
my own data. I think you're seeing consumers
get a lot smarter, so we need systems that are open and
interoperable in order for the patient themselves to make the
right decision about their care. David Bray, let me ask you a
question then, as Teresa was just describing.
(20:11):
Well, as David Rich and Teresa were just saying,
interoperability is very important and Teresa was just
describing the need for the free flowing of data in
order to support innovation. But on the other side, the
economic incentives for healthcare information systems
(20:34):
in many cases is to keep that data because he or she who holds
the data holds the money. So what about that?
I think we need to unlearn the meme from about a decade ago
that somehow data was the new oil.
Oil: use it up, it's gone. Data: use it, it's still there.
And in fact, if you involve people associated with the data,
(20:54):
whether it be a patient or a clinician, they'll find things
wrong in the data, they'll fix it, they'll make it better,
they'll create more data, they'll create better quality
data. So in some respects we need a
different metaphor than data's new oil.
I also think we need to simplify the consent mechanisms, but also the transparency, so a patient can see where their data is going, because right now you fill out some forms, but they're
(21:17):
paper based forms usually and you have no idea where it's
going to go. And then similarly, a clinician,
as Teresa says, needs to be able to get access to that data to do what they need to do. And technically there are policies that say you're not supposed to do information blocking. This is one of the sort of tensions
of the US system given that we have a free market, which is
great, but we also then are trying to think about how we
(21:38):
overcome individual self-interest to hoard. The last thing I would say on the interoperability challenge, Michael: I often use the phrase that standards are like toothbrushes. Everybody says they want to use one, they just don't want to use anybody else's.
And so you have to be careful because not all standards are
created equal. Some standards are really vendors trying to build a moat; sadly, those are standards that
(22:02):
are propped up by them. But other standards groups
really are about trying to level the playing field, whether it's
for established companies, startups and the like.
And so usually I look for, do they charge or not to use that
standard? I think the last thing I would say is also that technology is advancing just what's even possible in medicine. Twenty
(22:23):
years ago, there were about 12,000 CPT codes for claims. Now we're up to 72,000 CPT codes for claims. It's getting to where it's really difficult for a human to know which claim code to use, whether it's an experienced coder, let alone a patient trying to figure out when they actually are
looking at their bill. So this may actually be
something where we need to think about a new strategy, almost
(22:44):
like what Teresa was saying. What would it look like if you had your own personal chatbot to help you understand your bill, or your preauthorization, and to navigate what is a very complicated standards-based process? And how do you make it more accessible to
everybody, not just someone who's a technological expert.
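(A hedged sketch of the kind of personal billing assistant David describes. The complete() helper below is a hypothetical stand-in for whatever large-language-model API would actually be used, and the CPT values are illustrative only; nothing here reflects a real payer system.)

```python
# Hypothetical sketch of a patient-facing billing assistant. complete() is a
# placeholder for a real LLM client; the prompt and data are illustrative.

def complete(prompt: str) -> str:
    # Stand-in for an actual LLM call; returns a canned marker so the sketch runs.
    return "[LLM response would appear here]"

def explain_bill(line_items: list[dict], patient_question: str) -> str:
    """Turn raw claim line items plus a patient's question into plain language."""
    bill_text = "\n".join(
        f"- CPT {item['cpt']}: {item['description']} (${item['amount']:.2f})"
        for item in line_items
    )
    prompt = (
        "You are helping a patient understand a medical bill. "
        "Explain in plain language, flag anything that looks duplicated, "
        "and do not give medical advice.\n\n"
        f"Bill:\n{bill_text}\n\nPatient question: {patient_question}"
    )
    return complete(prompt)

if __name__ == "__main__":
    items = [{"cpt": "99213", "description": "Office visit", "amount": 180.0}]
    print(explain_bill(items, "Why was I charged for this visit?"))
```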
We have a question from LinkedIn from Ravi Karkara, who is the co-
(23:10):
founder of the AI for Food global initiative. David Rich, he says this: what AI skills will
be needed for medical and nursing schools to teach in
order to create this AI medical revolution?
We don't teach so many things in medical and nursing
(23:32):
school, other than perhaps the traditional things that emerge from anatomy, physiology, pathology.
And we need to teach people about systems based practice and
that it isn't just AI. The whole point is technology is
(23:53):
here. I would argue that Healthcare is
a technological industry that has failed to recognize it for
decades, and so I couldn't agree more with the individual who asked the question: as we reform the curricula, it is about teaching people how to best integrate with technology, with the proviso that you have to
(24:15):
teach also that AI and other technologies are not a threat; they're actually a way to augment the way you deliver care. I have a friend who's chair of
radiology here at Mount Sinai and he says that AI won't
replace radiologists, but radiologists that use AI will replace the radiologists that do not.
So I'll leave it at that. I just want to remind everybody
(24:38):
that you can ask your questions on Twitter; use the hashtag #CXOTalk. If you're watching on LinkedIn, pop your question into
the chat. And truly, when else will you
have the chance to ask such an esteemed group of people?
Pretty much whatever you want. So I hope you take advantage of
it. And this is from again from
(25:01):
Arsalan Khan who comes back and he says regarding healthcare
data, do we need more healthcare data or less?
And what can the government do at the federal policy level to
help this data and interoperability issue?
I've been working in the public sector world for over 28 years, and I'm used to a lot of data.
(25:24):
So I've seen it just continue. So I don't know that we have a limit on the data anymore. And artificial intelligence is really a key solution for us to be able to enable and understand that data, research on it, get immediacy from it. So I don't think data is the problem; I would personally be OK.
(25:44):
I think as much as we need is OK, because we're going to keep
building on it. So I don't know that there's a limit. When you're working with folks like the intelligence community and defense, and you see over the years the amount of data, in healthcare I think we're OK.
But to your point, what are the policies?
I would tell you we need to do a lot more to educate our policy
(26:05):
makers. I do think they want to
understand, but I don't know that we've done a good job
actually explaining to them that if the data is not free flowing and there's not accessibility to it, what's the harm that it does? Why does it inhibit our ability to make America healthy again? If that is the goal, if that is
(26:27):
this administration's goal, what does it do?
So I see it as our job at the institute, working with partners like this, to go in with examples.
And, and to your point, David's point earlier, there are
policies in place that are supposed to make sure that the
data is open and free flowing. But for some reason we hear from
(26:48):
a lot of our companies, a lot of big and small companies, it's just not working. And not just them; we hear from the physicians and the clinicians and the hospitals. It's just not working the way it's supposed to work. So we need to figure out why that is, and then we need to ensure that the policies that are already in place are being activated, and somebody is watching out for the
(27:11):
patient to make sure that that data is open and available to
them. The clinicians and the hospital
systems and the vendors and partners that actually have to
have it to make America great again.
And healthy, sorry, healthy. We're already great. I think America's already great. Let's make it healthy.
And Michael, if I can build on what Teresa said, 'cause I want
(27:33):
to give a nod: the General Catalyst Institute put out a report. And what was interesting is they were talking about what Teresa mentioned earlier with some of the examples, shifting from right now, where what you have is a service where you get paid a fee, but that doesn't necessarily mean that the patient followed through with the healthcare advice or actually got healthier.
Shifting from that to value based care and outcome based
(27:54):
care. And I would submit this is where
the data is so crucial because if you're going to do value
based care and outcome based care, you almost first have to understand the sort of long tail of that patient and understand, are they pre-diabetic? And therefore, if you get involved now, you can avoid some of the more costly things that happen when they actually become diabetic.
(28:15):
But that's one of those things again where, you know, it's what David Rich was saying about how do you help fit into the existing workflow for clinicians? Because if you give them a 40- or 50-year-old's patient record, they don't have time to wade through it all to make sure they
catch everything. And so how can you give them
(28:35):
confidence, if they're working with an AI-augmented tool that's not replacing them but augmenting them, that the AI has surfaced the relevant things to help inform them? And then, when you teach future generations of doctors and nurses, how do you make it
so it's not just about the individual clinician, but part
of a larger system to make sure that hand off for all the people
(28:57):
that are providing that value-based, outcome-based care are all thinking about that patient.
And so that's a major revolution to try and fit into the next 5 to 10 years, into a system that's already got strains. But you know, that's what also
makes this space very interesting.
And that's why we need to have these conversations now about
let's make sure we don't lose sight about empowering the
(29:18):
patient and then also making sure the clinicians have the
tools at their hand to actually help with that.
When we talk about maximizing fiscal responsibility in U.S. healthcare, the only other point that I want to make here is, if we want to cut the red tape out and do that, we don't want to make it more expensive to get to the data and be able to access it to
(29:41):
get those outcomes. Because again, let's remember,
AI and technology over time should drive the cost way down,
right? I know people talk about it driving cost, but it should drive the cost down, because you can do it faster and more efficiently and get to the outcomes. And then it's not just less expensive; you get to the results faster.
(30:03):
So that allows our clinicians to do the decision making much faster. So I just wanted to say that,
because we talk a lot about this: this administration is very
focused on cost cutting. So we can't make it more
expensive. And that's why technology should
be the enabler both to move faster, get the resourcing there
and again, don't make getting to the data more expensive.
(30:25):
Let me give a couple of practical examples, if I could, that I think illustrate the points we're just making. About four years ago, maybe longer, we started working on a predictive algorithm for delirium in hospitals, which is a big problem, and the structured data were not very helpful; we did not end up with a predictive
(30:47):
model that was, you know, all that impressive.
But as soon as we added natural language processing to the unstructured data in physician and nursing notes, the model became predictive. Almost scary predictive in terms of how accurate it was.
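(Mount Sinai's actual delirium model is not described in detail here, so the following is only a hedged sketch of the general pattern: fit a classifier on structured features alone, then concatenate features derived from free-text notes and compare. The column names, note snippets, and labels are invented toy data.)

```python
# Hedged sketch: structured features alone vs structured plus note-derived
# features for a delirium-style classifier. Toy data; not Mount Sinai's model.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def auc(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

# Invented structured features: age, medication count, prior-delirium flag.
structured = np.array([[81, 12, 1], [45, 3, 0], [76, 9, 1], [52, 4, 0]] * 50)
notes = ["patient confused overnight, pulling at lines",
         "alert and oriented, ambulating well",
         "intermittently agitated, family notes he is not himself",
         "no acute events, resting comfortably"] * 50
labels = np.array([1, 0, 1, 0] * 50)

text_features = TfidfVectorizer().fit_transform(notes)
combined = hstack([csr_matrix(structured.astype(float)), text_features])

print("structured only AUC:", auc(structured, labels))
print("structured + notes AUC:", auc(combined, labels))
```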
And so I think that, as someone who was working with SNOMED and other standards bodies years ago, trying to establish standards, and then
(31:09):
watching the moats, as David Bray described, being built by the different electronic health record providers, at first I thought, oh, this is just the end of standardization. We'll never have interoperability.
But now I'm starting to see that the tools, especially large language models that are able to create summarizations of massive data sets, will be very important.
(31:31):
I walked into the OR this morning and the patient had had
five previous cardiac operations.
I'm a cardiac anesthesiologist. Just imagine trying to summarize
that data. It's challenging, but honestly,
if a large language model had been available to me and had given me a five-paragraph summary of the most important
(31:52):
unstructured data in all of those records, it would have made my job much simpler. I would not have spent 45 minutes last night poring through electronic records.
I think those are practical examples of how we've lost the
battle in terms of having the data perfectly structured.
But with interoperability and using newer tools, I think we
can overcome those sins of the past.
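(A hedged sketch of one way the kind of pre-operative summary David describes could be staged: summarize each prior record separately, then combine the summaries. The complete() helper is again a hypothetical placeholder for an actual LLM client, and the prompts are illustrative, not a validated clinical workflow.)

```python
# Staged summarization of a long surgical history. complete() is a placeholder
# for a real LLM client; prompts are illustrative only.

def complete(prompt: str) -> str:
    # Stand-in for an actual LLM call so the sketch runs end to end.
    return "[LLM summary would appear here]"

def summarize_record(note: str) -> str:
    """Condense one operative/anesthesia record, keeping the clinically key facts."""
    return complete(
        "Summarize this operative record in three sentences, keeping dates, "
        "procedures, complications, and airway or vascular access issues:\n\n" + note
    )

def summarize_history(notes: list[str]) -> str:
    """Combine per-record summaries into a single brief for the anesthesiologist."""
    per_record = [summarize_record(n) for n in notes]
    return complete(
        "Combine these per-operation summaries into a five-paragraph brief, most "
        "recent first, and flag anything contradictory:\n\n" + "\n\n".join(per_record)
    )

if __name__ == "__main__":
    print(summarize_history(["op note 1 ...", "op note 2 ...", "op note 3 ..."]))
```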
(32:15):
I'll say, what I find fascinating is that we even have
to have a discussion where we remind ourselves that we have to
put the patient first. It's just, there's so many
different stakeholders in this system.
And I do think, you know, again, it's because we separate our public sector from our private sector, which is a strength. But it does mean you have to
(32:36):
think about, you know, how do you empower the clinicians,
which are plural, which are working across different IT
systems that are provided by different vendors.
And then also the fact that the patient has the ability to
choose in some cases where they go, and they may go across state
lines. And for some mysterious reason
in the Constitution, because nobody mentioned who got to
oversee healthcare, it defaults to being a states' right unless
(32:59):
the federal government uses some preemptive clauses.
And so I actually think in some respects I liked what David Rich
was saying because I also have those battle scars from the
standards battles about 20 years ago, where, sadly, there were people who were trying to create a standard identifier for everything in the known universe.
And I would submit that is never going to be doable.
(33:19):
But now, using unstructured data and new approaches, you can actually say, I don't have to have everything explicitly specified, but I can still help the clinicians with understanding, whether it's through a summary of the text or through using computer vision to go through 1,000 images and actually say, these are the ones you should pay attention to. I think if anything, it may be
(33:40):
we're in a world that's drowning in data but lacking in insights. And if you can give the insights back to the clinicians and the
patient, that's how we're going to get through this.
We have an interesting question from LinkedIn from Greg Walters
around this set of issues. Before I take that question, I
just want to mention to everybody who's watching.
(34:02):
You guys are an amazing audience.
You ask incredible questions andwe want you to join the CXO Talk
community. So just subscribe to our
newsletter, go to cxotalk.com sowe can tell you about our
upcoming shows because we have incredible shows coming up,
really awesome ones. So Greg Walters says this: he believes AI can be the glue in healthcare, not
(34:27):
only accessing old data, but creating and tracking new data, and therefore could be that connective tissue.
And he wants to know what's the level of investment dollars and
the commitment regarding this mission of AI that the healthcare industry is currently putting towards these
(34:50):
technologies, and what is the engagement level as well? We look at ourselves using
standard surveys, I believe Gartner did one for us.
For example, we look at how much we spend on cybersecurity versus the banking industry, and it's much less, surprisingly. And especially with the challenges in healthcare, we're
(35:12):
an industry with very low margin.
And in fact, with the downward pressures that we're going to
see related to, you know, a desire on the part of the
administration to reduce healthcare spending, that we
really wonder if we're going to be able to afford to stay up to
the level of spending that otherindustries have related to
(35:34):
information technology writ large.
And so the answer I think has to be that we always, when we look
at the ROI on a tool, have to see a way that it actually
reduces some aspect of the expense.
And so for example, finding tools that are better able to
(35:54):
engage patients as partners in their own care.
Truly personalizing medicine isn't just looking at genomic markers and biomarkers and environmental factors in deciding what's best for that patient. You also have to analyze something that we've never really looked at, which is the receptiveness of the individual
(36:16):
or the family to actually being a partner in that care. So as we move from,
you know, good old fashioned, you know, case after case after
case that I referred to previously towards population
health, towards personalized medicine, we have to really
start to bring it down not just to the level of the provider and
(36:38):
the patient, but the patient andtheir desire and interest to
interact with the provider. Making America healthy again is actually a partnership. And Michael, if you look at venture capital, venture capital is investing, I think, about $11 billion this year alone in AI for healthcare, and, you know, that could be small in a $4.5 trillion health
(37:02):
market, but it's getting there and we believe in it.
And I will tell you, our CEO Hemant Taneja has been investing in healthcare for so long. He's a believer; he's created companies. We believe in this model so much that, for AI transformation in healthcare, we have a whole
group just dedicated to this. And the other thing I'll share
(37:23):
is we are seeing on the healthcare side, the six CEOs and founders we just had up here yesterday, their business is
growing at a really rapid rate, all focused on AI.
And we're seeing the healthcare industry, the payers and providers, really lean in because they see the opportunity.
(37:45):
So I expect the AI efforts in healthcare to grow exponentially. And I'll just give you one
example that we've been talking to this administration about,
and we put in our white paper that we just put out a week ago,
in which we advocate for sandboxes where we can allow healthcare
institutions in the US, federal government and states to see the
(38:09):
value. So they try it out; they realize how, you know, they need to understand it. And we're a big believer in experimentation because, back to Doctor David's point, we don't want people just selling him stuff.
He should see the value in a technology before he adopts it.
(38:30):
So he should have the opportunity to try it out in the
sandbox. Because, to his credit, he's a smart guy and he works very hard every day. He might say, I'm not sure, but I'd like to try it. So we need ways to allow for our
clinicians, healthcare providers to try out tools and show the
value quickly and then they can acquire it and scale it.
(38:51):
And I think in today's world of cloud and AI, it's not a license
based world. You should be able to try things
out and then scale it. David Rich, how do you measure
the clinical value and clinical impact of tools such as Teresa was just describing, and the outcomes, the impact on
(39:13):
patients? Well, it really comes back to
the, you know, the, the many ways that ROI can be calculated.
A simple one for a very complex tertiary or quaternary care
hospital is can I assure a better outcome, a lower
mortality, a lower infection rate?
Can I empty a bed more quickly and fill it with another patient
(39:34):
that generates another DRG? That's at the simplest level. At the population health level, something that we have not really been successful at, at least much less so on the East Coast, a little bit better on the West Coast, is
value based care. So if I had a panel of patients
and they really had a narrow network and they really had to
(39:55):
use our health system for their care and I'd better manage let's
say their diabetes, their hypertension, etcetera.
You know, can I reduce their spend?
Can I truly bend the healthcare cost curve?
When you have. A wide open sort of rodeo here
in the United States where if I'm not happy with Mount Sinai
(40:17):
Health System, I can go to the health system a mile down, you
know, 2nd Ave. Or 3rd Ave. and go to a
different 1. You know, we don't really have
the ability. To to really model and measure
at that level. So it's, you know, really to
answer your question a bit, Teresa and Michael, it's, it's
such a mess because we don't really control the, you know,
(40:41):
the spend, and we don't have a good way of measuring the
outcomes in an environment wherepeople can get their care
wherever they choose.
choice. But if you go to Europe, then,
you know, they've taken a different approach with National
Health insurance and they've created, you know, sort of a
(41:01):
captive market. And they can, for, you know, it's not a word that we'd like to hear, but they ration care by deciding what's in the national basket, what drugs will be provided, what services will be provided, and they can apply standards that are not really acceptable in US culture.
(41:22):
So I think we have to also think about these cultural barriers to
bending the healthcare cost curve.
Doctor Fatih Mehmet Gül, who is CEO of The View Hospital in
Doha. He says he appreciates the
valuable medical examples shared for AI applications during this
discussion. However, we should not overlook
(41:44):
the significant potential of AI in non medical areas within
healthcare as we continue to face challenges around resource
allocation and affordability. AI can play a transformative
role in driving efficiency and improving access to care.
We know, for example, with this incoming administration, they
(42:05):
want a focus on eating healthier.
Also trying to change your lifestyle, you know, go outside,
enjoy the sun. Those are all things that you
don't need a medical doctor or a nurse to prescribe for you. One, you don't have to prescribe it, and two, it's hard for them to
follow up on it, but you could imagine if you did have an AI
app that was actually about helping inform the patient so
(42:25):
that they could actually do the behavioral changes that David Rich was talking about. So much of health, especially if you want to bend that cost curve he was talking about, there are things that traditionally in the United States we called public health. They were behavioral changes. But again, the strength of the United States is that we're highly decentralized. That's also the challenge of the United States: we are highly decentralized. You have freedom of choice.
(42:47):
The question is, how do you bring the nexus of both choice and agency to the individual to let them pick what they want to do, but also find a way
To bring together all those different parts that are not in
the immediate medical setting that can improve health
outcomes. And I think that's where there's a long tail, both before you see a doctor and a clinician, but
(43:07):
then also afterwards too. That is a huge opportunity for innovation in the space, recognizing again that different countries have different choices. The U.S.'s strength is that we're decentralized, but any solution has to take that into account.
This is from Philip C on LinkedIn, who describes himself
as a builder, investor, and advisor, and he says Amen to
(43:30):
sandboxes. Is there a realistic pathway to
bring together user data isolated in healthcare systems
and user data scattered across their consumer apps?
It seems like that's a natural next step for greater
contextualization of people's healthcare needs via generative AI and other AI techniques. So going back to what we were
(43:53):
talking about before: David Rich, do you want to jump into this one? How do we bring data together?
Well, I have a friend who taught me the expression that no one lies to their search engine. And, you know, when we stopped laughing about it, there's a certain truth to it,
(44:14):
because we really want to know certain things. And I think that looking at consumer information and marrying it with medical information certainly is going to provide
additional insights. And so it's something that I
would look forward to. The other thing that we really haven't addressed so much in this talk is the concept of privacy. What is privacy?
(44:36):
Did we give it all up once we started tracking ourselves with
our phones? Of course we gave up some
aspects of it. And there's a balance in society
between the respect for privacy and the need for things like
national security. So where are these balances going to occur? And as genomic information proliferates, Mount Sinai has 300,000 genomes in a 1,000,000-genome collection
(44:59):
project. There's no such thing as a de
identified genome. So we have to bring together
what can be de identified and perhaps work on that.
And I think we also have to create safe harbors as something that's in the national interest, where we can bring together data that have to be protected so that individuals'
(45:20):
genetic information is not released. But to truly personalize medicine, to truly bring it together, it's the information outside of the medical records and the information inside the medical records, and it has to be shared in a way that we are comfortable with as a society. I'll just say one thing.
Hippocratic AI. I'm going to answer our
(45:42):
gentleman from Doha and this new gentleman by saying I agree that there are so many health applications out there that help not just with the day-to-day clinical care that we're talking about, but in many other ways. And one of the solutions that I just love is one of our investments called Hippocratic AI.
And their mission is really to close the growing shortage of
(46:04):
healthcare workers by helping them scale.
And they don't do prescribing. But they do follow-up: there's an agentic AI that calls the patient and asks them questions about their health.
Have you checked your blood pressure?
Have you taken your medicine? Did you get your prescription
(46:25):
filled? Have you eaten healthy today?
Back on nutrition, where you could scale a nutritionist to a million people at a time, but then merge those two data sets. They're a great example of being
able to open up the World Wide Web of the Internet of
information to also some clinical details without going
(46:48):
into prescribing or doing the work of a doctor.
But it helps that doctor and nurse be better, have more time on their hands, and get that information back about what the patient is doing, so when I actually see them face to face or do a telemedicine call, what are the core things that I have to focus on now?
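(A hedged sketch of an automated follow-up check-in like the one just described: ask a fixed set of questions, record answers, and flag what the clinician should focus on at the next visit. Hippocratic AI's actual system is not described here, so the questions, structure, and flagging logic are illustrative assumptions only.)

```python
# Illustrative follow-up check-in: collect answers and surface items for the
# clinician. In a real system the questions would be asked by a voice/chat agent.
from dataclasses import dataclass, field

QUESTIONS = {
    "bp_checked": "Have you checked your blood pressure today?",
    "meds_taken": "Have you taken your medicine?",
    "rx_filled": "Did you get your prescription filled?",
    "ate_healthy": "Have you eaten healthy today?",
}

@dataclass
class CheckIn:
    patient_id: str
    answers: dict = field(default_factory=dict)

def run_check_in(patient_id: str, responses: dict) -> CheckIn:
    check_in = CheckIn(patient_id)
    for key in QUESTIONS:
        check_in.answers[key] = responses.get(key, False)  # default: not confirmed
    return check_in

def clinician_flags(check_in: CheckIn) -> list[str]:
    """Items the clinician should focus on at the next visit or telemedicine call."""
    return [QUESTIONS[k] for k, ok in check_in.answers.items() if not ok]

if __name__ == "__main__":
    visit = run_check_in("patient-123", {"bp_checked": True, "meds_taken": False})
    print(clinician_flags(visit))
```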
So we're seeing so many great applications now that are truly
(47:12):
helping our physicians and nurses scale their work effort.
But also, on the back end, it's more administrative work. It's helping educate and advocate, things that, honestly, I'm sure Doctor David would love to do but doesn't have time to do. He just doesn't always have time to do that. So this is where AI is really
going to help us, I think, in our world of care for patients.
(47:35):
David Bray, very quickly, a question from Twitter: how do you deal with ensuring that AI is used ethically in healthcare? There are competing agendas, including extracting value from patients for business.
So how do we manage these competing agendas and where do
(47:56):
the, where do the ethics come into play?
And very quickly, please. Two things that I think we need
to see more of. And I've seen this already work in the UK; we can do it in the US. In the UK they call them data trusts. Here in the United States, we call them data cooperatives. I volunteer with an effort called Birth to Three that tries to make sure every infant in the United States gets the necessary physical, mental and emotional care they deserve. Obviously data about infants is
(48:18):
very sensitive data. And so what we did is we used existing contract law to create a data cooperative that includes members of that community, including representatives, and they have a choice as to how and where the data is used. And we explicitly said at no point in time will this data ever be monetized. But then you can actually choose which systems it is used in.
And increasingly we're pushing for the algorithms,
(48:41):
if they're going to train, come train in situ with the data as
opposed to porting the data somewhere else.
This also solves the interoperability problem because
it's not about interoperability of the data elsewhere.
It's about can you bring the algorithms.
Now there's challenges with that, but you can do that.
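(A minimal sketch of "bring the algorithm to the data": each cooperative trains locally and only model parameters leave the site, which is one simplified version of the in-situ training David describes. This is a toy federated-averaging example on synthetic data, not the Birth to Three cooperative's actual setup.)

```python
# Toy federated averaging: each site fits locally on its own data; only the
# updated weights are shared and averaged. Synthetic data for illustration.
import numpy as np

def local_fit(X, y, w, lr=0.1, steps=100):
    """A few steps of logistic-regression gradient descent on one site's data."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def federated_round(sites, w):
    """Each site trains in situ; only the resulting weights leave and are averaged."""
    updates = [local_fit(X, y, w.copy()) for X, y in sites]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
sites = []
for _ in range(3):  # three cooperatives, each keeping its raw records local
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
    sites.append((X, y))

weights = np.zeros(4)
for _ in range(10):
    weights = federated_round(sites, weights)
print("learned weights:", np.round(weights, 2))
```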
The last thing I would also say is we may need to have more boards that include members of the patient community being
(49:02):
served by a clinical setting, involved maybe every three months or every six months, where we just explain what we're using the AI for and you can ask questions.
So, stakeholders both on the data and on the AI?
David Rich, very quickly, you mentioned the use
of Gen. AI to summarize complex patient
(49:23):
histories. Question from Twitter, who
should be held liable if the AI recommendations are
hallucinations and how would you even know that they are
hallucinations? Very quickly, please.
We have a real obligation as we develop
the tools to assure that they are accurate.
(49:44):
I often refer people to a study done by radiologists showing
that when mistakes were intentionally entered into a predictive algorithm, it led people astray, and the
residents were wrong 80% of the time.
Even the experienced radiologists were wrong 50% of
the time when the AI led them astray.
So I think it is a prerequisite that the tools we develop have
(50:08):
to be overseen, because when we build tools, we build them on
data that are frankly biased in nature because of inequities
within our healthcare system. So that's part of the quality
assurance process. We make a lot of errors in
medicine, and we're always struggling to try to make less
and less and to improve things. The same thing applies to any
(50:29):
technological tools that we have at our disposal.
As we finish up, I'm going to ask each one of you a separate
policy- and advice-related question, and I'll ask you to
answer quickly. Teresa, let me start with you.
Given all of this that we've been discussing, what advice do
(50:50):
you have for policy makers when it comes to technology such as
AI and healthcare? My top advice to them is they
have to learn. And I'm seeing them open to this, but they really have to learn, and their policies have to move faster. The technology is moving so much
faster than they're instituting policies.
(51:11):
And so they've got to learn and understand.
They've got to be open to new ideas.
They've got to be really on top of this.
But lastly, going back to what we discussed earlier, they need to be thinking about AI at the industry level.
Let the experts do their job. Don't do broad policies that
hurt the industry. Do something that allows the
industry to understand. Don't over regulate, but let
(51:32):
them do. They're already doing it.
Let them just own it and do it well.
David Bray, What advice do you have for technology developers
when it comes to AI and healthcare?
Very quickly, please. Learn from what happened
with the introduction of ultrasound for sonograms in the
healthcare space. Initially, when ultrasounds and
sonograms for pregnant women were introduced, there was
(51:54):
resistance. There was questions about the
efficacy. It had to be proven, But also,
how did you work it into workflow?
But now nobody would question. In fact, if anything, it might
actually be wrong to not do an ultrasound for a pregnant woman.
So it's worth understanding that there is a journey.
Be patient with that journey and figure out again, as David
Rich was saying, how do you plug into that workflow but also show
(52:15):
quality? But David Rich, what advice do
you have for healthcare leaders when it comes to AI and
healthcare? Surround yourself with a group
of forward-thinking people and
people who are perhaps even part of the old guard.
You need to have a brain trust and that brain trust has to be
(52:39):
there. It has to come together and
provide the best advice because it's becoming like the Wild West
out there and we have to have a way of digesting and, and really
facilitating better healthcare within our organizations,
reducing waste, cutting out costand assuring quality.
(53:00):
And you know, technology can be a double edged sword.
So we have to be sure that, you know, we do the best that we
possibly can. And the way to do that is to
surround yourself with the best minds.
Makes sense? Surround yourself with the best
people. And with that, a huge thank you
to Doctor David Bray, distinguished chair of the
Accelerator at the Stimson Center, Teresa Carlson,
(53:23):
president of the General Catalyst Institute, and Doctor
David Rich, Chief clinical officer of Mount Sinai Health
System and president of the Mount Sinai Hospital in New
York. And a huge thank you to all of
you folks who watched and who asked such great questions.
Check out cxotalk.com, subscribe to our newsletter, join our
(53:45):
community, and come back, come back next week.
We have two members of the House of Lords next week talking about
AI and the ethical aspects and regulation.
So join us. Thanks so much, everybody.
I hope you have a great day and we'll talk to you soon.