
October 15, 2024 57 mins

After three years of leading the Coalition for Healthcare AI, Dr. Brian Anderson has seen firsthand how artificial intelligence is redefining healthcare, from diagnostics to patient care. In this episode of the ReAligned Healthcare Podcast, Dr. Yauheni Solad and Dr. Brian Anderson discuss the Coalition’s mission to set best practices for AI, creating a framework for “what good looks like” in the field. They explore the balance between embracing innovation and ensuring equity, emphasizing the importance of diverse representation in AI assurance labs and the need for inclusive AI training. The conversation touches on the ethical dilemmas, regulatory challenges, and the potential of AI to improve clinical outcomes while avoiding a digital divide. Join us for this insightful conversation about the future of AI in healthcare and what it means for both patients and clinicians.

For more details on this episode, visit www.realignedhealthcare.com 

This episode is supported by Dalos Partners — your boutique experts in AI strategy and governance, digital health, and interoperability.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
We're looking to build AI that's going to ultimately serve all of us. We need to be able to have assurance labs that are representative of all of us. There's a real cost to it, though, that I think we need to be honest with ourselves about. You don't necessarily need a call center to help schedule patients, because you can have an AI, but everybody in that call center for that large health system doesn't have a job anymore.

(00:20):
And as a doctor, it's, you know, at once terrifying and exhilarating to, you know, be part of a very challenging work environment and have AI come around and potentially disrupt all of that to create a new incentive structure.

(00:42):
Hello everybody, and welcome to the ReAligned Healthcare podcast. We have a great guest with us today, Brian Anderson, CEO and one of the founding members of Chai, the Coalition for Healthcare AI. Brian, great to have you.
Yauheni, thanks. Thanks for having me here. I'm excited to be here with you.
Brian, it seems that a lot of things have been developing with the Coalition for Healthcare AI, and unless you truly engage

(01:06):
in its day-to-day activities, it's very hard to even keep track of. Can you just quickly tell us what you're up to, how things are developing, and what you guys are currently building?
Yeah. So Chai started three years ago when a group of doctors, researchers, and technology companies came together trying to create a set

(01:29):
of technically specific consensus best practice frameworks in health AI. We recognized that we don't have that yet, and we urgently need it in a consequential space like health. You need to have a common, shared definition of what good looks like, and you need to have an ability to ensure that an AI model

(01:52):
is aligned with that definition. And so that's what we've been focused on. We've published a number of technical documents around this. We're standing up quality assurance labs to help validate that models are performing well and are being built in a responsible way. Our membership is amazing.

(02:13):
We have over 4,000 organizations that are part of Chai. So I'm humbled and honored to be part of it. Candidly, it's pretty exciting work that we're doing.
Wow. Large group, 4,000 participants. And it seems to be growing. So how do you even follow the pace of current innovation?

(02:35):
Because every time you open Twitter, there is yet another model or yet another new thing. And it's hard to test it as a user; I imagine testing it from an assurance lab perspective is even more complicated. So in this dynamic world, how do you

(02:55):
create a system that's robust enough to be safe, but at the same time doesn't prevent and slow down the speed of innovation?
I mean, at a high level, Yauheni, it's certainly an effort that is going to take a large community to be able to do it. So, you know, there's a variety of different efforts, and all of these require,

(03:17):
you know, communities that are essentially willing to lean in. Looking at the assurance lab one as an example, right, we could not create a capability of evaluating AI models on, you know, representative population sets of data across the US,

(03:40):
like one assurance lab couldn't possibly do that. So it takes a community. And so if we're looking to build AI that's going to ultimately serve all of us, we need to be able to have assurance labs that can test models against data sets that are representative of all of us. So you need, you know, one, you need a geographically diverse

(04:03):
set of labs that can draw patients from Appalachia, from inner-city Chicago, from rural Kansas, and, you know, the urban centers of, you know, Boston, where I'm at. And so that's been the strategy. It's been really to work with a large, diverse group of stakeholders. You can't create a set of frameworks in the technology space

(04:25):
that keep pace with the kind of innovation that's happening if all you have are the entrenched incumbent players, right? If you don't have startups that are innovating and disrupting, a lot of, you know, those entrenched incumbents that maybe have a completely different use case

(04:45):
or a completely different approach, then you're missing a lot of the relevancy of these kinds of frameworks, if you're not able to keep up. And so, you know, to that point, right, we have, you know, all the big entrenched incumbents,

(05:06):
but we have over a thousand startups that are part of this community as well. And so that's been our approach: diversity of stakeholders, different approaches, coming from different parts of the US.
And out of all those participants, do you guys have mostly people

(05:27):
from the industry, from AMCs, from representative groups? Because I think one of the important questions when you look at that is, how do you ensure that the people who create regulation, right, are also not the ones who disproportionately, potentially benefit from this regulation?
Yeah. So the short answer is,

(05:48):
we do have health systems that are in the lower-resource space. We also have a lot of academic health systems. Chai was started by four of them: Johns Hopkins, Stanford, Duke, and Mayo. And over the past three years, we've been growing the membership

(06:09):
to include a number of health systems that are, you know, in the rural space: federally qualified health centers, critical access hospitals. I mean, I'll be honest with you and say, do we have adequate representation? No, I would genuinely want more representation.
Yeah.

(06:31):
That kind of regulatory capture, where, as an example, right, if one of the regulations was that every health system has to have, you know, a robust AI governance committee, every critical access hospital would, you know, overnight be disenfranchised

(06:52):
from being able to participate in the AI revolution, because, you know, some of these places just don't have the staff or the resources to stand up, you know, a committee on the fly like that. AMCs might be able to, but not FQHCs. And so, you know, those are the examples of where we're trying to make sure that we have adequate representation and we give equal weight to those voices.

(07:14):
Perfect. So a quick shout-out for everyone who's listening who may be representing the FQHCs and the smaller systems: please reach out to Chai and join the revolution. What about patients? Because we're talking about industry players, we're talking about government, AMCs, and the clinics. But regular users, people who actually may benefit the most and potentially

(07:38):
also have the personal vested risks when these tools malfunction?
Yeah. No, I think that's a great question. Two thoughts on the patient side. So, you want patients to be at the center of every use case where, you know, it's relevant and it's appropriate in the healthcare delivery space.

(08:00):
Oftentimes, patients should be at the center of every one of those use cases. In some of the life sciences, like molecule discovery, you know, it's more scientific; that might not make sense. But in the use cases where we're developing the best practice frameworks, you want to make sure that a patient, as an example, is consulted in the design and development of the model.

(08:22):
For example, if I'm developing a model for breast cancer detection in middle-aged women on their mammograms, in the development of that model I might want to make sure that I'm talking to a representative sample of patients, and, you know, learning what they're experiencing, what they're going through with their doctor, and how the tool might help them and their doctor.

(08:44):
You'd also want doctors, right, radiologists or oncologists, to be part of those conversations. And so we have patient advocates in all those working groups working in Chai. The other one, I think, you know, maybe a little different way of thinking about it, is in this, like, direct-to-consumer or direct-to-patient space, which is sort of not in the traditional way we think about healthcare delivery, where,

(09:08):
you know, a doctor and a patient are in an exam room and, you know, having an encounter. In the direct-to-consumer space, it's more about, you know, every one of us is a patient, every one of us is a consumer. And in the health consumer space, you know, as an example, we have apps on our phones that we use. We go to websites that, you know, help us navigate, you know, whatever it is

(09:31):
that we're looking to navigate. And AI is very frequently used, sometimes opaquely, sometimes transparently. And so in that specific space we've launched a specific effort, with patients again obviously in mind, because it's, you know, technology companies coming directly to you: what are the responsible ways to develop

(09:56):
AI for use in the consumer space, or the patient-consumer space?
As a user, I have to say, that's my personal biggest concern. And the thing that I personally would hate to see is that we end up with something like our cookie banners

(10:19):
or something related to access to some of our devices. Why? Because the intention was awesome: we were trying to kind of give transparency about what's going on, what kind of data is being accessed. At the same time, the ultimate execution and the way the system was built around this became less educational and more of a nuisance for people,

(10:42):
because a lot of the time, unfortunately, people still just click through, right, and people still allow the application to access everything and take all the risks, admitting that no one, you know, actually reads whatever they're agreeing to. So how can we ensure that those regulations

(11:05):
and those informational parts are truly benefiting and helping people make smarter decisions, not just creating yet another pop-up nuisance for end customers and, frankly, also for clinicians, because we can do the same for practicing physicians, too?
I mean, it's the,

(11:25):
you know, really important space that you're talking about, which is where the rubber hits the road: when that tool is actually in use by that provider, where the doctor is experiencing that pop-up on the computer screen. And, you know, as a doctor, I'll be the first to tell you, you know, how you experience an electronic health record,

(11:46):
you know, varies from configuration to configuration. And, you know, part of how these AI models are going to be configured greatly, if not fundamentally, affects how effective they are going to be. You know, as an example, you bring up the pop-up story. There were many times when, as a practicing clinician,

(12:09):
I got a pop-up and I got annoyed at it. And, you know, I'll be honest and say there were times maybe I didn't even read it. I was just frustrated and trying to get through my encounter so I could spend more time talking to my patients and less time kind of looking at the things that were assaulting me on the computer screen. And that's an example of an AI tool that wasn't configured

(12:30):
right. Right, there is a level, the technical term is like human cognitive bias, where I am being biased to not listen to that output even though it might be the right output, the best output that I could take, the best recommendation I could take. But I'm not listening to it because cognitively it's not coming in at the right time in my decision making,

(12:51):
whereas another pop-up might come just at the right time, and it might be a horrible recommendation, but it caught me at the right time, and so I'm going to listen to it. And that, I think, is an example where it's really important that we're watching not just how well the model performs, but the actual clinical outcome.

(13:11):
Right. In this case, you know, it might be a recommendation for a specific therapeutic option for a patient I have in front of me. And so the right kind of metric to watch in these instances, where a model's deployed and being used in a clinical context, is: is it helping the patient? Is it leading to the clinical outcome

(13:32):
that ultimately, you know, I am a doctor here for? And so there's, like, a lot of different ways that we think about evaluating these models. But you point to one of the really important ones, which is that we're going to be watching how these models are working, you know, in real life, generating real-world evidence on their benefit or their harm.

(13:56):
You know, to speak a minute about, you know, drugs and therapeutics and other kinds of sectors of the health economy: we have surveillance, or post-market surveillance, networks for drugs, for medical devices, for vaccines.

(14:17):
We don't have that for AI tools. We know that when drugs are approved, they were approved because they showed that they benefited, you know, a specific group of people; you know, it might be as many as 50,000 people. That's a big clinical trial, but we don't know how it's actually working on the millions, tens of millions of people once it's approved.

(14:39):
Right. And so we have these post-market surveillance networks that monitor how they're actually working once doctors are prescribing them. And so we need something like that for AI.
That's great. I really like this point, and a lot of infrastructure is already being developed. And I think in your previous roles you contributed significantly

(15:00):
to infrastructure development related to FHIR. So, from a very practical perspective: if now you're talking about potentially monitoring real-world effectiveness as well as providing recommendations for deployment

(15:21):
in your recommendations from the Chai group, will you also recommend deployment processes? Because ultimately, if you have significant variability in the way you bring this model to a clinician or to a patient, you will have different effectiveness. So will you define it the same way as we define IV versus oral?

(15:44):
Will you define, okay, this particular model can be embedded in this view of the EHR, and if it's not, then we cannot guarantee effectiveness?
Yeah. So we're going to be launching a specific, fairly large effort early next year in the clinical decision support space.

(16:05):
It's going to be our clinical decision support, or CDSS, working group. And that effort is going to dig into this exact clinical configuration and utilization space. Chai, you know, is a fantastic organization with, you know, broad membership and many experts.

(16:25):
One of the really important groups that we're working with, that will be front and center in the CDSS working group, are the clinical societies, the specialty societies, right. We believe, I believe, that doctors trust the specialty societies that they're part of to be the stewards

(16:47):
and curators of the guidelines for, you know, specific kinds of diseases or specific kinds of protocols and treatment modalities, for, you know, everything ranging from, you know, heart disease to cancer. And it's not on Chai to try to create what those specific, nuanced, I'll say specialty-

(17:10):
specific considerations are. That's where we want to partner with the specialty societies to drill into exactly what you're describing, Yauheni. It's like, yes, we need to have an ability to say, like, this is what the best practice is in the use of AI tools, to make something up here, in the use of AI tools for helping to determine,

(17:32):
you know, what protocol might be best for a specific patient. And, you know, then the specifics might be, you know, the tool: we recommend that the tool give a recommendation before the encounter, and maybe, asynchronous to, you know, specific follow-up visits, there's some ability for the AI tool to help providers,

(17:53):
you know, make sure the patient is compliant on protocol, etc. That's just as an example. I wouldn't want Chai to make that on its own. I want to partner with the American Society of Clinical Oncology, ASCO, about that.
And that makes total sense. I personally would not trust guidelines on oncology coming

(18:15):
from Chai. I'm sorry, Brian. So it's great to partner with key societies, and I assume you'll be working with other groups that are potentially more heavily involved in informatics, like AMIA, for example, to find optimal decision support launching points and other things.

(18:36):
Okay, enough with informatics. We got a little bit deep for an average listener. So let's talk about exciting stuff, because ultimately all of this is in the name of realignment and optimization of things that are going on in the health care system. And where do you think a lot of this is moving? Like, based on what you see, and you're certainly seeing way more

(19:01):
than most of us, in the types of models and applications that are coming into the market, where do you think things will move in 2 or 5 years? I think ten is a little bit too much to predict.
Yeah. Gosh, there are so many ways to answer that. I guess maybe let me first speak just broadly in the health space.

(19:24):
You know, certainly AI is going to be more ubiquitous and ever-present in, you know, many of the day-to-day tasks that we have, and, you know, these large generative models are very capable of helping us navigate our day-to-day lives,

(19:44):
of which there are many kinds of components or intersections with our health. I think these models, these generative models in particular, will start to be trained or tuned with, you know, all of the kinds of health data that are unique to me, my health story, and how a model can help you both navigate,

(20:09):
you know, complex questions or issues, assist you in, you know, navigating a health system's scheduling or a benefit claim, or, you know, help answer, you know, specific questions that you may be looking to search for.

(20:30):
I think it's going to be really impactful and really important for all of us to get ready for, and, you know, obviously the importance in that as it relates to Chai is: how do we make sure that these models are actually advocating for you and what your values are and what your priorities are, versus mine,

(20:51):
or versus, you know, the CEO of the technology company in Silicon Valley that created the model, or who the model was trained on? I mean, like, these are, you know, really important questions that, as the use of AI and the use cases continue to expand and become part of our everyday life, we really need to think about.

(21:12):
Now, on the provider side, you know, I think within the next 2 to 5 years the majority of providers in the US are going to be using AI to assist them in clinical decision making. I think the potential benefit is just so large that it will very likely be, I don't know if it will be two years, probably closer to five years,

(21:34):
I will use the term a standard of care: that doctors should be relying on these tools to assist them in thinking about what kinds of diagnoses, what kinds of therapeutics might be appropriate and most beneficial to their patients. It'll help us with diagnoses, it'll help us with therapeutic options, it'll help us with screening patients for the typical kinds of cancers.

(21:55):
I think, also importantly, hopefully, and maybe this is the optimist in me, it'll help providers have a more fulfilling professional life and probably a better personal life, or a more well-rounded life. Burnout is such an issue; many of your listeners probably are aware of that already. My hope is that AI will help

(22:17):
unburden providers from some of the more mundane parts of their work that are really overwhelming and crushing a lot of nurses and a lot of doctors.
I like this vision, because a lot of us went to medical school to truly deliver care, not necessarily to do after-hours notes

(22:39):
and fill in the billing forms. So let's double down on this a little bit. So from your perspective and, you know, the things you see, right, what would be the evolving role of a clinician, nurse, physician? Because now a lot of the things that we traditionally, over the last decade, added onto the clinician's plate, documentation for example,

(23:03):
especially with a wider adoption of EHRs, can be done by smarter agents who can document based on my actions, my decisions, my discussion with the patient. So what do you think will be the main task? Because I'll tell you, as a physician, I hope that

(23:23):
the main goal will be somewhere around me spending more time with patients, not scaling me as a clinician to yet even higher cohorts of patients. So maybe share your vision, like where you would like things to go.

(23:43):
It's a great question. So I'm not a health economist, and, you know, I'll speak in this space maybe a little bit out of ignorance and certainly with a lot of humility. When I think about, you know, the health AI revolution that's happening, you know, one of the lessons I try to draw from is,

(24:06):
you know, when sewing machines were invented back in the, you know, mid-1800s, when they really gained steam and became more widespread, I think there was a lot of concern then, and, you know, there were many subsequent examples of innovative technologies that came

(24:27):
and were then widely adopted, but a lot of concern about disruption, workforce replacement, or, you know, it's only going to make me work harder and it's not going to improve the quality of life. There are many, many examples in the past; the example of the sewing machine's invention is just one where,

(24:50):
yeah, there was a lot of disruption and there was a lot of, you know, concern, strikes, that were going on during that time. But ultimately what it led to was, you know, a kind of flourishing for, you know, women's suffrage and the women's rights movement. There's a real reason why

(25:13):
the women's suffrage movement happened, you know, just a little bit after the invention of the sewing machine. Right? It freed a large portion of our population from a very mundane, very burdensome task of, you know, creating the clothing that we wear, and doing that very slowly. It created more opportunity for people

(25:35):
to do things that they found more value in, that they could be more impactful to society with. And, you know, the benefit, you know, we saw, you know, 10 to 20 years later with the women's suffrage movement. My hope, similarly, to connect the dots back here to doctors, is that it's not going to be just, you know, I'm just going to be scaled

(25:55):
now to see 50 patients instead of the 20 that I see now, but that it's going to create opportunity for doctors to lean into areas where they find more value. I think the call to arms for my fellow doctors is to do that, so that the narrative isn't one where you're just scaled to see 50 patients, but it's one where,

(26:19):
you know, I hope, your time is freed to more meaningfully engage with your patients in ways that no AI ever could. And doing that with the freedom and opportunity to not worry about, you know, I think as we're kind of talking about,

(26:40):
you know, the documentation and staring at the screen, because you've got an AI tool to do that for you. Now you can be a doctor and live in the moment in a very meaningful way that, you know, allows a rebirth, if you will, of the art of medicine. Powered by AI.

(27:01):
I like this vision. And to be honest, I think about this a lot, because if we go back, you know, 100 years, maybe even more, as a physician in a smaller practice you had an ability not only to practice, but to publish your own cases and do research. And right now, unless you have dedicated time in an AMC, this is pretty much a lost art.

(27:25):
The only thing you do as a frontline physician right now is to see your cohorts and work, in effect, through those settings to make things happen. So certainly the hope is that AI will allow us to bring this new efficiency and tooling directly to the frontline staff, where they can start doing discovery, as with the famous lead-in-the-water case.

(27:48):
That's...
But, you know, I need to just interrupt. Sorry to interrupt, but, you know, you're exactly right. Right, the frontline clinicians, their incentive structure, right, is to see patients, right? I get paid for the patients I see. A researcher, a principal investigator at an AMC, they get paid a salary, and oftentimes a portion of that salary

(28:08):
is just for the research time, right? And they're not necessarily incentivized to see as many patients. But I think you're touching on one of the key concerns I have, at least, and this is the reason why I'm sort of advocating that providers and nurses need to lean in and speak up about this: if we don't think carefully about the alignment of incentives

(28:32):
moving forward in a world powered by AI, we do run the risk of that dystopian future that you were alluding to, where it's, you know, one doctor seeing, you know, 100 patients simultaneously or, you know, something silly like that, because that's the current incentive structure, right? To see as many patients as you can, bill as much as you can, and move on to the next patient.

(28:54):
I don't want to live in a world where, you know, as a doctor, I'm now seeing none of my patients in a meaningful way and I'm having, you know, AI agents seeing 100 patients simultaneously. Yeah, I think you would lose a lot of doctors in that future.
Let's double down on this topic of discussion.

(29:15):
After all, we're on the ReAligned Healthcare podcast. Doctors will definitely, especially if you talk with some tech innovators, be expected to start monitoring ever more chatbots or agents that are taking care of patients. And I even heard concepts

(29:36):
where people propose that doctors can share their medical license, right, and hire bots to practice under their license with a form of risk sharing around that, which pretty much brings the model where the AI agent is almost like a mid-level, right, that you're

(29:56):
practicing with under a collaborative practice agreement. A very dystopian future, very dystopian. But it may happen. And, in the spirit of the podcast's topic of realigning health care, where do you think we can apply either policy or maybe financial incentives to ensure that's not where we're going?

(30:20):
Yeah. So I would say, at a policy level, one of the really exciting things about AI is, well, let me actually step back to an economic, a unit-economic, level. One of the exciting things about AI is that it really has the opportunity

(30:41):
to address, you know, the profit center, the cost driver, however you want to look at it, of health, which is the patient-doctor encounter. Right? We've had so many digital health revolutions, but none of them have really gotten to that inner circle of, like, a tool that is potentially capable

(31:03):
or more capable than a doctor to engage in a patient encounter, and that, you know, yes, that's the billable moment. And so when you think about creating policies in a future world where you need to really realign the incentive structure, because now, you know, the ability

(31:24):
to increase the supply side, meaning, you know, potential doctors or their chatbot overlords, isn't an issue, because I can create as many kinds of chatbots as I want for myself to practice medicine, we need to have a realignment of what we're actually looking to incentivize. It shouldn't be, and this is my thesis, it shouldn't be seeing as many patients as possible.

(31:46):
It really should be tied to the kinds of clinical outcomes that we want to drive. And, well, you might say, but that still could potentially incentivize a chatbot seeing a million patients simultaneously, if the chatbot is, you know, the best doctor in the world and still delivers, you know, outstanding care.

(32:07):
My response to that is, like, if that's the case, then yeah, that's the future we should be shooting for, because at the end of the day, we're doctors not because, you know, we want to be doctors; we're doctors because we want to help our patients live healthier, better lives. And if an AI tool ultimately can do that better than a doctor, then I think

(32:28):
we need to be very thoughtful in how we use that AI for patients, because they're the center of what we should be concerned about, not, you know, my job security.
I'm actually surprisingly okay with that. I don't know how many of your listeners will be. Now,

(32:51):
let's look from a population health perspective. Right? The unfortunate reality is that, at scale, we have a lot of care variability. It's just how it is, especially geographic variability in care delivery. And we're starting to come to a very clear conclusion, based on tools that are getting progressively smarter,

(33:12):
that if you deploy the tool at scale for very predictable conditions, it likely will do better than all of us as physician groups, on average. And that's okay. We just need to ensure that we are not putting patients into those echo chambers of chatbots

(33:32):
that you have no exit from, so that when, unfortunately, the chatbot is wrong or when the chatbot cannot find the proper way to manage you, you actually have a quick, easy way to exit the system. But I think as a primary system, we may get a lot of efficiency and safety back into that and allow us to reapply

(33:55):
a lot of our resources into something that's way more impactful, and that's beyond just the standard education phrases, right, that we say as a clinician after the visit.
Agreed. I mean, the, you know, the excitement is, like, what is that future exactly going to look like? And, you know, as a doctor,

(34:17):
it's, you know, at once terrifying and exhilarating, right? You know, many of us have spent decades training and preparing our lives to be the competent, trusted doctors that we are, and then to, you know, be part of a very challenging work environment for the past, you know, ten, 15, 20 years, to then have AI come around

(34:42):
and potentially disrupt all of that to create a new incentive structure. One that, candidly, I think maybe you and I both agree we need: a realignment around what these incentives are, if the supply side is potentially not going to be as big of an issue, or we need to at least think about it in a different way. What that future, you know, exactly looks like, you know, I'm hopeful about,

(35:06):
you know, what I said earlier in terms of doctors being able to lean into more meaningful parts of their work, and finding value, and hopefully, you know, the types of compensation that nurses and doctors should demand. But we'll see.
Do you think that our current health care system, set up with

(35:28):
significant, large for-profit entities that have been managing those insurers as well as owning physician groups, is positioned to even consider this type of role? Because when you're a for-profit entity, your goal is to drive shareholder value.

(35:49):
That is the definition of a corporation, and everything that does not drive shareholder value is kind of misaligned. Of course you have a lot of mission statements, and you're trying to do better by doing good, but in essence there is a very clear primary incentive, right, in the way you're doing business.

(36:10):
So just out of curiosity, and of course this is the opinion of Brian personally, not Chai: do you think our system is even capable of kind of creating a system where we're mostly focusing on individual benefits, even with technology, or have we kind of passed this point?

(36:35):
The optimist in me would say, you know, collectively we're capable of whatever we, you know, set our minds to, and the limits are only the things that we put in front of us around that. That may sound kind of trite or Pollyannaish, but,

(36:55):
you know, time and time again, when a technology or tool is invented, on the other side of that invention, you know, it's a different future. And oftentimes, if it's disruptive enough, it reaches the level of needing to change the overall frameworks for things like how we incentivize, how

(37:18):
we think about the economy in health. And, you know, for-profit health systems make a profit because, right, they drive value and their shareholders realize that. And I think, you know, that's a great example where, you know, the current incentive structure is rewarding health systems

(37:43):
that, you know, are very efficient and very good at having a loyal patient base that goes to see their doctors and gets care at the hospitals, etc., etc. And, you know, hopefully that's, you know, good quality care; and, you know, patients aren't loyal to hospitals or doctors that give bad care. And I would say, you know, a future world

(38:07):
with AI doesn't necessarily need or mean that for-profit health systems can't be part of a meaningful, you know, potential realignment of how we think about the practice of medicine. I think there is room for not-for-profit and for-profit entities to coexist and deliver, you know, outstanding care.

(38:30):
You have many technology companies that are now in the for-profit space that are really delivering, I mean, real, you know, impactful care. Look at what happened in the pandemic: like, over 90% of providers were using many technology companies' platforms to deliver care. And then you had technology platforms that have been following suit,

(38:55):
employing doctors to deliver care. And I think that's okay. It's up to patients as consumers, from my perspective at least, to really kind of separate the winners from the losers in terms of, you know, those organizations or companies that are delivering good quality care that's convenient, affordable,

(39:17):
timely, you know, whatever the metric is that is important to, you know, the consumer group; those companies will win.
I see it, and to be honest, when you look at the role of technology, the internet, or software, the main goal and role was always disintermediation, right?

(39:38):
So you can start doing something faster where before you had a middleman. And our system has a lot of middlemen right now. And I think one of the futuristic questions, especially when you talk about the payers and the role payers play, specifically across distribution of funds

(39:59):
or maybe finding appropriate care and helping to guard better care quality approaches: do you think we need as many middlemen, and do you think AI and the current breed of tools will help us make our system leaner, so we don't really have that many people standing between you as a clinician and me as a patient?

(40:25):
I hope so. I mean, it's a tough question to answer, because right at the end of that is someone who has a job, who has a family, who has, you know, kids that are depending on, you know, their parent to have a job to bring food to the table. And so, you know, we talk somewhat

(40:49):
loosely, if that's a word, about, you know, disruptive technologies and the benefit that they have. You know, there's a real cost to it, though, that I think we need to be honest with ourselves about. And, yeah, I mean, if these models create the kinds of efficiencies, just as an example, where you don't necessarily need a call center

(41:11):
to help schedule patients because you can have an AI tool that does that really efficiently, but everybody in that call center for that large health system, you know, doesn't have a job anymore. And, yeah, there's real efficiency that's created from that, you know, potentially a benefit to society. But how do we think about retraining

(41:33):
or upskilling those individuals that, you know, potentially lost their job? Again, I'm not an economist. I don't have all the answers here, but I know that's something that we need to honestly face, you know, square on, and not pretend that that's not going to be a reality for at least, you know, a significant portion

(41:56):
of the people that work in the healthcare delivery space.
It's a good question. And I think we're just verbalizing it and will not be answering it: if you have five people who are currently standing in your clinical journey or healthcare journey, between you getting acceptance for the drug or not,

(42:16):
do you care more about getting access to this drug faster and more efficiently, or about the employment of those particular five people? No further comment about this; I think everyone can quietly think about it for themselves. Let's switch a little bit, because you had so much experience building interoperability systems and bringing our systems together.

(42:40):
And it's been a huge change for us. We started with deploying EHRs, then we tried to connect them and liberate the data across them, and it's been a huge transformation. And I've only been around in this industry for a decade, but it feels like forever, because most of us started on paper, where getting a lab result on a printout was kind of the state

(43:03):
of the art and faxes still ruled the world, and it's still pretty prevalent, but not as much anymore. And now we're talking about integrating AI and super large language models and a lot of other futuristic stuff. At the same time, there is constantly the feeling that we could have done even more. We could have done even better

(43:23):
with everything we did with meaningful use and deployment. So with that in mind, as a person who will be on the frontline of a lot of these initiatives, what do you think we can learn from that process, so we do not repeat it during the current wave of AI deployments, especially knowing all the incredible value

(43:44):
that generative AI can bring?
That's a deep question. I mean, I guess part of, you know, one of the things that I've learned from all of the interoperability work that, you know, we've been focused on

(44:06):
for the better part of, you know, almost 20 or so years now, is, you know, to focus on what really matters. And, you know, as we've kind of, you know, gone through the interoperability journey, you know, part of the reason why we all launched that was because we wanted to improve health care, right?

(44:27):
Clinical outcomes for patients. You know, you go to one hospital and then you go across the street, and your medical record isn't across the street, which just happens to be where you landed in the emergency department with a potential heart attack, right? You want the doctors there to have access to your records from the other side of the street, the other hospital. And so we created all these kinds of steps of interoperability,

(44:49):
and we almost got lost in the forest of all those things. Like, as an example, you know, we came up with really important performance metrics that were not the actual clinical outcome that we were measuring, but, like, did you do X, Y, and Z? Did you, you know, test your A1C, which is a blood test for diabetes patients,

(45:10):
you know, every three months? Did you do this? Did you do that? Did you, you know, structure the data in such a way that, you know, you documented smoking cessation questions accurately? And I think that there was a lot of cost associated with creating, I would say, you know, perhaps some cumbersome and overly burdensome

(45:33):
hoops that people had to jump through, because we weren't focused on that outcome. And I think AI has the opportunity to allow us to really just focus on, you know, the things that matter and keep those things first. And if we're looking to kind of create the, you know, regulatory frameworks

(45:56):
that allow us to realize the whole potential of the benefit of AI in health, one of the lessons that I have appreciated from the world and the experience of interoperability is that we shouldn't create overly burdensome regulations and fail to put first the thing that we really have the opportunity to focus on.

(46:20):
So that's one. I think the other is, you know, it's so unfortunate that, you know, the interoperability adoption curve followed, you know, if I were to pull up a list of the most well-funded, resourced, profitable, you know, you choose your metric

(46:42):
that relates to income or revenue, list of hospitals and looked at kind of their adoption of, you know, the interop standards and the technology around them, it would, you know, follow that list pretty closely, in terms of, you know, the ones most resourced being the ones to adopt first, and the less resourced being the ones just now kind of getting into it,

(47:03):
you know, 15 years later. I hope that we don't have that same experience with AI, I really do. And so if we can think about creating the kinds of incentive structures to bring those lesser resourced health systems along, I think it's going to be so much more beneficial.

(47:28):
The risk, I think, is even more acute in AI, because in interoperability we're just talking about, you know, connecting pipes under the hood so data can flow, and maybe, you know, some hospitals have, you know, amazing, you know, steel pipes; for others maybe it's not so high quality, theirs might be not as good.

(47:50):
In AI, though, the tool itself, through processes like reinforcement learning or just initial training and tuning, is dependent upon having access to, you know, diverse, different kinds of data sets that oftentimes come from those lesser resourced health systems.

(48:12):
And so you can get into a virtuous cycle, or a very non-virtuous cycle, with AI, where if you start seeing a trend where those that are adopting it and making their data available for training, tuning, and improving iterations of model performance are only health systems that have, you know, highly educated, suburban, urban, you know, wealthy,

(48:35):
Caucasian people, you're going to have tools that are only going to be successful in those environments. And that would be a real shame and, I think, a disaster for our society, when you're essentially just reinforcing the digital divide in a way that is potentially very damaging.

(48:59):
I completely agree, and it's great to hear from the CEO of potentially the largest healthcare AI group that we're trying to build not overly burdensome systems. So please keep helping us to stay away from the overly burdensome stuff.

(49:21):
And I really like the part around the presence of a clear alignment with underserved groups and hospitals. Because we cannot build the system we all need without their participation, it's a very different value proposition compared to when we just need to ensure they're connected and not left out,

(49:44):
because here you truly cannot move forward without their participation. And that kind of switches the script significantly. So, Brian, over your career, and this doesn't uniquely apply to Chai or MITRE or other groups, you've seen all sides of the equation. You've seen the physician side, the payer side, and the patient side.

(50:08):
Where do you think we have the strongest misalignment? And if you had a magic wand, what would you do?
Great question. So this is Brian personally, Brian speaking, not the CEO of Chai.

(50:31):
You know, probably the greatest misalignment I, you know, see, maybe because of my personal experience as a doctor, is that there's misalignment in terms of how health systems and providers are incentivized. You know, I think there are obviously plans and payers out there

(50:56):
that incentivize providers around, you know, clinical outcomes. You know, look at Medicare Advantage, look at some of the accountable care organizations, ACOs, and others that are out there, you know, developing frameworks that are incentivizing and paying providers and nurses for their performance, for the clinical outcomes

(51:19):
that improve in the patients that they see. But that's a minority of, you know, the overall health care delivery space in the US. And even in those markets where, you know, we have Medicare Advantage, the incentive structures still aren't perfect.

(51:40):
And so you still have situations where you have doctors that aren't deriving meaning and value from their practice. You know, I think one of the more sobering statistics in this space, that I think speaks to this misalignment, is, right, in the US we have to graduate each year two whole medical school classes of physicians

(52:04):
just to account for the doctors that commit suicide each year. And it's a sad, sad statistic. I mean, I know many physician colleagues, classmates of mine in medical school, that aren't with us anymore because of that, because they, you know, committed suicide.

(52:25):
And I think it speaks to the profound misalignment in how medicine is practiced and delivered and experienced by providers and by patients. And, you know, if we can find a way, again, I'm not an economist or an expert in the space, but if we can find a way to realign

(52:49):
those incentive structures and the experience of work for providers, and, hopefully powered by AI, reach that kind of flourishing that I hope is on the other side of this revolution for doctors, you know, I think that'd be wonderful. And I'm ready to partner with,

(53:09):
you know, with all the physicians out there around that. That's why I think we have over 600 or 700 health systems that are part of Chai right now. We're in this together.
Those are certainly sad and sobering statistics when you look around, and I think, from my side, one of the calls to clinicians, especially now when there are

(53:31):
so many opportunities to contribute, is to get active, to ensure that you are heard, that you participate. I know it's hard, I know it's very hard to find time, especially in a busy day, but I don't think we will be heard unless we are talking about it. And now is the time. Now is the time when the tools are coming.

(53:53):
Now is the time when there is a real opportunity to change. And maybe, and it's interesting, I'm already seeing it across many clinical specialties where simpler practice settings, where you may not participate in insurance and just work as direct primary care, may also be an option.

(54:13):
So AI may give rise to a lot of other, alternative practice models. So overall, Brian, if you look at the next 5 to 10 years, do you feel optimistic? Do you feel neutral, pessimistic? What's your view?

(54:36):
I would say optimism, for sure. You know, there hasn't been an example yet in human history where we've had an innovation and the motivation to use it and to, you know, control it in a way that doesn't benefit human society.

(54:57):
Now, I mean, we've had unintended consequences. Just look at social media as an example, where you invent these tools for, you know, greater connection, deeper connection, all that, and you get fantastic unintended consequences. I mean, I have two teenage daughters; I see the consequences of social media. But at the same time,

(55:20):
the benefits of having real, meaningful connection to people, I would assert, you know, are equally valid and perhaps even more potent. In the health care space, you know, I really, you know, believe that even though AI is an invention, like,

(55:43):
certainly generative AI is an invention like none, I think, we've ever had before, to create the kind of superintelligence that is smarter than you or I or all of us combined, and there are different things that we need to consider in that. But I think with the kind of motivation that we have as humans to create the kinds of guidelines and guardrails that will protect us

(56:07):
and enable us to use these tools to serve all of us, I think we can do it. I think we just need to be very sober-eyed about the importance of ensuring that these tools are aligned to our values. If they're not, when we use them in consequential spaces like health, where

(56:32):
everyone's a patient, everyone's a caregiver, we're going to be in trouble if we don't have that kind of assurance around the alignment of these models to what we value and what we want for our health.
So thank you, Brian. And certainly we all wish you

(56:52):
success, because we all badly need this as a society, as physicians, and frankly, as patients. So thank you for stopping by for our episode. It's been an absolute pleasure to have you, and hopefully in a year or two we can bring you back so we can learn more about what's happened with all the models

(57:15):
that hopefully by then will be in production. So thank you.
Let's do it. Let's make it a date, Yauheni. I look forward to it. And thanks for having me here. It's been a real pleasure.