Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jeff Byers (00:39):
Hello and welcome to Health Affairs This Week. I'm your host, Jeff Byers. We are recording on 03/07/2025. Before we begin, I just wanna give a quick shout-out that as a special insider event next week on March 19, we will have an hour-long session on Medicaid's uncertain future. Check out the details for that in the show notes, and join to become an
(01:02):
insider so you can join us then.
Today, I'm joined by Brian Anderson. Brian is the CEO of the Coalition for Health AI, or CHAI. It's a nonprofit focused on developing guidelines and best practices for AI use in health care. Brian, welcome to the program.
Brian Anderson (01:19):
It's so great to
be here, Jeff. Thanks for having
me.
Jeff Byers (01:21):
Yeah. So for listeners who may not know, are you able to give us just, like, a thirty-second pitch of what CHAI is all about and what you do?
Brian Anderson (01:29):
Yeah. So CHAI was started four years ago in the middle of the pandemic, when a number of private sector organizations were coming together. Of course, we were, you know, solving the pandemic at the time, but we began appreciating the kinds of impact that you have when you have nontraditional companies that are inherently competitive working together. And so we asked, is there something we could be doing in AI? And we looked at the concept of
(01:50):
responsible AI and developing consensus, technically specific best-practice frameworks on what responsible AI looks like for a developer developing it, a deployer deploying it, be it a health system or a payer or a life science company, and what it means to manage and monitor AI tools over time.
And so, really, that's been the focus of CHAI since we started: developing consensus-driven best-practice frameworks for what
(02:12):
responsible AI looks like in health.
Jeff Byers (02:15):
Thanks. Thanks for giving us that quick overview. We'll just get into it about what you're kinda seeing on the policy landscape. You know, in January, the then-Biden administration released a strategic plan for health care AI. This document has since been removed from the HHS website, as far as I can tell.
The New York Times reported that FDA layoffs were decimating
(02:36):
teams reviewing AI and food safety. Then today, I read a report that the FDA is trying to bring back some of those employees. So it's hard to tell exactly what the federal health care agenda is for AI. You know, what are you seeing in that landscape?
Brian Anderson (02:51):
Yeah. You know, I mean, it's obviously been a transition, and a period in which I think many of us are, you know, very interested in understanding where the current administration is going to take, you know, policy and their priorities. I think some things that have certainly stood out to me, that I'm really excited about, honestly, are, I
(03:12):
think, an interest in engaging private sector organizations like CHAI. CHAI is fundamentally led by private sector stakeholders, from technology companies to health systems to payers to patient community advocates, and in working with those communities to develop the kinds of guidelines and guardrails in our space, which of course is health AI, rather
(03:33):
than having, you know, top-down regulation developed and instantiated by a regulator.
And so, you know, I think there's appetite and interest in partnering with organizations like CHAI, because that's our focus: a private sector-led effort to develop those kinds of internal, self-regulatory guidelines and guardrails. And so we have excitement in that. I think the
(03:55):
second big area of focus that I'm also very interested in exploring further is an interest in investing in infrastructure and technology assets that help accelerate innovation to keep us economically competitive. And this, of course, is, you know, very core to the mission of CHAI: to develop AI that is
(04:15):
responsible, that serves all of us. And to do that, you need to have the kind of infrastructure investments across, you know, our nation, to support bringing online different datasets, from rural communities in Appalachia to the heartland to urban environments as well. And so we're seeing, I think, a lot of interest at a policy level in exploring how to make those
(04:38):
technology investments to spur innovation, to drive economic competitiveness, and we're excited to be part of those conversations.
Jeff Byers (04:44):
Great. Thanks for that overview. Is there anything else that you can give us about, like, what you're seeing from the federal policymaking side?
Brian Anderson (04:51):
I mean, I think, you know, some more specifics: one area that really excites me is, you know, there's always been this promise and hope that technologies are going to bend the cost curve in health care. And I would say, you know, certainly during my lifetime, I have not seen that. In the fifteen, twenty years that I've
(05:11):
been in digital health, the costs for health care delivery have only gone up. I think AI has a unique opportunity to potentially drive down cost in novel ways because it's such a powerful tool. And I see interest in exploring that at a policy level.
And, certainly, I think there have been some announcements from, you know, various organizations that work closely
(05:33):
with the Trump administration around exploring things like how AI tools can be leveraged in value-based care, looking at Medicare Advantage, even looking at Medicaid or, you know, ACOs, and how they can incentivize the use of AI tools in those value-based care spaces to potentially drive down costs. And I think AI is really well situated to potentially
(05:55):
help in those areas. We already see, you know, private sector companies doing really impactful work with the patient lives that they cover in that space. So, you know, I'm excited to see that. I would love to see signals of costs driving down with greater adoption of AI.
So, you know, that's, I think, an example of something
(06:16):
specific, you know, that potentially has, you know, relevance to CMS. We'll see where it goes.
Jeff Byers (06:21):
So when we talk about AI, we're talking about everything, and it seems like everything under the sun can be AI these days. So when you think about, like, health care specifically, what are the current major capabilities you're seeing in AI across health care?
Brian Anderson (06:34):
Yeah. Well, so, actually, let me first push back on that statement you just made. So when we started CHAI, we intentionally chose the name health AI, not necessarily just health care AI, because health care, I think, connotes, you know, specifically looking at the traditional health care delivery system, with hospitals and health systems and clinics. CHAI is health AI because there is an additional area of focus
(06:56):
that we have, which is in the direct-to-consumer space. And this is, I think, where AI vendors building technologies can really have a dramatic impact as it relates to challenges in access to care for so many people.
Rural communities, as an example, or inner-city communities. You can have AI tools that are meeting individuals where they are. And so, just to put one kind
(07:18):
of marker in the sand: CHAI is very interested and focused on partnering with technology companies and patients, and doctors and nurses, in looking at different use cases that disrupt traditional health care delivery. You know, the patient goes to the doctor, waits in line, you know, in the waiting room, and then is seen by the doctor for fifteen minutes, and
(07:40):
then, you know, comes back maybe six months, three months, a year later.
I think some of the really exciting use cases that we're focused on supporting, developing best practices for, are in the direct-to-consumer space: empowering patients as consumers and creating the kinds of guidelines and guardrails that ensure that the tools being used can be trustworthy and can be safe and
(08:00):
effective. Other use cases: certainly in the traditional AI space, you know, I think there's a lot of excitement about robust, powerful expert classifiers helping with, you know, a variety of different clinical decision-making tools. I think there's a unique interest, and this is something I'm really excited about, in the use of both traditional AI tools and generative AI tools on the
(08:23):
administrative back end. Health systems are so challenged right now in terms of their margins and, you know, staying financially solvent. Developing and deploying tools that actually help them create efficiencies with coding, with billing, with scheduling, with staffing levels, I think those will be the tools that help them, you know, hopefully navigate this really financially challenging time. The generative
(08:46):
AI space, you know, is really exciting.
I can tell you that as a physician who was challenged with burning out, to see some of the tools that are coming online, like AI scribes that help doctors connect better with their patients, really means a lot. You know, there's this stat that is really sobering that I like to remind listeners of, which is that in medicine today, we have to graduate at least
(09:07):
two, if not three, medical schools of students every year to account for the providers who commit suicide every year. And that's sobering. In part, that is oftentimes because of the lifestyle challenges that providers have. They get burnt out, really burnt out.
And we have an opportunity here with some of these generative AI tools to help address some of those challenges in really
(09:28):
meaningful ways: to help providers better connect with their patients, to not spend inordinate amounts of time away from their wives and their husbands and their kids and their families. Enabling AI tools to do that, I think, is gonna be a really exciting space. And, of course, you know, there are the kinds of improvements in clinical outcomes that generative AI can offer, taking, you know, complex medical records and the number of research papers or publications that
(09:49):
doctors can't stay on top of, and helping doctors digest all of that, which could help, I think, save lots of patient lives. We're already seeing a lot of anecdotal stories about patients taking their medical records and uploading them into, you know, one of these frontier models in a secure way, and getting results that really, you know, can help drive, you know, a
(10:11):
diagnosis and treatment.
Jeff Byers (10:13):
Yeah. I wanna push back a little bit on that research thing you mentioned. I think people can keep up to date with health care if they subscribe to Health Affairs. Though I might be biased in that. Well, hey.
Brian Anderson (10:25):
I mean, we just published a recent article with the National Academy of Medicine with you guys, so I can't push back on that at all.
Jeff Byers (10:32):
That's right. Yeah. And I do wanna ask you about that specific article a little bit later on, but I'm glad you brought up the back-end and administrative uses for provider organizations. Because one of the things you hear about, and I'm gonna use AI a little broadly in this case, so feel free to help fill out those details, is whether they are generating the ROI for the cost that they are,
(10:53):
you know, the cost that they're incurring. So, like, when it comes to provider organizations and, like, hospitals and practices, what are their challenges for cost versus ROI?
Brian Anderson (11:03):
Let me create, I think, two frameworks here. So, on the clinical delivery side, there is a real challenge in identifying financial ROI because there's so often a misalignment of incentives. In terms of, obviously, providers: we wanna drive clinical quality and improvement in patients' lives. But oftentimes, that's not necessarily tied to
(11:23):
financial return for our health system, particularly if they're in, you know, the fee-for-service space. So you have challenges in deploying tools that may help a provider diagnose a patient more correctly or accurately, but that are not necessarily tied, with a very clear line, to an economic ROI for the health system that's, you know, outlaying millions of dollars to procure that tool.
(11:44):
On the administrative back-end side, I think it's a little bit easier. I'll take the use case of coding. Right? And so physician organizations are oftentimes very concerned that they're missing, you know, a particular DRG or CPT code in a, you know, long narrated note or a complex hospital visit, and that you're leaving money on the table. We are hearing, and I haven't yet seen, you know, a wealth of
(12:07):
knowledge published in this space, but we are hearing that there are, I think, easier ways to specifically identify where AI tools deployed in that kind of use case are driving better economic returns. Additionally, on the scheduling side:
so, you know, I think one of the real challenges for physicians and, you know, the scheduling teams that they work with
(12:30):
is driving schedule density and driving down no-shows. And so when you have tools that can help identify individuals who might be at high risk of no-showing, and then enable those health systems to take proactive steps to help get that individual to come in to the visit, you see your schedule density go up. Those sorts of things are easy to measure. And so the closer you get to,
(12:50):
you know, that billable moment, I think the easier it is to measure economic ROI. And so tools that help with schedule density and scheduling, tools that help with administrative billing, capturing all the codes in a complex, you know, hospital visit or outpatient visit: those are some areas where I'm seeing signals that are helping health systems, I think, get a little bit excited about some of these tools that they're using.
Jeff Byers (13:12):
You know, that made me wanna ask: you know, a lot of the concerns we've heard about with AI are about hallucinations over, say, patient diagnoses or something along those lines. And I imagine those are real concerns, but I actually wanted to ask you about potential hallucinations for, like, administrative data. Is that, like, possible when you're talking about coding and billing and things like that? Is
(13:35):
that a concern at all, or is that...
Brian Anderson (13:36):
Well, anytime, I think, you're working with a nondeterministic model, like a gen AI model, it is gonna be a concern. And so it's a risk. But I think what we're seeing as the space matures, as an example, is when you take an ensemble approach, meaning I'm building an AI product, and it's not just one model operating, it's multiple models. And the
(13:58):
case where we're seeing, I think, a lot of reduction in hallucinations is where you have, you know, an agent or a supervisory model that is watching out for particular kinds of hallucinations, and it's specifically trained to identify hallucinations and catch those hallucinations that one of the, you know, earlier models in the ensemble might
(14:19):
have created, and then stopping that and reducing that risk of hallucination.
That's a really exciting space, I think, where we're seeing a significant reduction. I would say, additionally, more robustly trained models on high-quality data, with appropriate kinds of, you know, additional tuning steps and
(14:41):
training steps that vendors, I think, are discovering, reduce the kinds of hallucinations and mitigate that risk. It doesn't drop it to zero, but it reduces it. And so, you know, I'm excited to see that. I think it does reinforce, though, that in some of these consequential use cases, like creating a claim bill, you still want a human in the loop to
(15:02):
review things until these models become, I think, more accurate, and until we're more clearly able to measure how these models perform in certain use cases.
Jeff Byers (15:15):
So you mentioned
you've been in the field for
fifteen to twenty years. So Ihave to assume that you saw
early stages of adoption forEHRs. Is that correct?
Brian Anderson (15:25):
Yeah. I mean, one of the first jobs I had in the digital health space was helping to lead some of the product development and clinical services that we offered at Athenahealth, which was the EHR company that I worked at.
Jeff Byers (15:36):
Do you see any parallels between the adoption of EHRs and AI?
Brian Anderson (15:41):
Yeah. Well, certainly with, you know, any kind of digital tool, there are gonna be the early adopters and the folks in the middle and then, you know, the late adopters. And I think we're certainly seeing a lot of similarities in that in general. So you have, you know, the technophiles who are excited about any kind of digital tool and leaning in and using them as much as they can. And you're
(16:03):
gonna have the folks who, you know, aren't as excited about technology and will be, you know, later in adopting these tools.
Additionally, I think, you know, in the early days of EHRs, these were very expensive pieces of technology to deploy and to configure and to use and train folks on. And so it was mostly limited to health systems that were well resourced and funded and had, you know, armies of IT behind them to help do that
(16:26):
configuration and maintenance. I think we're seeing similar signs of that holding true here, meaning you have a lot of academic medical centers and big health systems with big IT teams that are some of the first movers to adopt and deploy AI tools and technologies in, you know, really significant ways.
My hope, and this is one of the missions within CHAI, is: how do
(16:48):
we create the kinds of playbooks and guides to help the rural critical access hospital, or the federally qualified health center in an inner-city environment, that maybe doesn't even have a single person dedicated full time to be IT staff? How do you help those communities be able to get into the AI game?
One of the, I think, hopes, and the things that many
(17:10):
of us are excited about with AI, is that it doesn't necessarily put as much of a burden on an IT team to deploy it. Right? You don't need an army of informaticians standing at the ready to do data, you know, interoperability, because, you know, you have the magic of generative AI that might
(17:30):
be able to just, you know, do all that for you. Where I think the real challenge still lies, though, for these FQHCs or critical access hospitals is: how do they manage and maintain and govern these tools over time? There's still significant cost associated with that.
And so part of, I think, one of the more existential questions
(17:51):
that we're gonna have to address as a society, if we want to have these smaller-market clinics, rural clinics, be able to participate, is: how do we support them in AI governance and in monitoring and managing AI models over time, similar to the challenges that those clinics still have today with
(18:11):
EHRs.
Jeff Byers (18:13):
So we have about five minutes left, and I wanna make sure I ask you this question. I'm really glad you brought up, you know, kind of an entry into a question about workforce. So, your paper in Health Affairs was part of the Vital Directions series. Listeners may have heard my interview with Victor Dzau a week or two ago. Please check that out if you haven't.
(18:33):
If you haven't read the papers either, check those out on our website, or in the February issue if you're a subscriber. You and your colleagues write about promoting the development of an AI-competent workforce, which makes me wanna ask, as we're still in somewhat early stages for a lot of this: where do you see the workforce moving to, or how do you see the
(18:56):
workforce changing, whether it needs upskilling, or how will duties change? I know that's a very broad, sweeping question, but, like, how do you see the health care workforce shaping up in the wake of AI?
Brian Anderson (19:10):
Let me start with kind of a look to the future. So within five years, I certainly see doctors and nurses, as the standard of care, more often than not using AI tools. And so if you look at it from a paradigm where these are tools that the majority of doctors are gonna be using within five years as part of the standard of care, there is a very urgent set of questions that we need to
(19:34):
educate nurses and doctors around. And one of the principal questions, for any doctor who uses any kind of tool, be it a stethoscope or a scalpel or, you know, an otoscope that we use as part of our routine care in examining patients, and this is a question that every doctor should be asking, is:
is this tool that I'm about to use on my patient appropriate
(19:56):
for the patient I have in front of me? And in the space of AI tools, or digital tools, there's a lot to unpack in being able to answer that specific, important question. Questions like: well, how was this AI model trained? Was it trained on patients like the one I have in front of me? Do I know how the model performs
(20:17):
on patients like the one in front of me?
What are the indications for the model? What are the limitations of the model? How does it perform right now? Right? If my health system deployed the tool three years ago, and I know roughly what its performance was three years ago, does my health system know how it
(20:37):
performs right now?
Right? These are questions that I, you know, can call out to you and share with your listeners because, you know, I've been in the AI space and understand the principles of responsible AI. But that's just because I've had, you know, the privilege of working alongside and being educated in this space. Many doctors, frontline clinicians, and nurses don't have time.
(20:58):
They haven't, you know, been able to take classes in this space.
And so we have an urgent need to upskill our workforce to be able to ask and answer the right kinds of questions, to deliver care and use the right tools. The example I oftentimes give is: you don't use a pediatric stethoscope on an adult. You don't use an adult stethoscope on a pediatric patient. And so, similarly, you don't use an AI tool that's trained on a
(21:19):
specific population of people on a patient who comes from a population that is not represented in the data the model was trained on. You just don't.
That's just using the wrong tool. And so helping to educate doctors and nurses on that kind of information, and working with the vendor community to help enable physicians to have the transparency, to know where to get the answers to those questions,
(21:43):
is gonna be really important. And so it's gonna take not just educating and upskilling nurses and doctors. It's gonna take partnering with the vendor community to ensure that we have the kind of transparency to get those answers.
Jeff Byers (21:54):
Brian Anderson,
anything else you wanna plug
before heading out?
Brian Anderson (21:58):
No. This has
been great, Jeff. Thanks for
having me, and really appreciate the time.
Jeff Byers (22:02):
Yeah. Appreciate the time with you. Brian Anderson, CEO of CHAI. Thanks again for joining us today on Health Affairs This Week. And if you, the listener, enjoyed this episode, please send it to the AI skeptic in your life, and we will see you next week.
Thanks all.