
April 8, 2025 | 35 mins

Andrew Mahler, Vice President of Privacy and Compliance Services, Clearwater, speaks with Drew Stevens, Of Counsel, Parker Hudson Rainer & Dobbs LLP, about the intersection of artificial intelligence (AI) in health care and the evolving landscape of nondiscrimination regulations. They discuss the significance of the final rule on Section 1557 and nondiscrimination in the use of patient care decision support tools, legal frameworks that apply to the use of health AI, how the “deliberate indifference” standard might be applied, how hospitals and health systems can demonstrate they are not being deliberately indifferent to potential discrimination risks in their AI tools, and enforcement trends. Drew recently authored an article for AHLA’s Health Law Weekly about this topic. Sponsored by Clearwater.

AHLA's Health Law Daily Podcast Is Here!

AHLA's popular Health Law Daily email newsletter is now a daily podcast, exclusively for AHLA Premium members. Get all your health law news from the major media outlets on this new podcast! To subscribe and add this private podcast feed to your podcast app, go to americanhealthlaw.org/dailypodcast.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.

Speaker 2 (00:04):
Support for AHLA comes from Clearwater. As the healthcare industry's largest pure-play provider of cybersecurity and compliance solutions, Clearwater helps organizations across the healthcare ecosystem move to a more secure, compliant and resilient state so they can achieve their mission. The company provides a deep pool of experts across a broad range of cybersecurity, privacy, and

(00:28):
compliance domains. Purpose-built software that enables efficient identification and management of cybersecurity and compliance risks, and a tech-enabled 24/7/365 security operations center with managed threat detection and response capabilities. For more information, visit clearwatersecurity.com.

Speaker 3 (00:53):
Welcome everyone. This is Andrew Mahler, Vice President of Privacy and Compliance Services at Clearwater, where we help healthcare organizations move to a more secure, compliant and resilient state. Welcome to the AHLA podcast on health AI and non-discrimination compliance. In today's episode, I'll be speaking with Drew, exploring the critical intersection of AI and

(01:14):
healthcare in the evolving landscape of non-discrimination regs. Our guest today, Drew Stevens, is of counsel at Parker Hudson Rainer & Dobbs in Atlanta, Georgia. Drew brings a real wealth of experience in complex litigation, with a particular focus on healthcare law. His practice involves counseling hospitals and

(01:35):
health systems on compliance with federal non-discrimination laws, including Title III of the ADA, Section 1557, and Title VI of the Civil Rights Act of 1964. And he represents healthcare providers in litigation under these statutes and in civil rights investigations conducted by the DOJ and the Office for Civil Rights

(01:58):
within HHS. Drew recently authored, and hopefully you all had a chance to take a look at this, a really insightful article on the implications of potential changes to Section 1557 and their impact on health AI non-discrimination compliance. And today we'll dive into this topic, exploring some of those challenges and considerations

(02:20):
that providers face in ensuring non-discrimination in their use of AI tools. So with all that said, Drew, welcome to the podcast. Really, really excited to have you here to share your expertise.

Speaker 4 (02:33):
Thank you, Andrew, for that excellent introduction, and thank you to Clearwater and AHLA for having me.

Speaker 3 (02:39):
Great. Well, excited to have you here. So let's just go ahead and dive in. So again, your article, I thought, was very, very insightful, and I don't know if the podcast link will link to the article, but certainly it's something that I think all folks in this area should take a look at and read.

(03:02):
But let's just start at the beginning. If you wouldn't mind, Drew, could you talk a bit about the significance of the Biden administration's standard under 1557 on non-discrimination and the use of patient care decision support tools, and how does this affect decisions related to AI in the

(03:23):
provider context?

Speaker 4 (03:25):
Absolutely. So taking a step back, many listeners will recall the original Obama-era regulation under Section 1557, which did not directly address the use of AI in healthcare. Then, of course, the first Trump administration revised that regulation. And then the

(03:46):
Biden administration, when it came into office, issued a proposed regulation that included a generic prohibition on discrimination in the use of what they called clinical algorithms. That was back before the ChatGPT revolution began. It was that long ago. And the Biden administration took quite a while to finalize

(04:09):
its final regulation under Section 1557, of course, the non-discrimination provision of the Affordable Care Act. And when the final regulation came out, gone was any reference to clinical algorithms, and in its place was this term: patient care decision support tools. And so the Biden

(04:31):
final regulation took so long to come out in part because of the importance, the emerging importance, of AI. And so it can't really be overstated what a watershed foundation the non-discrimination in the use

(04:55):
of patient care decision support tools provision laid, which we can refer to with the shorthand health AI, although it's technically broader than that. The Biden administration's final rule imposed this ongoing duty to make reasonable efforts to identify uses of health AI that use variables that measure race,

(05:15):
color, national origin, sex, age, or disability, these classic protected classes. So it created this ongoing due diligence standard to undertake reasonable efforts to identify those uses, and then imposed, for any such use that you identify, an ongoing responsibility to take reasonable efforts to mitigate the risks of discrimination in

(05:38):
those health AI tools. So truly a watershed regulatory standard was imposed there that is still on the books. It is not yet in effect; under the regulation, the effective date was delayed until May 1st of this year. So in a little less than two months, this

(06:01):
regulatory standard is set to go into effect. The purpose of the article, of course, is to kind of answer the burning question of, if the Trump administration does away with this standard in whole or in part, what will be left in its place? So we can discuss that next, but we could also spend some time

(06:23):
elaborating on how HHS thought a healthcare provider should take those reasonable efforts, which we can almost circle back to because they have continuing relevance even if the regulation is rescinded. So that's a long way of answering your question and saying that the Biden administration's regulatory

(06:45):
standard created a whole new domain of AI due diligence in healthcare, especially as it relates to non-discrimination.
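
To make the due diligence duty Drew describes concrete, here is a minimal sketch, in Python, of how a compliance team might inventory its decision support tools and flag any whose documented input variables measure race, color, national origin, sex, age, or disability. The tool names, field names, and the flag_protected_variable_uses helper are hypothetical illustrations, not terms drawn from the rule or from any vendor's product.

# Hypothetical sketch: flag decision support tools whose documented inputs
# include variables measuring the protected classes discussed above.
PROTECTED_CLASS_TERMS = {"race", "color", "national_origin", "sex", "age", "disability"}
# Illustrative inventory; in practice this would come from the organization's AI governance catalog.
tool_inventory = [
    {"name": "sepsis_risk_score", "inputs": ["heart_rate", "age", "lactate"]},
    {"name": "no_show_predictor", "inputs": ["zip_code", "visit_history"]},
]
def flag_protected_variable_uses(inventory):
    """Return tools whose documented inputs appear to measure a protected class."""
    flagged = []
    for tool in inventory:
        hits = [v for v in tool["inputs"] if v.lower() in PROTECTED_CLASS_TERMS]
        if hits:
            flagged.append({"tool": tool["name"], "protected_variables": hits})
    return flagged
for item in flag_protected_variable_uses(tool_inventory):
    # Each flagged tool would then enter the mitigation workflow: reasonable
    # efforts to mitigate the risk of discrimination from that identified use.
    print(item)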

Speaker 3 (06:56):
No, I think that makes a lot of sense. And we're going to continue to dive into this, but I think the point that you were touching on just a minute ago is that even

(07:17):
if we see some pullback of enforcement, pullback of some of the rules and regulations under the new administration, there still is this ongoing conversation around what this looks like within the litigation realm, within the broader legal risk realm, for lack of a better phrase,

(07:38):
when we're deciding to use these types of tools in decision making, and really wanting to make sure that our clients, and organizations more broadly in this context, have the right controls in place and the thoughtfulness that I think is really

(07:59):
going to be required moving forward as we're thinking about these tools. So with all that said, I'm interested to hear from you in terms of legal frameworks that might apply. Your article explores this a bit.

(08:20):
If the Trump administration does renounce or pull back some of these regulations, could you talk a bit, for listeners who may not have read the article yet, about some of the main points around legal frameworks and constructs that may still apply even if we see 1557 pulled back or whittled down?

Speaker 4 (08:41):
Certainly, yeah. And, you know, to your point, there are certainly going to be state laws that may apply to healthcare entities' use of health AI, and every healthcare institution is going to do its due diligence from a safety, quality, and equity perspective. So that's all

(09:07):
outside the scope of the discussion that I'm about to have, which is focused on, to your question, if the Trump administration does renounce this Biden regulatory standard under Section 1557, what then? And this is something of a legal point, but

(09:27):
it's very important for in-house counsel and compliance professionals to understand as it relates to non-discrimination. It's a basic legal proposition that if a regulation is rescinded or renounced, the statutory authority that Congress passed into law continues to apply. So Section

(09:50):
1557 of the Affordable Care Act will continue to govern healthcare actors' use of technology, which would include health AI. There's not really a serious debate about that point. It's not as though the use of a particular technology would fall outside the purview of preexisting

(10:10):
federal civil rights statutes. And Section 1557, of course, incorporates preexisting civil rights statutes: Title VI of the Civil Rights Act, Title IX, the Age Discrimination Act, and Section 504 of the Rehabilitation Act. So these federal statutory authorities would continue to apply these non-discrimination

(10:32):
principles. So what I elaborated on in my article is that litigants and regulators are very likely to argue, for example, this deliberate indifference standard for intentional discrimination under these statutes. And before I dive into that, I should take a

(10:55):
step back and just explain, for listeners who may be unfamiliar with this, that you can divide these types of discrimination claims in healthcare, and even in the employment context, into two big categories. The first category is intentional discrimination, which I'll talk more about in this context shortly. The

(11:18):
second category would be unintentional discrimination claims, and these are often referred to as disparate impact discrimination claims. What's significant about these claims in general is that they are based on a facially neutral policy or practice that disproportionately affects a protected class. So you can see how a hospital or health

(11:42):
system's use of health AI would lend itself to this type of disparate impact claim. The trouble with that framework is that the Supreme Court, back in 2001, greatly limited the ability of an individual, a private party, to assert those types of disparate impact claims of discrimination

(12:04):
on the basis of race, color, or national origin. It's the same under Title IX for sex discrimination. So this has left a void: there is very little disparate impact discrimination litigation in healthcare or elsewhere in federally funded programs, because the federal government has not been active in pursuing these claims. So in the context of

(12:28):
health AI, if we assume no one is using it to intentionally discriminate against anyone or intentionally leave someone out of the benefits of health AI, then if that happens just as a result, that would fall under this unintentional discrimination type, the disparate impact framework. But if the Trump

(12:50):
administration, as expected, continues to not police those types of claims, which is a big if, by the way, because we are seeing the emergence of a fundamentally new and transformative technology, it is possible, of course, that regulators could take an

(13:12):
interest in policing disparate impact in the use of AI. And so we shouldn't gloss over that too quickly; we can't say for sure that there won't be any disparate impact enforcement in the use of health AI. But if we assume that there is not going to be much enforcement, which is the historic trend in that area, what you could be left

(13:33):
with is the framework under these preexisting statutes, which imposes a standard on institutions that requires them to take action to mitigate the known risks or known instances of discrimination. And so that is what this article really gets at:

(13:56):
how that type of deliberate indifference standard would apply in the context of health AI, which we can discuss more in a moment. But I'll turn it back over to you and see if all those frameworks make sense as a general proposition.

Speaker 3 (14:12):
No, it really does. I was going to say it's sort of a creative argument, but it's actually something that I think is very straightforward and makes a lot of sense. And I appreciate also what you were saying about being a bit cautious when we think

(14:34):
about trying to make predictions about the way the wind may be blowing, because we are seeing some enforcement that continues. And I think your point, particularly around 1557's reliance on other

(14:54):
civil rights laws, is an important note for people out there who are thinking, well, if 1557 goes away, then maybe we don't have to have policies or practices, or maybe we can sort of reshape our thinking around some of these rules. And I think

(15:14):
the point that you're making, that the answer is, well, not really, is a really important one, because there's still a lot of risk here. And I also think it's worth underscoring the point you made that, at least from my experience, and probably yours too,

(15:36):
we're not seeing malicious actors out there standing up organizations to try to design tools that intentionally discriminate against patients. These things are most likely going to happen in the ways that you described, in sort of unintentional ways, which is still significant, right?

(15:59):
But I think that's an important note you're making, and it really leads into sort of the bulk of your article around this deliberate indifference standard. And I know you've just shared quite a bit about the background there, but I don't know if you have any other thoughts, before we start to think

(16:20):
about best practices and operational questions, anything else that you think is worth sharing around how the standard could be applied to cases involving discrimination that results from the use of health AI?

Speaker 4 (16:36):
Certainly, certainly. So a great example came out of a recent Fourth Circuit Court of Appeals opinion. I just mentioned how private individuals, under that 2001 Supreme Court case, have been unable to bring disparate impact, unintentional discrimination claims on the

(16:56):
basis of race, color, or national origin; for decades now, that has been the law of the land. But a Fourth Circuit opinion recently demonstrated very clearly how an intentional discrimination claim in healthcare, under a deliberate

(17:18):
indifference standard, can provide an individual claimant with an intentional discrimination claim under these statutes. The case, in a nutshell, involved allegations that a hospital retaliated against a patient for complaining of discrimination and

(17:38):
terminated them as a patient. Ordinarily, for an institution, a hospital or health system, to be liable for the actions of its agents, you need to show some sort of knowledge of that event in order to have a claim against the larger institution or

(18:02):
health system itself. In this case, this individual was able to allege that a manager or supervisor was aware of the retaliation. So the retaliation is allegedly occurring at the level of her interactions in the hospital with staff, but she was able to allege that a manager and a

(18:23):
supervisor were aware and took no steps to correct the alleged discrimination or retaliation. And that allegation, that no steps were taken by a manager or a supervisor to take corrective action, allowed this individual to state a claim for intentional discrimination

(18:44):
against the hospital itself for deliberate indifference. So this recent case, driving home the deliberate indifference standard, which requires a deliberate or conscious choice to ignore something, shows how, in the context of health AI, you can imagine

(19:07):
scenarios in which individuals, even on a class action basis, or public interest groups, or public regulators and state agencies, could allege that a hospital or health system was aware of, knew about, certain disparities or discrimination

(19:28):
in the use of health AI and failed to take corrective action, such that it is intentional discrimination. So the point I was trying to make in this article is that, while no court has applied this framework to health AI specifically, what ends up happening, if this type of framework is

(19:51):
applied, is you have a scenario in which hospitals or health systems should continue to be proactive in taking steps to mitigate known instances of discrimination in healthcare. So it's almost as if, paradoxically, we're coming back somewhat to a portion of the regulation, which

(20:13):
imposed this duty to mitigate known risks of discrimination. So it's driving home the point that even if the Trump administration renounces this regulatory standard, the risks remain at the statutory level under this deliberate indifference standard. And therefore hospitals and health systems would be wise to

(20:35):
continue to do their due diligence, vet these products, monitor them, and have a process for responding to complaints. If they do not, they run the risk of a deliberate indifference type claim being alleged against them.

Speaker 3 (20:52):
Yeah, and I , I mean, this this case is, is,
you know, it was decided, youknow, really just last month,
right? So this is, I mean , I ,I think for , for many reasons,
I think your article andperspective is really timely.
But , um, I mean, I thinkthat's important for those
that, you know, that may not beas familiar with this case or,

(21:12):
or some of the background herethat this is, you know, this
isn't a case that was decided,you know, four years ago,
right? This is , um, this issomething that's, that's fairly
new. And , um, and , and I , Ithink, again, you know, why
your perspective is , uh, isreally important. Um, I , I

(21:33):
think probably a really goodsegue here is , is to talk a
bit about sort of theoperational and, and practical
steps here. So there's, youknow , you sort of, I , I think
done a , a fantastic job, youknow, walking, you know, sort
of walking me through the, thebackground here. Um, but for
those that are just saying,okay, so what do we do now, you

(21:55):
know, we've, we've got thisfourth circuit opinion. Um,
we've got a lot of , uh, sortof nebulous gray area around,
you know, this administrationand 1557 and, and even ai. Um,
what are some of yourrecommendations and, and
thoughts around practical stepsthat, you know, hospitals,
health systems providers cantake to, to demonstrate that

(22:18):
they're not being deliberatelydifferent , um, to potential
discrimination risks in , uh,in AI tools?

Speaker 4 (22:25):
Yeah, absolutely. Great question. It's the question. At bottom, under this framework that I've been discussing, a hospital or health system should be prepared to demonstrate that it took reasonable steps to mitigate the known instances or

(22:47):
known risks of discrimination. So at a bare minimum, you would want to be able to demonstrate that you had a process to receive, review, and evaluate complaints about the use of health AI. And when complaints or risks of discrimination or

(23:10):
disparities or inequities are surfaced via a process, or even if they're surfaced outside of the established process, that there was a reasonable effort to investigate, to review, to gather the facts, gather the data, and then of

(23:31):
course the real hard work would begin of deciding whether this is legitimate, what is the extent of it, how prevalent is it, how much exposure is being created. And, you know, again, I don't want to overlook that this raises safety, quality, and equity concerns anytime we're

(23:54):
talking about discrimination. So while I often speak in terms of legal risks, that's maybe the third or fourth consideration for a hospital or health system when we're talking about tools resulting in disparities, whether clinical disparities or access disparities. So evaluating it

(24:17):
from a safety, quality, equity, and then legal risk perspective in deciding how to address it, that will really be the hard work. But this process of receiving complaints is not unlike the grievance procedures

(24:39):
that hospitals and health systems have had to have in place for some time. You can think of it as a similar type of obligation: receiving the complaints, evaluating them, and responding appropriately is really what the deliberate indifference standard would

(24:59):
impose. So training staff on that, training clinicians on that, creating those processes, having the governance structures around health AI, which are already being stood up across the nation, and

(25:20):
building this type of process into those frameworks will be critical to showing that a hospital or health system was not deliberately indifferent. And providing the subject matter expertise or the training on these tools will be key as well, to enable the individuals tasked with responding,

(25:41):
reviewing, and acting to do so effectively. So the task is very complex. It's much easier said than done, and it requires tremendous interdisciplinary collaboration between chief AI officers, chief medical officers, frontline staff, patient quality and safety, and of

(26:05):
course in-house counsel and compliance professionals as well.
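
As one way of picturing the complaint intake and triage process Drew describes, here is a minimal sketch, in Python, of an auditable record a governance team might keep to show receipt, review, and response. The field names, status values, and the triage_complaint function are hypothetical illustrations, not drawn from the rule, the Fourth Circuit opinion, or any particular institution's program.

from dataclasses import dataclass, field
from datetime import date
# Hypothetical complaint record for a health AI governance program; the goal
# is an auditable trail showing the complaint was received, reviewed, and acted on.
@dataclass
class AIComplaint:
    received: date
    tool_name: str
    summary: str
    protected_class_concern: str            # e.g., "age" or "disability"
    status: str = "received"                 # received -> under_review -> resolved
    actions_taken: list = field(default_factory=list)
def triage_complaint(complaint: AIComplaint) -> AIComplaint:
    """Route a complaint into review and log the first corrective step."""
    complaint.status = "under_review"
    complaint.actions_taken.append("Assigned to AI governance committee for fact gathering")
    return complaint
c = AIComplaint(
    received=date.today(),
    tool_name="discharge_planning_model",    # hypothetical tool name
    summary="Family reports older patients steered away from home-care referrals",
    protected_class_concern="age",
)
print(triage_complaint(c))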

Speaker 3 (26:09):
Yeah. You know, having helped organizations walk through assessing AI risk, I think your point about interdisciplinary collaboration can't be emphasized enough. We sort of look at

(26:31):
frameworks like the NIST AI Risk Management Framework, which, for those of you who haven't looked at it yet, I encourage you to do so; NIST has published a playbook that accompanies the framework. And when you take a look, whether you're using NIST or other types of

(26:54):
frameworks, it starts to give you this picture of the way in which these kinds of assessments around ethical use of AI, non-discrimination in AI, and bias in AI really require everybody to be talking together. And so you mentioned the

(27:14):
patient grievance process, and those are people that we often speak to when we're doing these types of assessments, because that feedback loop, what they're hearing from patients around clinical decision making, matters. Granted, the patient may or may not be aware of how AI is being used, but patients

(27:36):
oftentimes will have a sense that something maybe felt different about their treatment, and those feedback loops are incredibly important. I'm just sort of talking here for a minute, Drew, but to elaborate on your point, it's one thing to have the governance in place and the policies in place, and maybe even the

(27:59):
training in place, but it's another question whether the chief compliance officer is meeting with clinical IT leads, AI development leads, and the data scientists who are helping to develop and manage this within clinical settings, because

(28:21):
these systems are being developed very quickly, and in many ways, I think, for good reason, right? There's a lot of benefit that happens because of these systems. But with the fast deployment and fast development, there are risks. If you're not monitoring the quality, if you

(28:41):
don't have the right audit mechanisms in place, the right committees in place to have these meaningful discussions, it can just make things more challenging. I think maybe that's what I've seen, and I don't know if that resonates at all for you. I know I've sort of talked a bit all over the place, but, yeah.

Speaker 4 (29:01):
It does. It , it does. I think that in, in some
ways what we're discussing interms of non-discrimination is
the, the larger challenge in,in deploying, vetting,
deploying, monitoring, thesetools in general. And this is
just one more additional , uh,domain that That's right. Ha

(29:23):
has continuing relevance, notwithstanding the , the change
in the regulatory standard thatwe're all expecting.

Speaker 3 (29:29):
Yep . No, I , I think that's, that's a great
point. Fantastic point. Um,well, I think maybe one, one
way we can sort of, I thinkwind down the, the discussion a
bit is, you know, I , I know ifyou had a crystal ball, and I,
I know that, I know that wedon't, but, you know, for those
of , you know, for those folksthat are listening and saying,
well, you know, where is thisgoing? Um, you know, can you,

(29:53):
you know, what are yourthoughts, you know, your own
personal opinion, you know,where do you see enforcement ,
um, of non-discrimination andai, you know, changing under
this administration, you know,particularly around, you know,
disparate impact claims. And,you know, maybe just sort of a
tied question to it, if youwanna talk about this too, is
just any other thoughts aroundchallenging aspects of ensuring

(30:15):
compliance in this, in this,you know, sort of rapidly
evolving field?

Speaker 4 (30:20):
Certainly. So, as I mentioned it , it's, it's safe
to expect no large shift indisparate impact , uh, policing
or enforcement at the federallevel. I, I, I, I hesitate to
say that because , uh, again,this is, we are seeing the
emergence of such a significanttransformative technology that

(30:44):
it, it's not outside the realmof possibility that , uh,
regulators would see the needto be present in this space and
offer guidance to the industryand to, to seek voluntary
corrective compliance , uh,before they , uh, a matter is
turned over to the Departmentof Justice for enforcement

(31:04):
proceedings, for example. Soit's important to, to keep that
, uh, in mind and monitordevelopments in that area. I do
anticipate , uh, privateindividuals , uh, public
interest groups , uh, at particmaybe on a class action basis
or maybe state certain states ,uh, alleging similar claims of

(31:27):
the deliberate indifferencetype frameworks. I think that
it, it cannot be ignored. Um,and the, the real, the real ,
uh, purpose of an article likethis in a discussion like this
is to put this on the radar ofin-house counsel , uh,
compliance professionals tounderstand that the legal risks
, uh, continue even despite theexpected change in the

(31:53):
regulatory standard. So, toanswer your, your second
question , uh, challenges andcompliance, apart from
everything that we've alreadydiscussed , um, what, what I
would like to note is it , inone sense, this discussion
would , would suggest thathospitals and health systems,

(32:13):
their AI governance committeesand everyone , uh, involved in
that effort would be wise tocontinue doing everything that
they're already doing, perhapswhat they were doing with an
eye towards the regulatorystandard. And what do I mean by
that is it would make no senseto not vet , um, AI tools and

(32:34):
do your due diligence on thefront end on a
non-discrimination , uh, uh,basis, and, and leave it to a ,
a later day to discover that aparticular tool , uh, results
in a lot of , of drift resultsin disparities , uh, creates
all kinds of problems down theroad. So if you know that this

(32:54):
, uh, legal analysis is, thisframework is kind of hanging
over, you know, the use ofhealth ai , it, it might,
should incentivize hospitalsanalysis to continue to do
their due diligence on thefront end to do their due
diligence in monitoring. Um, sothey can try to get ahead and

(33:15):
minimize these types of claims.
And then, of course , uh, when, uh, actual issues arise as we
discussed , uh, reasonablesteps should be taken to, to
mitigate and address them. Um,so I just wanted to note that
it, in a way, it's, it's almostas if , um, this framework
provides the same incentivesthat HHS was aiming for with

(33:37):
its regulation. Now, you know,as a technical matter, it ,
it's likely that the regulationunder the Biden administration
is not a perfect fit with thestatutory standard that we're
discussing , uh, you know, as alegal matter. Mm-hmm
. I , I don'tthink that's at all what I'm
saying, but what I'm saying isthe incentives to do your due
diligence to monitor thesetools and then to , to address

(34:00):
issues, the same incentives areprovided with this deliberate
indifference framework thatwe've been talking about.
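
To make the front-end vetting and ongoing monitoring Drew recommends a little more concrete, here is a minimal sketch, in Python, of one way a team might compare a deployed model's outcomes across demographic groups and flag gaps for human review. The group labels, the 10-point threshold, and the check_group_disparity helper are hypothetical choices for illustration only; a real monitoring program would be designed with clinical, statistical, and legal input.

from collections import defaultdict
REVIEW_THRESHOLD = 0.10  # illustrative: a 10-percentage-point gap triggers human review
def check_group_disparity(predictions):
    """predictions: iterable of (group_label, model_flagged_bool) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, flagged in predictions:
        totals[group] += 1
        positives[group] += int(flagged)
    rates = {g: positives[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap > REVIEW_THRESHOLD
sample = [("group_a", True), ("group_a", False), ("group_b", False), ("group_b", False)]
rates, gap, needs_review = check_group_disparity(sample)
# A flagged gap would feed the review and mitigation workflow discussed earlier,
# documenting that the organization did not ignore a known risk.
print(rates, gap, needs_review)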

Speaker 3 (34:07):
Yeah, I , I, I think it's, it's a, I mean, again ,
um, can't sort of underscoreenough, I think a really
important point that you'vemade, you know, today and, and,
you know, points that you'remaking in , in your article,
which just as a quick plug ,um, HLA , this was published
February 28th of this year. Sothose that are, that are
looking to pull this up , um,uh, Drew's Drew's article is,

(34:30):
is there, I encourage everybodyto, to take a look, dive a bit
deeper and , um, to, to reachout with, with any questions.
And you know, drew, gr reallygreat talking with you today,
really appreciate yourperspective, a very timely ,
uh, I think really interestingand, you know, a , a a , a
unique take on the , all ofthese sort of conversations

(34:52):
that are, that are swirlingaround now around a , you know,
the use of AI and, andparticularly the use of AI in
clinical settings. So, reallyappreciate your time. Thanks so
much for, for taking the timeto, to talk today and , um,
really just wish you the best.
So absolutely . Thanks.

Speaker 4 (35:07):
Yep. Thank you, Andrew, and thank you, Clearwater and AHLA, for the terrific discussion.

Speaker 2 (35:16):
Thank you for listening. If you enjoyed this episode, be sure to subscribe to AHLA's Speaking of Health Law wherever you get your podcasts. To learn more about AHLA and the educational resources available to the health law community, visit americanhealthlaw.org.