Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_01 (00:00):
Support for AHLA
comes from Clearwater.
As the healthcare industry's largest pure-play provider of
cybersecurity and compliance solutions, Clearwater helps
organizations across the healthcare ecosystem move to a
more secure, compliant, and resilient state so they can
(00:20):
achieve their mission.
The company provides a deep pool of experts across a broad range
of cybersecurity, privacy, and compliance domains;
purpose-built software that enables efficient identification
and management of cybersecurity and compliance risks;
and a tech-enabled 24/7/365 Security Operations Center with
(00:41):
managed threat detection and response capabilities.
For more information, visit clearwatersecurity.com.
SPEAKER_02 (00:53):
Good day, everyone,
and welcome.
This is Andrew Mahler, the Vice President of Privacy and Compliance Services at Clearwater.
We're here to talk a bit more about AI.
I know there's been lots of discussions, podcasts, conference panels, papers written over the past, of
(01:15):
course, many years, but specifically in the past couple of years as AI has become a bit more ubiquitous within the healthcare space and organizations are becoming a bit more comfortable tackling AI issues.
So really looking forward to the conversation today.
Just taking a quick step back, as artificial intelligence
(01:37):
becomes even more embedded in healthcare operations, and that includes, of course, clinical decision support, administrative workflows, and compliance programs, compliance teams as well as legal and general counsel are challenged with ensuring these tools are safe and fair and in line with existing regulations and rules, as well
(01:58):
as a rapidly evolving legal and risk landscape.
So in this episode of Speaking of Health Law, we'll explore how compliance teams can build effective governance models, monitor legal risks, and prepare for the surge of emerging laws and regulations.
So really excited again to have joining me today Kate Healy, who's a partner with the law firm Robinson & Cole, and
(02:22):
she's also the co-chair of the firm's artificial intelligence team, as well as Rob Martin, Associate General Counsel at Mass General Brigham.
So I'd just like to say hello.
Welcome.
Welcome to you both.
And thanks for joining today.
SPEAKER_00 (02:37):
Thank you for having us.
Great to be here with you.
SPEAKER_02 (02:39):
Perfect.
So let's just go ahead and jump in a bit, and hopefully we can make the conversation a little more fluid and tactile. I know you both will have some really interesting insights and experience to share with the listeners.
So I would like to just start off by asking you a bit about AI governance.
(03:01):
You both have spoken on this topic before, and I noticed that you all have spoken about the FAVES principles, so fair, appropriate, valid, effective, safe.
How would you think about, or how would you advise somebody advising a healthcare organization about building a dynamic AI oversight framework?
(03:21):
Of course, while you're helping balance robust governance with cutting-edge applications, thinking about tools like ambient clinical documentation and radiology imaging.
SPEAKER_04 (03:35):
Andrew, I think that the first thing that I would say is: great, you should have a governance structure, since that seems to be the starting point for lots of conversations.
Do we need to put something in place?
So I think the answer is yes, for a variety of reasons.
And then I think the specific...
nature of the governance structure that is put in place
(03:56):
for any particular organization really should be scaled to the size and complexity of the organization and the potential use cases for AI.
Obviously, the bigger the organization, the more complicated the organization, the more complex a governance structure you might want to have in place.
I would also recommend that the group that comprises your
(04:20):
governance committee, at least at the top level, be multidisciplinary, right?
It should have representation from the business side.
In a healthcare organization, you're going to want the clinicians there.
Mass General Brigham has a significant research enterprise.
So you need some researchers involved.
You need finance folks.
You need the administrative leadership.
(04:42):
And we've got lots and lots of capable folks on the digital team, clinicians who are also very tech savvy, and researchers who are tech savvy as well.
So all of those groups should be represented. And then we also have participation from clinical quality and safety folks as well, so we make sure we're getting perspectives from
(05:02):
all of those folks in addition to the compliance and legal and the risk management folks. So I think that, from my perspective, is the starting point. And then the second, I think, gating item would be to make sure that that group understands that they are able to take a risk-based approach to a lot of this stuff, right?
Not every use of AI is the same or requires the same level of
(05:26):
review and analysis and tire kicking.
So a risk-based framework to try to tease out high-, medium-, and low-risk models and the attention you need to pay to each one, I think, would be a guiding principle that probably stretches across organizations regardless of the size.
And I don't know if you've got anything else to add from your experience on top of that, Kate?
SPEAKER_00 (05:48):
Yeah, I would really
agree with that.
I think the only other thing I would add is, in addition to ranking them high, medium, and low in terms of their risk, I would also rank them in terms of how common they are and how critical they are to the healthcare entity at issue.
(06:08):
So I think those are kind of the practical points to keep your eye on.
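To make that ranking idea concrete, here is a minimal, hypothetical sketch in Python of how a compliance team might inventory AI tools and order the audit queue by the three factors just described: risk tier, criticality, and how commonly the tool is used. The field names and example entries are invented for illustration, not drawn from either speaker's organization.

```python
from dataclasses import dataclass
from enum import IntEnum

class Risk(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class AITool:
    name: str
    use_case: str           # e.g., "ambient clinical documentation"
    risk: Risk              # assigned by the governance committee's review
    uses_per_month: int     # how common the tool is in practice
    mission_critical: bool  # how critical it is to the healthcare entity

    def review_priority(self) -> tuple:
        # Highest risk first; criticality and usage volume break ties.
        return (-int(self.risk), -int(self.mission_critical), -self.uses_per_month)

inventory = [
    AITool("cafeteria-checkout", "vision-based checkout", Risk.LOW, 8000, False),
    AITool("ambient-notes", "ambient clinical documentation", Risk.HIGH, 12000, True),
    AITool("coding-assist", "billing/coding suggestions", Risk.HIGH, 5000, True),
]

# Audit plan: work the inventory from the top down.
for tool in sorted(inventory, key=AITool.review_priority):
    print(f"{tool.name}: {tool.risk.name}, critical={tool.mission_critical}")
```

A sketch like this is only as good as the intake process that feeds the inventory, a point Kate returns to later in the conversation.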
SPEAKER_02 (06:15):
Yeah, thank you
both.
I mean, I hear a lot about, you know, internal communication, you know, the importance of having good internal communication practices.
You know, I think that's just really vital.
And I don't know, for the two of you, I mean, lately when I
(06:35):
sort of look at enforcement actions, or I look at some new rules that are being discussed or set up by states or even within the federal government, I see more of an emphasis on senior leadership and stakeholder involvement.
(06:56):
So like executive involvement, board-level involvement in some of these questions.
So I think some of those thoughts really can't be underscored enough. Just curious, you know, for either of you, anything for, you know, maybe attorneys that are listening that are thinking, well, I wonder if there's
(07:18):
certain things, you know, when we're thinking about setting up a governance structure, are there certain things we should avoid? I don't know if either of you feel comfortable talking about, you know, some things you've seen or maybe just hypotheticals that you sort of worried about or kicked around that involve, you know, some lacking
(07:39):
governance issues or some gaps that you might be willing to share to help folks, you know, think about how to avoid those things going forward.
SPEAKER_04 (07:49):
I think from my perspective, working in a large healthcare system, the biggest challenge in that space, with the various, you know, kind of AI use cases, because they're everywhere right now, right? It's not a single AI project; basically everything you do has some AI component
(08:09):
now, is to make sure that folks understand that there is a process, and there's a review process and a way to get it reviewed and approved.
Because the biggest issue is to make sure, A, that the business case has been vetted, right?
Is the use of AI better than doing something that's an
(08:29):
alternative, or not doing it at all, right?
You don't wanna do AI projects just for AI's sake, just for the sake of doing it.
And then of course, anything in the patient care or research space, you wanna make sure that it's valid, safe, and effective, a couple of the FAVES principles above.
So the governance committee and any kind of operating folks
(08:50):
under the governance committee really play a key role in making sure that that happens.
Somebody should look at the model to make sure that, A, it works, B, that it's safe, that it's effective, that it's sustainable.
And then also, once it's been implemented, you need to make sure that all of those principles continue to hold,
(09:12):
right?
Because these models change, these models drift, the use cases change, and without a robust governance function that deals with the pre-implementation, the implementation, and then the post-implementation monitoring, lots of stuff can fall through the cracks.
So you want to make sure that everything is getting into the funnel at the front end so the right folks can put eyes and
(09:33):
ears on it, and also to make sure that the post-implementation monitoring is happening so you don't find out about a quality and safety issue too late that's caused by one of these things.
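Rob's point about models drifting after deployment maps onto a standard monitoring technique. The sketch below is a minimal, illustrative Python example of the population stability index (PSI), one common way to flag when a deployed model's score distribution has shifted away from its validation-time baseline. It is offered as a generic example, not as a description of Mass General Brigham's actual process, and the thresholds are conventional rules of thumb.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare production scores ('actual') against the validation-time
    baseline ('expected'). Rule of thumb: PSI < 0.1 is stable,
    0.1-0.25 warrants watching, > 0.25 warrants investigation."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf  # cover the full real line
    e_frac = np.histogram(expected, cuts)[0] / len(expected)
    a_frac = np.histogram(actual, cuts)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)  # guard against log(0)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

# Example with synthetic risk scores: baseline vs. a drifted month.
rng = np.random.default_rng(0)
baseline = rng.beta(2, 5, 10_000)    # scores logged at validation
current = rng.beta(2.6, 5, 10_000)   # production scores after drift
psi = population_stability_index(baseline, current)
if psi > 0.25:
    print(f"PSI={psi:.3f}: significant drift; escalate to governance review")
```

Runs like this, scheduled over each model's production logs, are one way to operationalize the post-implementation monitoring Rob describes.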
SPEAKER_02 (09:43):
Yeah, thanks, Rob.
And I mean, I think that's a perfect segue into sort of my next thought and maybe follow-up question.
I noticed from the presentation you both gave earlier this year, really an emphasis on safety, bias, transparency as some critical concerns.
And like you just mentioned, Rob, it's really important that
(10:05):
there's some sort of monitoring and view into what's happening in these tools and systems to really understand whether or not these risks are present.
How would you, sort of, either of you, how would you think about advising internal audit and compliance teams about how to
(10:26):
effectively assess these AI systems for things like bias, accuracy, transparency?
SPEAKER_00 (10:32):
I can start on that
one.
I think there are a number of different ways that audit and compliance teams need to assess for bias, accuracy, and transparency.
I think if a healthcare entity is involved in developing or generating datasets for the AI model, it's really important
(10:54):
that those datasets reflect the actual patient panel, including the diversity of the healthcare entity's patient population, because I think if the data that's going into the model contains bias, then anything generated by the model will likely contain bias too.
I think compliance teams need to try to use a variety of data
(11:17):
sources and assess the sources' accuracy and reliability.
And that means including inclusion and exclusion criteria.
I think compliance teams also need to talk to their vendors about what criteria they use to select data sets and assess whether those criteria mitigate the risk of bias.
(11:43):
I think compliance teams additionally need to discuss the algorithm development and validation process with their vendors.
For example, does an independent team try to identify potential biases in the algorithms?
How do they work with or identify the representation of
(12:08):
minority classes?
And then I think another important way to assess for bias and transparency is by testing the tools live in the clinical
(12:31):
environment.
So that's often called sandboxing, where compliance teams might sandbox an AI model and tool and run it without it having an effect in the patient care context, to see how it works.
And then I think there's also kind of the post-utilization of
(12:54):
the AI tool and that monitoring.
And there I think compliance teams can also review criteria for when and how the AI model is deployed with patients.
How are the model predictions reviewed, for example?
Is there a threshold data set to ensure that a model is not used
(13:18):
to make predictions for a patient population that doesn't have sufficient training data behind it?
And then I think models need to be regularly maintained and reassessed to protect against new biases that might crop up.
Rob, do you have anything else?
SPEAKER_04 (13:41):
No, I think that's great, Kate.
I think that is a key component of certainly anything in the clinical space, right, is to make sure that your quality and safety folks have eyes on the model before it's deployed, while it's being deployed, and then afterwards, to make sure that things haven't changed over time.
So that's just an added piece to the normal kind of internal
(14:04):
audit and compliance teams, I think, is the clinical piece as well.
But otherwise, I think I 100% agree.
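Kate's sandboxing idea, running a model in the live environment without letting it affect care, pairs naturally with the bias checks she describes. Below is a minimal, hypothetical sketch of what analyzing a shadow-mode log might look like: the model's suggestions are recorded alongside clinicians' actual decisions, and agreement is broken out by subgroup to surface potential bias. The column names and data are invented for illustration.

```python
import pandas as pd

# Hypothetical shadow-mode ("sandbox") log: the model scores each encounter,
# but its output is hidden from the care team, so it cannot affect care.
log = pd.DataFrame({
    "model_flag":     [1, 0, 1, 1, 0, 0, 1, 0],  # model's suggested flag
    "clinician_flag": [1, 0, 0, 1, 0, 1, 1, 0],  # what the clinician actually did
    "sex":            ["F", "M", "F", "M", "F", "M", "F", "M"],
})

log["agrees"] = log["model_flag"] == log["clinician_flag"]

# Overall agreement with clinician judgment, then broken out by subgroup;
# a large gap between subgroups is a bias signal worth investigating.
overall = log["agrees"].mean()
by_group = log.groupby("sex")["agrees"].mean()
print(f"overall agreement: {overall:.0%}")
print(by_group)
```

In practice the log would come from the EHR integration rather than a hand-built DataFrame, and the subgroups checked would follow the entity's own patient panel, per Kate's earlier point about representativeness.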
SPEAKER_02 (14:11):
You both mentioned,
you know, you sort of talked
about risk framework heat maps earlier.
Just curious for you both, you know, as you're thinking about, you know, auditing and monitoring of, you know, these sorts of tools and applications and systems, do you have a, I know this is maybe, it's more like a wish
(14:34):
list question than maybe a real question.
Do you have thoughts about what systems should be looked at first in the world of AI within healthcare?
Are there places that you're going to go immediately to examine AI systems versus others?
SPEAKER_00 (14:57):
Yeah, I can start.
I mean, I think Rob hit the nail on the head early on when he said, you know, take a risk-based approach.
I think different entities will have, you know, different tools that are high risk.
But I think starting with those high-risk tools is really important, and ranking them so that you're
(15:19):
clear, as Rob mentioned, on the high-, medium-, and low-risk tools.
And then, as I said, I think focusing on what's used most often to make sure that you've really got that covered.
One of the things that I always worry about is that there are AI tools that are in use that aren't part of the inventory of
(15:44):
AI tools.
And so they slip through and nobody's really looking at them.
That's what I would say.
Rob?
SPEAKER_04 (15:55):
Yeah, I agree.
I think the way that I tend tothink about this is almost a you
know it when you see it type ofthing, which probably isn't
always a great thing for alawyer to say.
But there's AI everywhere in theenvironment today, right?
So I went to lunch and ourcafeteria here uses AI.
(16:17):
I put a salad and a Diet Pepsidown on a platform and then 10
seconds later pops up that I hada salad and Diet Pepsi and I owe
$10 for lunch, right?
Those types of uses of AI in theenvironment are really low risk
and No one's going to prioritizethose.
For us, anything in the patient care setting that could touch patients, impact patients, have any kind of impact on the care
(16:40):
we deliver would be on the high-risk side.
And then there's also high-risk, sort of, non-patient care but compliance issues, right?
You know, billing, coding, those types of things, eligibility checks for insurance are also on the higher-risk side as well.
SPEAKER_02 (16:55):
Makes sense.
Makes sense.
Yeah.
Thanks, thank you both for sharing those thoughts.
You know, I've been a part of a number of discussions over the past couple months as people are thinking about rules like the information blocking rule, 21st Century Cures Act, HIPAA, of course, and ways in which HIPAA provides certain vehicles and
(17:19):
methods for people getting access to data, as well as required safeguards.
And something that's come up, I've heard in conversations with other attorneys and compliance professionals, is sort of the question of where does sort of the medical record conversation lie in the discussion of AI and maintaining data within AI
(17:42):
systems?
So I'm just curious, and this probably is a question that kind of overlaps with what I just asked, so my apologies if this is a bit duplicative, but curious about your thoughts about these laws like the 21st Century Cures Act and HIPAA, how they intersect with AI deployment, like chart summarization and radiology
(18:02):
imaging, and are there some other areas we haven't discussed where you sort of see some big compliance questions or issues around this sort of idea of AI within, you know, 21st Century Cures, HIPAA, and other similar rules?
SPEAKER_04 (18:20):
I think from the health system perspective, obviously HIPAA, the transparency and interoperability rules, and then lots and lots of other legal requirements have been sort of front and center for us in terms of the evaluation and the implementation of AI.
Mass General Brigham was a heavy, and has been a heavy, adopter of
(18:42):
the ambient clinical documentation that you mentioned earlier, and a core part of that kind of review and analysis and some of the intake and governance work was heavily focused on the privacy and security of the data that the models kind of process, and also the output that's generated, kind of where is it, how is it protected, how long does it
(19:05):
remain in those AI systems.
So I think all of those legal requirements that you talked about have to be part and parcel of the governance function.
On the transparency side, this hasn't been as clearly defined, I guess, globally as the HIPAA privacy and security
(19:27):
requirements and some of the other things.
But we have, as a principle, leveraged lots of the requirements that were in the 21st Century Cures Act and other things that flowed down, mostly in the electronic medical record space.
There were lots of requirements on those vendors about
(19:47):
algorithms embedded in the product, and transparency and sourcing of the models and how they were trained.
And I think that in many respects has become a best practice even in areas that were not technically subject to that regulation.
So I think it became sort of a guiding principle.
(20:08):
The biggest issue, I think, on the various laws and requirements now is that folks have trouble determining exactly where things are going to go.
There's been a lot of movement in the space, including lots of movement about whether AI is going to be heavily regulated or not.
I think six months ago, folks were hoping that there
(20:30):
might be a federal agency that might step in and bring a little bit of order to the chaos.
Right now, you've got lots and lots of different entities playing in the space, which makes it really hard to manage.
I think that's less likely to be the case now.
So it's a little bit challenging to figure out where the legal and regulatory focus is going to be, other than there seems to be
(20:51):
a core principle to avoid any kind of regulations that would stifle innovation in the space, which obviously nobody wants to do.
SPEAKER_02 (21:00):
Yeah.
Thanks, Rob.
Really appreciate it.
Again, going back and thinking about the presentation you all gave, again, I think this was maybe back in February, it highlighted HHS's AI strategic plan, the FTC's focus on deceptive AI claims.
I know, back to your point, Rob, I mean, I think there was...
(21:26):
Certainly a different conversation about where this was headed six, four months ago than maybe where we are now.
Curious to hear from you both, though.
I know this is a question that comes up a lot.
How are federal agencies, whether it's HHS or the FTC, maybe even state AGs if you have experience there, signaling
(21:47):
priorities around AI compliance?
And I'm assuming it's really around privacy and consumer harm, but I know there's certainly other items as well.
SPEAKER_00 (21:57):
Yeah, I think just for historical context, I'll start out.
You know, a number of the agencies really signaled priorities before the current presidential administration took office.
And then when President Trump took office, he imposed via
(22:18):
executive order a regulatory freeze.
So I think, as Rob mentioned, it is somewhat hard to discern where the focus will be, at least on the federal level.
I think the FTC, for example, has indicated that it's
(22:40):
sorting through where the focus will be.
And I think in terms of HHS, it really signaled through the HHS strategic plan that it would promote adoption of AI tools so long as they're safe, effective, and conform to
(23:02):
ethical guidelines.
And so there was that principle-based approach that was at its focus.
And I think, you know, we're not sure where that will go.
HHS also did publish a notice of proposed rulemaking that would,
(23:22):
you know, strengthen cybersecurity protections around ePHI under the HIPAA Security Rule.
And I think we have read and reviewed those developments.
Again, they're part of the freeze, but I would say that we
(23:44):
did see that, instead of directly regulating AI systems and tools, HHS sought comments about how to approach these new technologies.
And then it also would impose some changes under the Security Rule that would include things like removing the distinction
(24:05):
between addressable and required safeguards, and requiring routine review and testing of the effectiveness of security measures, including encryption as a standalone technical requirement to protect data at rest and in transit.
So I think those are certainly areas of focus.
(24:26):
I expect an emphasis on data privacy and security to continue.
And I think we're just waiting to see how that will emerge.
SPEAKER_02 (24:42):
Yeah, thanks, Kate.
And I mean, I think, you know, we sort of had that news, I think it was earlier, maybe last week, around, you know, the House sort of moving forward to try to restrict, you know, at least advance bills that would restrict, you know, states' ability to regulate AI.
(25:04):
I think, I mean, I think back to, you know, to your point, we saw a lot of signaling, and it will be interesting to see how things continue to develop, you know, as we move forward this year.
And I think, thinking about moving forward and just trends
(25:24):
in general, you know, curious to hear from both of you, you know, what trends are you seeing, you know, outside of anything we talked about, that will shape AI compliance in healthcare, you know, in the next couple of years, or are there certain areas you're seeing as key battlegrounds, you know, within the compliance and sort
(25:45):
of legal framework that, you know, folks may want to be thinking about.
I know you both have just a wealth of experience in this.
SPEAKER_00 (25:54):
I can kick it off,
although I'd love to hear what
Rob has to say too.
I mean, I think there's gonna be increased focus on algorithmic fairness and bias.
I think that there will also be added emphasis on transparency, you know, particularly with tools that affect clinical
(26:18):
care.
So I think as patients know that AI tools are being used, they're really going to want to understand how the systems arrive at particular conclusions and become, you know, more engaged on those issues.
I do think we're going to see a continued emphasis on data
(26:40):
privacy and security, because AI involves such large amounts of data.
And I think there's a lot of concern about unauthorized use and disclosure, as well as, really, the laws being somewhat dated, because AI tools now have the potential to
(27:05):
re-identify protected information in ways that the laws really didn't contemplate.
I also think there's going to have to be some kind of shakeout with the patchwork of state laws that are being developed.
You know, I'm hopeful; I'd like to see a comprehensive
(27:27):
federal law.
I think we saw with respect to confidentiality laws many, many years ago how difficult it was to have a patchwork of state privacy laws, and healthcare entities had to really navigate
(27:48):
multiple states' laws.
And I think that's even more challenging in the AI context, because unlike the confidentiality law context, where the law of the state where the disclosure was made controlled, you have AI tools that are utilized in many different states, and particularly in New England,
(28:11):
I think it's very challenging for healthcare entities to try to comply with the whole patchwork of state laws as we see them evolve and develop.
I'll pause there.
SPEAKER_02 (28:26):
Yeah, thanks, Kate.
I don't know, Rob, any-
SPEAKER_04 (28:32):
Yeah, I think just two things jump out to me.
organizations face with thepatchwork of state laws or even
new laws that become apatchwork.
Most of those regulations in theAI space seem to be focused on
(28:54):
transparency and disclosure.
And I guess one concern that Ihave is how useful the
disclosure is or can be in allcases.
We're at the point now that it is virtually impossible to receive care at a healthcare institution without AI involved
(29:15):
in that process, either on the administrative side or elsewhere. So if we're not careful, you end up with legal requirements or obligations sort of flooding folks with information that won't really be that helpful to them, right? You know, if you get to the point where somebody says, you know, I'd like to receive care here, but I don't want AI involved anywhere
(29:38):
in my care, you're not getting an MRI, you're not getting a CT scan or anything else, which I don't think is the point of all this.
The other, I think, significant legal and compliance battleground going forward is going to be tied, I think, to the capabilities that certainly are there, or just about there, on
(30:02):
the AI side.
We're not far away from some really, really sticky conversations about whether the AI is no longer just a tool aiding the clinicians to document clinical encounters or to provide some recommendations for them to consider.
I mean, it's hard to go to a conference in this space now where somebody doesn't float the idea of AI actually treating a
(30:26):
patient a few years down the road, right?
I think somebody from CMS actually may have made that comment publicly a couple of weeks ago as well.
So I think once you get close to that line, it starts to raise a whole host of issues, right?
Medical device regulation, patient care, quality of care,
(30:47):
practice of medicine stuff that I don't think anybody has their hands around yet.
SPEAKER_02 (30:52):
Yeah.
Yeah.
It's, I mean, I think there's,and I've heard some of the same
comments, Rob, I think it's, youknow, there's, there are a lot
of really fantastic, amazing AItools that exist.
And some of the tools thatexist, I mean, and you both
touched on this, I mean, there'salready tools that kind of
stretch the imagination a bit interms of what we can do around
(31:14):
patient care.
And then there's other thingswhere it's a lot of, you know,
wouldn't it be great if, orwe're probably headed in this
direction, but just don't quiteknow yet.
So I think, I mean, this has been a fantastic conversation.
I mean, I've really, really enjoyed talking with both of you.
I'd like to leave you both with maybe any final thoughts from
(31:35):
either of you, you know, in terms of what we talked about, or if you have, you know, sort of words to leave folks with about, you know, how people should be protecting healthcare organizations, preparing for the future.
I'll just turn it back over to you for final thoughts.
SPEAKER_00 (31:55):
I mean, I would say one thing I would mention is just engage consistently on AI governance and really invest in a really strong AI governance program and personnel; develop those internal resources, identify external resources.
(32:20):
I think, as we all know, AI is here to stay, and I think the regulations and laws that govern its use are emerging and are going to be here to stay as well.
SPEAKER_04 (32:33):
I agree with that, Kate.
I think that the approach that I've landed on is that the technology is here to stay, right?
It's not a question of whether or not to use it, but how best to use it.
So you're not going to stop people from using it.
(32:53):
It is everywhere.
You can't pick up a phone or any other device or do just about anything without an AI component to it.
So it's accepting that and figuring out the best way to utilize it in an organization, not trying to hold back the ocean and pretend it's not coming.
SPEAKER_02 (33:10):
All right.
Well, thank you both.
Kate and Rob, as I said, I mean, I think this has been a, I've really learned a lot listening to both of you.
Really appreciate all of your insights and the discussion.
And thanks, of course, to the audience as well for listening today, and hope folks found this helpful.
You know, please don't hesitate to reach out if there's
(33:32):
questions or follow-up thoughts.
But otherwise, Rob, Kate, thanks again, and hope to maybe talk with you both about this sometime in the future.
SPEAKER_01 (33:42):
Thank you for
listening.
If you enjoyed this episode, be sure to subscribe to AHLA's Speaking of Health Law wherever you get your podcasts.
To learn more about AHLA and the educational resources available to the health law community, visit americanhealthlaw.org.