
October 17, 2025 51 mins

Shalyn Watkins, Associate, Holland & Knight, and Selena Evans, Founder, Ara Governance Advisory, discuss how health care attorneys can advise clients who seek to implement, develop, or use digital health. They cover implementing policies and procedures, updating data privacy and security policies, understanding professional board requirements, staying aware of litigation, and inserting critical judgment into artificial intelligence (AI) outcomes. They also discuss the biggest mistakes they see in the industry when clients are implementing or leveraging AI, the interests being weighed when discussing AI implementation, common pitfalls, and ethical issues. Shalyn co-wrote an article for Health Law Connections magazine about this topic. From AHLA's Health Care Liability and Litigation Practice Group.

Watch this episode: https://www.youtube.com/watch?v=mdaz1gw9Vog

Read the Health Law Connections article: https://www.americanhealthlaw.org/content-library/connections-magazine/article/8508ed3c-e3bd-4e60-9a30-0a74e89e805d/Navigating-the-Conflicting-Interests-of-Digital-He

Learn more about AHLA’s Health Care Liability and Litigation Practice Group: https://www.americanhealthlaw.org/practice-groups/practice-groups/health-care-liability-and-litigation


Essential Legal Updates, Now in Audio

AHLA's popular Health Law Daily email newsletter is now a daily podcast, exclusively for AHLA Premium members. Get all your health law news from the major media outlets on this podcast! To subscribe and add this private podcast feed to your podcast app, go to americanhealthlaw.org/dailypodcast.

Stay At the Forefront of Health Legal Education

Learn more about AHLA and the educational resources available to the health law community at https://www.americanhealthlaw.org/.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_01 (00:04):
This episode of AHLA Speaking of Health Law is brought to you by AHLA members and donors like you. For more information, visit americanhealthlaw.org.

SPEAKER_00 (00:16):
Hi everyone, my name's Shalyn Watkins. I'm an associate at Holland & Knight in our healthcare regulatory enforcement practice group. And I'm so excited to talk to you today about a recent AHLA Connections article titled Navigating the Conflicting Interests of Digital Health Innovation and Business Advancement:

(00:36):
Five Tips for Healthcare Lawyers Advising Clients Interested in Digital Health. So that was the biggest mouthful of all time. I co-wrote this with two of my colleagues, Mason and Harshida. And I'm super excited to talk about it with none other than my friend Selena Evans. When we were talking about doing a podcast, I was like, who can

(00:57):
talk about this? And then immediately your name came up. So Selena, you want to introduce yourself and tell us who you are and what you do?

SPEAKER_02 (01:04):
Thanks, Shalyn. I'm super happy to be here. And the article is fantastic. So for any listeners who haven't read it, be sure to, because it's full of great nuggets that I doubt we'll be able to cover entirely in this podcast today. So yeah, I'm Selena Evans. I run a firm called Ara Governance Advisory. I specialize in change and transformation in highly

(01:27):
regulated spaces.

SPEAKER_00 (02:06):
Yeah, and I think just by way of summary of the article, which was sponsored by AHLA's Health Care Liability and Litigation Practice Group. Just kind of by way of summary, you know, we go into pretty good detail about describing what digital health is, which I think is the biggest problem in the whole article and

(02:27):
in a lot of what we do. Selena, I know you always have the funnest answer to this question. But I think it starts with the reason why this is becoming such a hot topic: digital health, the term, encompasses so much. So when we as lawyers are coming together to try to advise our clients, you're picking up some people who are doing something

(02:49):
as simple as just telehealth, to people who are creating whole pieces of advanced technologies that are going to be run by AI and are gonna help with diagnostic tools and things like that, right?

SPEAKER_02 (03:01):
Yes, and how these things all fit together. You know, we tend to want to segment things off into buckets, but really what is being developed around digital health is an ecosystem of providing care. And because of that, the relationships between all of these things are really important for us to consider.

(03:23):
You can't just put, you know, a digital device, a wearable device together with telehealth, and kind of put that chain together without thinking through what the whole package looks like, what that means for the patient, what that means for your regulatory obligations. So, yeah, it's a quagmire.

SPEAKER_00 (03:41):
Right. And I would assume that, based on what you do, you know, the key component of ensuring that this new age of AI is properly integrated in healthcare is corporate governance and governance policies related to the implementation of AI, which I think was kind of the crux of

(04:02):
what we were getting to in our article, just talking about, you know, the five biggest things we're seeing for our clients right now. You know, you're gonna have regulatory obligations. They're on many different levels. It's not just, you know, what it takes to put together and get a piece cleared by the government, but

(04:22):
it's also the provider liability that's, you know, associated with leaning on the understanding of AI instead of also putting critical pieces of your own clinical judgment into patient outcomes. And I think that's what makes this such a sticky but fun topic.

SPEAKER_02 (04:40):
Yeah, I couldn't agree more. And, you know, it's also very difficult because there's, you know, a jagged adoption curve. You know, a hospital system can't just roll out AI and be like, okay, we got the policies. Here we go, everybody. Go have at it. You know, there needs to be training, there needs to be

(05:00):
dialogue, there needs to be learning and understanding. And all of it is changing at such a rapid pace that the governance becomes not just the policies and not just, you know, the committees and these kinds of things that we think about in traditional ways of thinking about governance, but, like, how do you make decisions

(05:21):
around this? How do you prioritize these investments and the actions against them in a way that can align your regulatory requirements with your operations, with, you know, the protections that you need to ensure that you're not creating liability for yourself downfield? And it's a real challenge, but it begins with

(05:41):
that decision making, and it's very multidisciplinary and cross-functional. And that is always a challenge for organizations to navigate.

SPEAKER_00 (05:50):
Right. And I think in the beginning of this, as a lawyer, half the time the easiest thing is just to tell the client to avoid the risk and not use it. But at this point in time, there's not a world where we're going to be operating without the AI. For so many reasons and so many different facets of what our clients are doing, they're gonna need it. I mean, I think the area where I've seen the adoption

(06:12):
happen the most quickly was in insurance claims, right? Being able to just parse out and see large amounts of data about what's being billed for or not, the AI review process has been critical. It was kind of the first time I saw things happening on a large scale. And now, when we're talking about getting to the point that, you

(06:35):
know, every type of healthcare provider, or even some of the startups I work with, is going to be using some form of AI, it's like the business outcomes outweigh the risk of getting into this very controversial area now.

SPEAKER_02 (06:53):
Yeah, absolutely. And, you know, if larger organizations are not going to take the plunge to figure out how to navigate this new technology, they'll be disrupted in other ways. So there's a lot of strategic risk around all of that and around the difficulty in navigating these things. And, you know, one thing that occurred to me as you were

(07:17):
talking is we're using AI internally within an organization and in products. And the way that those things overlap and the way that they fit together is, I think, a really, really interesting subject that not a lot of companies are taking up. And I think that that's a real pitfall, because it matters

(07:38):
significantly. You don't want to necessarily take a large language model and leave a regulatory decision up to that large language model. They're not built for determinative solutions. So you have to think about the way you're using it internally for your regulatory processes and then also the way that it's embedded in products. So, yeah.

SPEAKER_00 (07:58):
Definitely. And as a counterpoint to that, even our smaller organizations, they're not gonna really have much of a choice, right? Even if we pretend most everything was out of the realm of possibility as far as associated risks, if you're billing insurance, or even if you're collecting large amounts of data and you're using AI in any form or fashion, there are gonna be data privacy or health information privacy obligations

(08:21):
that are always gonna be there too. So you can't really get out of HIPAA, you can't get out of state health information privacy laws. As we're collecting and processing information, there's always at least one big bucket that's gonna come back to bite you if you don't think about it.

SPEAKER_02 (08:39):
Yeah, absolutely. And, you know, I think that we don't have a clear understanding of what enforcement will look like. We don't have a clear understanding of what the plaintiffs' bar will look like in terms of products liability enforcement and how they will start to structure cases around artificial intelligence.

(09:01):
But the laws don't go away. They're still there. So there's gonna be new theories and new things. So I think, if I were to give any advice to any company in this space: do the really hard thinking. Start with the level of your strategy, layer on the different considerations of

(09:24):
your organization, and work together. And just don't rush, don't rush into it. Be thoughtful, be pragmatic, and make sure that you're tackling this thing from multiple angles and multiple perspectives so you can get ahead of it.

SPEAKER_00 (09:40):
Yeah. I think one thing that would be really fun to do is just go through the five tips that we came up with in our article, see what your thoughts are on those tips, and then see if you, because you're a genius, can come up with other big tips that maybe we didn't even fully flesh out when we were thinking through it for our article.

(10:00):
So the first one was basically the importance of implementing policies and procedures. I know you talked a little bit about that, but on a large scale, what types of policies and procedures are we looking at, and why are we looking at that when we're thinking about using AI?

SPEAKER_02 (10:16):
Yeah, gosh, that in and of itself is a very broad topic. Yeah, I mean, of course it's important. The policies and the procedures are where you are going to anchor your operations. And so have that be thoughtfully done, as opposed to just rote and off the shelf. You really need to design them

(10:38):
for your business context and for the way that your company operates, and with a good understanding of your maturity in where you're trying to embed these policies. So the importance of them, I don't think, can be overstated. But I think that there's a real risk in not appreciating

(10:58):
the nuance of an operating context. I mean, you ask AI to create a policy for you, it'll spit something out. But what does that necessarily mean for the structure of your operations, for the capabilities that you have within your organization to be able to execute against those policies?

(11:18):
Do your workflows have to change significantly? You can't just rub some AI on your old policies and hope for the best. You know, there are just so many second and third order consequences to rolling out this technology that need to be thought through from an operational perspective, but also from a policy perspective.

SPEAKER_00 (11:39):
Yeah. And in my early years as a lawyer, I started out as an assistant attorney general in the state of Ohio. And then I went to the U.S. Department of Health and Human Services as assistant regional counsel. So I spent a lot of time working for regulators, and I'll say the first question that gets asked when you're being investigated, even if it's just a normal audit, is, where are your policies and procedures?

(12:00):
Because the absence of the policy or procedure is usually a violation in itself, right? But then secondly, let's say it's not a violation: your failure to comply with your own policies is an indicator of your non-compliance with certain rules or regulations, or your failure to train employees on them. So even if we weren't talking about AI, where we don't know

(12:23):
specifically what enforcement's going to look like, we know that no matter what enforcement looks like, it's going to require some sort of structured understanding internally for the organization. And compliance with your own policies is usually step one to defending against any type of action. And it's not even just to avoid oversight from a regulator.

(12:45):
It's to avoid future liability lawsuits too, right? You know, the easiest thing to say is, hey, we were in line with the standards of operation for this type of product or for this type of use. And if we really simplified it, I talked about HIPAA a little bit earlier, right? You know, operating without a HIPAA policy would make you just

(13:06):
de facto non-compliant. And here we know for a fact, we've seen OCR has put out guidance about the use of AI. I mean, it's not the greatest and easiest to grasp guidance, but we know that there is some oversight coming related to it. And so, therefore, having a policy or procedure helps

(13:27):
protect against really the chance of a violation, if you're all adhering to it.

SPEAKER_02 (13:33):
Yeah. Yeah. And, you know, you raise a good point with guidance coming out. And it is in and of itself iterating. So I think that we do spend a lot of time thinking that you can create a policy and kind of leave it alone. I think we need to get really, really good at the life cycle of

(13:54):
the policies, and making sure that you're keeping up with the changes and getting that regulatory intelligence and the guidance intelligence in, and making sure that you're doing something with that and reacting to that. It's a whole pattern that has been really, really hard for companies to digest. So even just looking at that regulatory intelligence piece:

(14:15):
where do you get the information? What information do you need? What comes in to you? And then what do you do with it? Where does it go? Who's accountable to it? How does it get embedded in the policies? You kind of have to have that whole end-to-end process designed really well to be able to keep up with all of the change.

SPEAKER_00 (14:34):
You're so right. And there's the piece that we know: no client wants to be like, hey, lawyer, please redo my policy or go over my policy. I don't want to pay for this, right? You know, I feel like I just did this. How long is this life cycle supposed to be? I think when we're advising other lawyers, you know, one of the biggest things to think about is, hey, when you see

(14:55):
that new guidance has come out, or you see that there's chatter of something new happening, whether it be on the state level or the federal level, or even some guidance, like the EU AI Act, that we didn't discuss at all in our article. But, you know, even just understanding those kinds of parameters as they're ever changing and being adopted, then

(15:16):
that's the perfect time to reach out to your client and say, hey, I just saw this article, or I just saw this thing pass, or I just saw this new change came into effect. It might be a great time for us to look at that. And I mean, at the end of the day, the client has to make the end call on if they're going to put the time and effort into it. And it's our job to make sure that they at least are on notice

(15:38):
that, hey, it might be time to update those policies and procedures.

SPEAKER_02 (15:43):
Yeah, absolutely. I agree with that. And I also might push even a little bit further than that: each organization should really have a structure around the kinds of guidance that you're looking for, to make sure that that information isn't left to, you

(16:05):
know, a client relationship that may not exist anymore. A company needs to be structured to be able to get in that information, so they know, okay, I'm getting this type of regulatory intelligence from Holland and Knight. I am using this particular information service from Bloomberg, or whoever, whatever it ends up being, to get this

(16:28):
type of information, and make sure that that is robust. Because I think that implicit in the sort of cyclical quality management things that we see in a lot of the AI guidance that is coming out is the requirement that you stay up to date.

(16:48):
And not that it's ever not been there, but we're seeing a lot more focus on it in the regulations. And so I think that getting that process really structured, so that, you know, you can proactively know when to reach out to your counsel, is super, super, super important.

SPEAKER_00 (17:04):
Yeah, I think that's right. Okay, our second tip, which I've kind of hinted at a few times already, was the importance of updating data privacy and security policies. I don't know how you feel about this topic.

SPEAKER_02 (17:20):
Well, it's so important. I mean, okay, because we're lawyers, we talk so much about liability, we talk about policies, we talk about regulations. But the patient is always center in this. That is the mantra: the patient is

(17:41):
center in this. If you have data for your patients that gets compromised due to lax cybersecurity practices, it doesn't do you any good. There's huge reputational damage, all of this. But it also puts your patients at risk in ways where

(18:03):
you're kind of used to thinking about the health risks that come along with these things, but the privacy risk is really big. And AI has a huge attack surface. So cybersecurity rules and practices within a company need to not just exist. They need to be embedded.

(18:24):
And I think that predominantly we've seen a lot of bolt-on... in a lot of instances, privacy groups and security groups that kind of operate in silos. And that sort of thing just can't happen anymore, especially if you have AI embedded in your products and whatnot. Those things need to be continually refreshed,

(18:47):
continually, you know, addressed from a security space. And getting ahead of what the future of cybersecurity looks like is another really important thing.

SPEAKER_00 (18:59):
So yeah, and especially if you're in a place where things are heavily regulated. Let's forget HIPAA for a second. I'm in California, with the CCPA; there's already a robust amount of laws regarding the way that we collect and store data. And if your AI is helping you collect and store that data, and it's keeping any of it, and it happens to be health data, you

(19:21):
know, you're now sitting on a gold mine for any prospective attack. And so I always just, you know... no one wants to be a security official until they realize how important the security officer is. But I know that, at least in Europe, right, there is

(19:42):
a really heavy emphasis on this as a component of AI implementation. And it's arguably the only place that we already have guidance here in the United States, right? We know for a fact that there is regulation on this issue. And we know for a fact that AI has a propensity to store the data in large ways, and a propensity to accidentally

(20:04):
misuse the data, because we're still figuring it out.

SPEAKER_02 (20:07):
Yeah, yeah, absolutely. And with all the broad security regulations that are coming out, I think that we can look to the financial sector around these cybersecurity requirements and the operational resilience requirements coming out of the EU. Because, you know, in the US also, security is a

(20:32):
huge focus of the current administration. And so I think that that will continue to be a really important factor, and, it seems to me, if I were reading tea leaves, would be a place that this administration would look to from an enforcement perspective. So I think that the security regulations, and looking at

(20:53):
them from an architectural perspective and a capability perspective, is very wise guidance for anyone in this space.

SPEAKER_00 (21:04):
Yeah. And lastly on this point, for me at least: we've also seen the plaintiffs' bar has already made good use of this in the past five years, right? When it comes to data and information privacy and security, that has been the easiest way to get sued for a lot of healthcare providers right now.

(21:24):
And I think some of our listeners are probably thinking, well, I don't see how, as long as I just keep all of the information inside. And I know I have to thwart off attacks, but I can insure against the attacks. You know, you're probably thinking it's very unlikely that a lot of this will happen. I think the real reason why you have to think into this space is because, usually, if you're using AI, you're using vendors.

(21:49):
And it's that third-party risk. Right. When you have those additional third parties and everybody has their own security protocols, you know, there's always room for a little... If that risk exists when we're dealing with paper files, it definitely exists when something's just sitting in the cloud, right?

SPEAKER_02 (22:09):
Yeah, absolutely. And, you know, without going down this road too much, the third-party risk piece I think is really complicated. I don't think that most companies do a particularly... I think that they do a really good job onboarding their third

(22:30):
parties and making sure that the requirements are in the contracts and that things are going well out the door to protect against the privacy and security risk. But I think that managing those contracts becomes really difficult, because a lot of times the business will own the contract with the companies. And so, you know, things don't end up percolating

(22:53):
through the legal department in the same way. And so I think that it really does require a fresh look at your third-party risk program, certainly.

SPEAKER_00 (23:03):
It's so funny you say that, because I literally just spoke on a panel at the AHLA Annual Meeting about navigating your vendor contracts for these exact reasons. And we're gonna be doing a podcast on it. So you guys are gonna get tired of hearing my voice on the AHLA podcast, but this is just such an important issue right now that I think all of our clients are seeing. So I love that you're seeing it too.

(23:24):
And I don't feel crazy.

SPEAKER_02 (23:27):
No, you're definitely not crazy. And even think about terminating a contract with a company. You know, we tend to think about the contracts as getting into them, but what about what happens when you separate with a company? Where are your retention obligations, and where are they housed? And mapping your data, and really getting a great handle on... and I know

(23:48):
we're gonna talk a little bit more about data, but your data lifecycle management, which includes your third-party risk, could not be more important.

SPEAKER_00 (23:55):
Yeah. So the next... actually, I feel like we can kind of bundle the next two together. It's understanding professional board requirements and staying aware of litigation. We kind of just hit on a lot of the litigation risks that are happening. And I say to pair that with understanding professional board requirements because I think this is mostly targeted towards

(24:17):
our providers who are implementing AI, right? You don't really see this on, you know, the tech side of things. But, you know, there are also medical malpractice liability issues. You know, there are issues with keeping and maintaining a license for an individual provider or facility if you have

(24:38):
widespread issues with your AI usage.

SPEAKER_02 (24:41):
Yeah, yeah. I would say that board responsibility feels a little bit nascent to me, but I think we're gonna see it be more important, because we are the bridge between patients and consumers, lawyers, and healthcare

(25:02):
providers, you know, alike. So I think that we can expect to see a lot more of this too. And you don't want to be in a situation where you're up against a board, a board hearing, because you played fast and loose with this new digital technology. You know, it's so interesting, because we take such

(25:26):
a hard look at the aspects of things that we roll out within devices and pharmaceuticals and these types of things. The technology and AI and large language models and everything has come really, really fast. And so I think we're playing so much catch-up with how these things should be embedded in those kinds

(25:50):
of more rigorous processes that are out there, without having that front end of the rigor on the AI itself. So I think that creates, again, more liability. But the professional responsibility around that is to slow down, take a breath, figure out what is your context, what is your use case. It is great for a

(26:13):
healthcare provider to use AI to do their meeting notes and everything. I like it. I had a doctor's appointment last week, and it was great. My doctor was fully engaged with me, and it was really super. But I would have been horrified, horrified, if that conversation were to be let loose somewhere where I

(26:37):
didn't intend it. And there's a lot of systems that overlap. So, what are your connections to your systems and all of that? So yeah, landmines everywhere.

SPEAKER_00 (26:45):
I think that's great.

SPEAKER_02 (26:46):
I think... and also enablement. How cool is it that, you know, we can have these kinds of tools? I do, you know, I think we always tend to focus on risks. There's a lot of possibilities that are there too, but gosh, we just need to be careful.

SPEAKER_00 (27:00):
Yeah, I think that's part of what we spoke about in the intro of our article: the fact that we're in this brand new world where there is so much left to be done, and we're continuing to uncover so much as we start to use these tools. And I say it all the time to my friends that we've come a long

(27:20):
way from just having our Apple Watch, right? And my Apple Watch, every time that it updates, is weirdly even more accurate about what's happening in my body. So, you know, I'm very impressed, right? But now we've gotten to the world where the data in my Apple Watch is something I can share through MyChart with my

(27:42):
provider, and they can start to see, you know, what my trends are, and that can help them think about diagnostic tools for me. And I think that that's just kind of amazing. Yeah, absolutely. Absolutely. So that actually gets to the last piece of our advice, which was inserting critical judgment into AI outcomes, which is kind of what we were just talking about, right?

(28:02):
It's like, you know, once the machine spits out information to you, it's not giving you an answer, it's giving you something to help you get to the right answer. Kind of like jump-starting your brain to avoid having to do the 10-million-piece puzzle to get to the right outcome for

(28:23):
the patient or the end user or, you know, whoever you're dealing with, right?

SPEAKER_02 (28:27):
Yeah, absolutely. And I love that you bring this up, because we humans are wired to make sense of what we see and to make quick judgments. And we are influenced by things that we can't always think through.

(28:48):
It's neurological. It's not, you know, you're terrible at your job, or you're careless, or anything like that. We are just wired that way, and we see those things play out all the time. Especially with the large language models, when it comes back with a really confident answer, and will double down on confident answers that may be wrong, we need to really,

(29:10):
really think through our own personal process for how we take in information that comes out of an LLM, and make sure that we're testing our thinking. Because I do it too. You know, I'm looking at AI output, and I'm like, oh my gosh, that sounds magical. And it makes sense, and it sounds so confident.

(29:30):
And all of those things really trigger biases in our minds that make us act without thinking. And so I think that that is a huge part of professional responsibility, but also your own personal care with the way that you handle this technology. It's meant to engage you, it's meant to kind of pull on

(29:54):
those things. So if we don't think about that in the whole ecosystem, we really run the risk of just overreliance on the AI and overconfidence in the technology and these kinds of things. And at bottom, the way that the AI is right now, these LLMs, it's still just math. It's still just words and pattern recognition.

(30:14):
They do not have a stable concept of the world, or your patient, or, you know, these types of things. So the construct of how you use it needs to be pretty well thought out.

SPEAKER_00 (30:26):
Yeah, I think we've joked about this before, but, you know, a lot of people see either we're in the Stone Ages, or we're in those end-of-the-world movies where the robots all took over and we're just that close. And the truth is, we're somewhere in the middle right now, right? We're not in the Stone Ages anymore, but the robots

(30:48):
still can't take over. They still need us to input information; they need us to help them get to the answer. And I think that's the really important point, because, you know, if we rely solely on what the robots understand, sometimes if you kind of read the outcome, the beginning of it makes sense and then the end of it makes sense. But when you put the two things together, they don't make sense, right? But it's still very valuable, because you kind of see where

(31:09):
the missing piece is that got it to those conflicting conclusions.

SPEAKER_02 (31:14):
Yeah, yeah. And, you know... yes, totally. And there's so many aspects of this. So the narratives that come out of big tech that AI can do everything, I think, are really challenging, because we know that large language models are challenging in certain use cases. So just the narrative that they can do

(31:34):
everything impacts our decision making. So we need to be careful on that standpoint. And then, you know, when we think about the... gosh, now I forget where I was going with that. This is why I just get, you know, so distracted. But I think that the point is, the technology needs to

(31:56):
be used for a specific use case. We need to understand the implementation of the technology within the context of what we're doing. We lawyers need to be able to talk to our technology teams about what that looks like, how models drift over time. You can set up, you know, some very well-meaning protocols for LLMs to do things for you, but they can break down

(32:20):
in complexity over time. What are you doing to remind it that it needs to be within a specific context? These kinds of things, I think, are all kind of baked into that huge "we're somewhere in the middle and we haven't figured it out." So we need to be really careful about the way that we deploy these things. And even technologists have a hard time with it.

SPEAKER_03 (32:42):
Oh, yeah.

SPEAKER_02 (32:43):
Like, assuming that we can know all of it is just... it's just not the way it is. We learn things all the time about the capabilities of AI and new forms of AI. Right now we're in, you know, language model land and natural language processing, but, you know, there are new things coming on the scene that show a lot of promise, but also

(33:05):
will carry their own risks with them too. And staying on top of that piece is also important, and knowing that there's a difference between different kinds of AI is an important piece of it too.

SPEAKER_00 (33:16):
Right. And I think, at the end of the day, what we have learned in the industry is our clients want to use this, right? The patients and the end users, they actually prefer the advanced access that they're getting, and sometimes the ease at which they're getting

(33:37):
information and being able to get health outcomes because of this type of implementation. You know, so those two factors alone are enough for people to invest in creating these models and to build out your business to do this, because you will yield more financial gain from it. But I think both parties would still say they are

(33:59):
tremendously terrified of the risks that are associated with this extra access. And so I came up with, and I emailed you these, but, you know, if you don't remember these questions, I came up with a couple questions I had to ask you before our time is over, because I genuinely need to know the answers. And if you think of anything else that you have to tell me... I just waited too long to ask you some of these

(34:20):
questions. So my first one was: what are the biggest mistakes that you are personally seeing in the industry when clients are implementing or leveraging AI, both internally and externally?

SPEAKER_02 (34:35):
Yeah, rushing into it. Rushing into it without doing the really hard thinking about what strategically makes sense. Everything needs to be grounded in what you're trying to build, the context in which you're operating, your market, all of that. And there isn't a technology that can just deliver you a report. It really does require good, hard,

(34:58):
multidisciplinary thinking, and getting your leaders together and cultivating a shared understanding of where you're trying to go. Rushing in, I think... the AI FOMO and everything has been probably the biggest pitfall for nearly everyone. But we're seeing companies that have, you know, been more methodical and slower about the way that they've done this

(35:22):
have some more successes with implementation, which I think is really important.

SPEAKER_00 (35:28):
And I'd assume there's not some perfectly baked timeline. Sometimes it's just about what your rollout looks like. Versus, you know, are you trying to just shove it onto everyone at the same time immediately? Are you doing any trial periods, stuff like that?

SPEAKER_02 (35:43):
Yeah, and is your organization ready for it? Can they ingest and metabolize this amount of change to whatever processes? There is nothing from a technology perspective that can... well, I mean, they can help organizations deal with it, but

(36:05):
the hard work of everything is unique. You cannot have a consultant come in and give you their format and their, you know, policies off the shelf, and, you know, go and implement them. It really needs to be... AI itself is so contextual. There are so many different risks; it both

(36:28):
helps mitigate... it can help mitigate risks, it can exacerbate risks. It's like a double-edged sword everywhere. So really doing the hard work of sitting down, taking the time to digest it, taking the time to put together a methodical roadmap, so that you can have an architecture of how you're going to handle this AI and how you're gonna govern it going forward, is... it's hard work.

(36:53):
Change is hard, and change initiatives fail all the time, and digital transformation efforts fail all the time, and all of this. So we need to be better at how we handle those things and how we think through them. And that requires a lot of different people coming together to be able to make that happen. You need to involve your HR department, you need to

(37:14):
involve your organizational learning organization. And so getting everybody synchronized and aligned and understanding where you're trying to go at the same time is a huge challenge, and it takes a lot of work, but it's well worth it.

SPEAKER_00 (37:33):
I see that even worse in our startup clients, because they're operating on, you know, a lower influx of cash. They just really want to get started, they're really excited, they know this is a big idea, and they're trying to get to market faster than competitors, because this is something new and cutting edge. And it's like, no, you know, you gotta take a step back.

(37:53):
And I think, you know, if we were just talking about this as lawyers, it's really easy when we're talking about our own business and what we're gonna do versus, you know, other risks that are associated with what we're doing. Usually they're just all professional risks that we're absorbing and we're trying to navigate around. We have rules around it, and we just have to follow in the lines of the rules. And so our business can never really

(38:15):
conflict as much, right? Yeah. Here we're dealing with business people who, you know, might have other professionals that are working in their organizations, but the idea is this is supposed to be a profitable venture. And the loss that you're sometimes asking people to take when it comes to digital health in the beginning is much greater than you would in a normal venture,

(38:37):
but it's because your compliance protocol has to be top-notch, because the value of the company is its compliance.

SPEAKER_02 (38:44):
Yeah, trust. You know, when you see all the articles that come out, like, well, if AI can do everything, what's the differentiator? Well, A, AI can't do everything yet, but B, the differentiator is trust. The differentiator is building that legitimacy. And so what I tell startups is, no, you can move fast.

(39:06):
What I mean by "slow down and do the hard work" is not slow down. I mean prioritize the hard work, prioritize the planning, prioritize building the roadmap that you can execute against, prioritize alignment, you know, within your functions, develop a perspective on your critical path.

(39:27):
And then that way you can see where you can start to accelerate things; you can bring in the right levels of expertise to keep you going fast to market and that stuff. So in digital health, we can't move fast and break things, but we can move fast and be really considerate about our end users, the patients, and the viability of our companies going

(39:49):
forward. Because what else matters in digital health other than that?

SPEAKER_00 (39:53):
Right. And so what are the interests that you're generally seeing your clients weighing when they're making some of these decisions? And I know some of the decisions can be hard, but in the process of implementing the AI, what are they generally thinking about?

SPEAKER_02 (40:12):
Again, it really is contextual in terms of the trade-offs. I think that one of the big things that I see, and I think that it is born of the AI FOMO, is the trade-offs that are implicit to the models themselves, just the way

(40:32):
that they operate. In order to keep a large language model on track in the right way, or a small language model for that matter, you have to build architecture around it from a technology perspective. And that can be very time consuming. It can be really brittle; it can require a lot of upkeep. We don't really know a lot about the long-tail

(40:53):
consequences of AI, but people are starting to feel it. So I think that those are some of the bigger trade-offs that people need to make, because with large language models and natural language processing, you're trading accuracy and determinative reasoning to use that, versus traditional machine learning and algorithm-based care, because those things need to go

(41:15):
together, and they need to be able to work together. So those kinds of balances, I think, are really hard in a healthcare context, because we so often want it to be very determinative.

SPEAKER_00 (41:30):
You know, that makes a lot of sense. We talked about mistakes earlier, but what are some of the common pitfalls in this journey that can occur, right? So we know we shouldn't move too fast at the outset, but as we're going through and kind of seeing that this process is taking some time, where are some places that are easy to fall into? Some tricks of the trade you might have.

SPEAKER_02 (41:53):
I think that having the intelligence structure built in... so the way I view governance is that it's largely sense-making and orchestrating, is how I think about it. I think there's a huge pitfall

(42:15):
when you're not getting the right intelligence into your system at the right times. And because AI itself evolves, because the regulations evolve, because, you know, there's so much dynamic movement in terms of jurisdiction and whatnot, I think that that is a really critical piece that can cause so many

(42:38):
downstream pits of despair that we can fall into. So that would be one thing. The trick of the trade is getting that information squared away. And that means your data lifecycle management, what you're feeding it; that means, you know, really understanding the depth of your data quality and your collection

(42:59):
practices. Really understanding all of those things, I think, is really critical.

SPEAKER_00 (43:06):
That's really awesome. So, the last question I have... as I was rereading it to myself just now, I realized we could do a whole podcast on this. We probably could do a whole podcast on everything we've talked about today. But thinking about who I'm talking to, can you just briefly start discussing and hitting on the ethical side of this, right?

(43:26):
Ethics, I think, is your bread and butter. So what's happening on that front right now?

SPEAKER_02 (43:38):
I mean, I think that AI in and of itself... what does it mean to have artificial intelligence? What is intelligence? What does it mean to be human? These are all things that are bringing ethics to the forefront, in ways that we have not done

(44:00):
a great job of, I think. Even discussing ethics has been a challenging thing at a lot of organizations. It tends to feel squishy to people, but it really isn't. There are good models of ethics that you can operationalize against, that you can embed in your policies.

(44:21):
And so the ethical piece to me, and also what gives you, you know, kind of the easiest path to prioritizing your regulatory compliance program, is starting with the patient, and doing what is right for the patient on their journey through your products, on their journey through your

(44:42):
system. And that is really what I think we owe, those of us who are in this space. So we talked a lot about it. Privacy and security has an ethical component to it. We don't tend to talk about it so much, but it's really important. Patient care, and how that interaction works.

(45:03):
You know, I had my experience with my doctor being really wonderful, because she was able to be so engaged. But you could see how a doctor who wasn't as magical as mine would, you know, lean on AI to not engage with patients. And so I think that there's an ethical

(45:23):
responsibility to make sure that we're giving the level of care, the level of concern, the level of conscientiousness in product and service design in digital health. And so I think that AI is a really great opportunity to talk about that more, and also to situate operations around those ethical obligations.

(45:44):
And so if there's a message that I can give to anyone, it's this: no, ethics is not squishy. It's not some amorphous thing. We can get in there, and we can operationalize against it, if we're willing to have the hard conversations and discuss what we should do, not just what we can do.

SPEAKER_00 (46:02):
Right. And I think part of those hard conversations, especially in the digital health world, is understanding the biases that are related to the AI, right? Don't forget who has taught the AI, and what subset of information is the AI always working on?

(46:22):
What information is it also learning along the way? I know, as a Black woman, for example, a lot of the data that's related to any kind of ailment that could be happening in my life is significantly skewed. You could put in my age, you could put in my BMI, you could put all this in. And then if you also add that I'm African American, right?

(46:44):
Some of what our information will actually tell us could be fundamentally different than if you don't add that critical piece into making my care plan. So it's not just about making sure that we remember that those biases exist in the AI, but also, as we're inputting and teaching it along the way, to start encouraging the

(47:07):
AI to recognize those same biases and understand that information.

SPEAKER_02 (47:12):
Yes, and to really dig in and understand where those biases occur for your particular context. You know, say that you're, you know, developing some dermatology-related product. It might be more obvious to care providers that they need to consider that Black people will be using

(47:34):
this product too. And so considering the bias there, but how you train it, what data, what does the clinical diversity look like? I mean, that's one of the things that the FDA has been so strong on: diverse clinical trial recruitment and those kinds of things that can, you know, help and inform all of

(47:56):
this stuff. But then also the biases of... because we have such a history in healthcare of mistrust in our system, for really good reasons, and we're adding this technology that is based on algorithms and not that kind of care.

(48:17):
I think we have an extra responsibility to help people feel comfortable and help people be able to trust the system as well, to encourage them to be able to partake in a different kind of care, so that it doesn't feel extractive and biased and ill-informed for certain parts of the community that are not as well represented in our data.

SPEAKER_00 (48:41):
Right. And it's, you know, thinking about the poverty level of your patient set. It's thinking about, like, my 94-year-old grandfather, who would be like, oh, well, the computer just told him to do that, right? You know, and we hit a little bit on that in the article, where, you know, the information we have shows that a large

(49:05):
set of the population is very, you know, open to the idea of having AI. But there are going to be pockets of the population that are, you know... they're already scared to go to the doctor, they're already scared to put on the Apple Watch, you know. So when you start with someone who might be moving a little bit slower, you're creating that trust, which I think has been kind of the theme of your advice today, which I completely love. So I've completely word-vomited, but what have I missed?

(49:30):
What is left for us to talk about before we let these people go?

SPEAKER_02 (49:34):
Oh my gosh. I, you know, I just think that staying on top of this with the patient in mind is the last thing that I want to leave us with. Whether it is the care around

(49:55):
the patient, whether it is the care around the data with which you feed your systems, whether it is structural care. When things are changing this much, the duty is so heightened, because we can't rely on best practices or formulaic responses to these things, lest we go down rabbit holes or exacerbate problems that have happened before. So ethics first, as always.

SPEAKER_00 (50:18):
Well, thanks so much for talking with me. I'm always excited when we get to speak. Thank you for reading our article. It was great. Thank you. So if anyone's interested, please check out the AHLA Connections archive. And I hope to talk to you again soon on a future podcast.

SPEAKER_02 (50:37):
Thanks, Shalyn.

SPEAKER_00 (50:39):
Bye, Selena.

SPEAKER_01 (50:40):
Bye, all. ...americanhealthlaw.org, and stay updated on breaking healthcare

(51:01):
industry news from the major media outlets with AHLA's Health Law Daily Podcast, exclusively for AHLA comprehensive members. To subscribe and add this private podcast feed to your podcast app, go to americanhealthlaw.org/dailypodcast.