
March 14, 2025 • 32 mins

Based on AHLA's annual Health Law Connections article, this special series brings together thought leaders from across the health law field to discuss the top ten issues of 2025. In the eighth episode, Shalyn Watkins, Associate, Holland & Knight LLP, speaks with Anjali B. Dooley, Senior Partner, DBM Legal Services LLC, about the key litigation risks that are coming to the forefront regarding the use of artificial intelligence. They discuss issues related to accountability, strict liability versus negligence, data lineage and bias, and validation/reliability. From AHLA’s Health Care Liability and Litigation Practice Group.

Watch the conversation here.

AHLA's Health Law Daily Podcast Is Here!

AHLA's popular Health Law Daily email newsletter is now a daily podcast, exclusively for AHLA Premium members. Get all your health law news from the major media outlets on this new podcast! To subscribe and add this private podcast feed to your podcast app, go to americanhealthlaw.org/dailypodcast.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 2 (00:04):
AHLA is pleased to present this special series highlighting the top 10 health law issues of 2025, where we bring together thought leaders from across the health law field to discuss the major trends and developments of the year. To stay updated on all the major health law news, subscribe to AHLA's new Health Law Daily podcast, available exclusively for Premium members at americanhealthlaw.org/dailypodcast.

Speaker 3 (00:35):
Hi everyone, and welcome to today's podcast, where we're talking about the top 10 issues in healthcare that AHLA sees coming in 2025. Today we're talking about number eight, which is "Who Do You Sue? Medical Malpractice in the Age of AI," and I'm speaking with one of the authors of the article, Anjali Dooley, from DBM Legal Services. My name is Shalyn Watkins, and I'm an attorney at Holland & Knight, where I work in our healthcare regulatory practice. I'm also Vice Chair of Education for the AHLA Health Care Liability and Litigation Practice Group, which is hosting today's podcast. All right, Anjali, would you like to introduce yourself and tell us a little bit about you?

Speaker 4 (01:22):
Yeah, sure. Hi , um, my name is Angelie Dooley . Um,
I am a healthcare regulatorycorporate , uh, transactional
attorney. I work for DuncanBergman and Mandel. Uh , um,
we, we call ourselves DBM LegalServices. Um, they're based out
of New York, New Jersey, but ,uh, we're across the nation.

(01:44):
Um, we're, we're fullybasically remote law firm , um,
uh, doing corporatetransactions, but I've been in
healthcare regulatory for nowalmost 20 years. Um, I've
watched a lot of things change.
It's constantly changing now.
Um, and so this kind of, I wasat a party and this article

(02:06):
kind of came to my attention,and I was, I was already asked
to do an article and I waslike, oh, let me write about
this, because I was talking toa doctor friend of mine and she
works for Mayo, and, you know,she gave me some insight on
what was going on. So it was,it was just one of those things
that kind of naturallyhappened, and it was
interesting piece, so I endedup writing about it.

Speaker 3 (02:30):
Yeah, I think that's the trend I've seen in healthcare right now. Everyone's talking about AI, and there's a lot of panic and scare happening in every element of healthcare, right? Be it the provider side of the work, the developers, or the regulations that are not necessarily catching up with the fast pace of the technology. So there are a lot of questions out there.

Speaker 4 (02:52):
Right? Right. There are a lot of questions. I mean,
some are news . I felt likesummer are definitely new
questions because like yousaid, AI and technology is
moving so fast, but some arelike, okay, are we answering
the same question? Just usingartificial intelligence as

(03:14):
another tool? Is it a tool? Isit, you know, like I think we
sometimes treat artificialintelligence as a whole new
person in the conversation.
, like, it's like anactual human being. So
, I don't know. It's, it's veryhard to separate at some points
, but, you know , um, and Ithink that's where, where

(03:37):
doctors are scared. They'relike, oh, are we gonna be , be
replaced by ai? But I, I say,no, you know, and we'll get
into why I don't think that'sgonna happen as well based on
like, negligence, strictliability, all of those
questions that we're gonna talkabout. So

Speaker 3 (03:55):
Definitely. As a side note, I often think that some of the fear we're experiencing comes from the fact that we've been watching robot movies for decades now, and we just worry that the robots can do everything and will do everything. But I think that's kind of the beauty of your article: it talks about the nuances here and how there are different layers of liability that come in with the implementation of AI in healthcare. So let's dig into it for sure. You made your article really easy to read because you sectioned it off, so I wanna ask you questions about each section. First is your section on accountability for AI decisions. When I read it, I was thinking a lot about the fact that we're seeing a lot of state medical boards starting to weave the AI conversation into unprofessional conduct concepts. Even the FDA has had its good machine learning practices and standards for the development of AI. And I wonder, will either of these areas drive up litigation risk for the two groups, the providers or the developers? For example, you have kind of this negligence per se concept: if the regulation says you're supposed to be doing X, does that mean that by default the provider or the developer, by not complying with these regulations or this guidance, would be negligent?

Speaker 4 (05:35):
You know , um, again, I think there's layers
to that question. Um, so let'stake the provider, and I always
go with the physician or theclinician side of it . So , um,
when you're talking about statemedical boards and what they're
implementing in unprofessionalconduct, I think here, here's

(05:56):
where I think think AI shouldbe kind of separated from
clinical. Like ultimately it'sthe physician that's making the
clinical decision making . Um,we have, you know, now we have
Google, let's just say Google.
I as a mother, I do it all thetime. Is Robitussin ,

(06:17):
good with , you know, whatevermedication I'm giving? And
Google gives me an answer, andI kind of rely on that answer.
I , uh, I , I come from afamily of docs though. So my,
both my parents are physicians.
So I call one of 'em up and Isay, Hey, I can I, Google told
me this. And they're like,yeah, actually you can, right?

(06:37):
But ultimately, there no, after50 years of medical decision
making , or 10 years or fiveyears, you're trained on the
human body. AI is trained onthe inputs of people and codes,
right? So AI is not trained onthis human body and what we're

(07:02):
seeing and touching and feelingand what a doctor touches and
feel is , and looks at andobserves and all of those
things, right? So that's asimplistic, like Robitussin,
but there are some things thatare really, really obviously
complex, right? So I thinkmedical boards are saying, Hey,
if you haven't , it's kind oflike digital health and

(07:24):
telemedicine, right? Youhaven't seen this person in
person mm-hmm .
And you're relying on AI togive you this answer. Did you,
why'd you even go to medicalschool then? Right? Like, you
know, that kind of situation.
So I think medical boards aretrying to, trying to implement
safeguards, right? And sayinglike, Hey, clinicians, you are

(07:48):
ultimately responsible stillbecause you are , you are , you
are , if you're only relying onthe ai, right? Right . Yeah .
Now take that another step,right? Then you have the, per
the company that developed aproduct or a instrument or

(08:14):
something that is AI driven ,and they put in all of this
data. So the physician has usedit multiple times, no issues,
things, you know , uh, happen .
Well , that data that was ininputted into this device is
wrong, right? And so thephysician didn't solely rely on

(08:39):
that, but he's used it manytimes. This, you know, this
time it didn't do the rightoutcome that can happen. And so
who's liable in that situationis very different than just
simply saying, I, I, you know,I didn't ask this patient any
questions. He said he had acold, and, you know, I relied

(09:02):
on my, you know, my auto thingon my phone and dispensed him
this medication, and he had aside effect and reaction and
stuff . I don't think anydoctor's that stupid to do
that, but , um, that I know ofanyway, . But, but , um,
but there, there might be, youknow, and, and, and stuff , um,

(09:25):
on that, and I mean, I knowwhat's coming out. They're
saying, I forgot what company'sdeveloping this, but they're
doing a doc in a box where aperson walks into this room
that like, can diagnose what'swrong with you? Well, who,
who's gonna be liable for that,right? Mm-hmm .
It's gonna be the company thatdeveloped that box that you

(09:46):
walk into. It's kind of like aall over x-ray or something, I
don't know. So these are newthings that are coming up. So
it's gonna be interesting as,as we get into more and more of
this, and we litigate how thiskind of plays out and what law,
law is developed. But again,malpractice is local, right? So

(10:10):
it depends on where you live,right? Standard of care is
local, so we haven't even, youknow, what is the standard of
care in that jurisdiction? AndI think that's, these are all
questions that are gonna comeinto, we don't have any
standards. That's the problemis like, we don't have any
standards, right? So that's,this is all gonna be coming

(10:33):
into how we, how medical boardstreat, treat each situation
case by case, how the casesargued in court case by case in
Georgia. It could be one thingin Illinois, it could be
another, we don't know.

Speaker 3 (10:53):
Yeah, I think that's right. It's actually terrifying
to me to hear of the doc in thebox that I'm like, not
gonna get it . A medicallicense. I don't know. Like who
, who , who's even able to sitfor a medical board exam. I
have no idea. You, you'vegotten to gimme a million more
questions.

Speaker 3 (11:11):
Moving to the second part of your article, we talk about strict liability versus negligence, which I found very interesting. In your experience, have there been any examples from past diagnostic tools that might demonstrate who would win in this fight, that is, whether there would be strict liability or whether this would just be a negligence issue? Considering that the tool would have been created by a developer that has marketed its product as something that could be relied upon by physicians.

Speaker 4 (11:47):
Right? So I obviously with strict
liability, I think it would bea product liability case,
right? So I think AI would bethe product right in, in this
situation. So, I mean, instrict liability, it was
defective, right? That's allyou have to really prove. And
it was unreasonably dangerousand, you know , um, and that

(12:09):
defect caused the plaintiff'sinjury . So if that happens,
for example, if I think there'spast case loss in private, but
I like quickly looked upsomething I had to use my
, but I think there's aMedtronic case, Bedo v
Medtronic, and it's 1993,right? And it's strict
liability and product, youknow, pacemaker, right? Yeah .

(12:32):
Yeah. So, so I mean, I thinkwe're going down that path as
is artificial intelligent ajust the diagnostic tool or is
it the decision maker? Likethat's, that's where we are,
right? Like, who's the decisionmaker here? Right. In this

(12:55):
situation, who's trained onthis tool? Like, like, I don't
think every physician should beable to use ai, right? Because
it's maybe used as a diagnostic, is it , you know, used for
diagnosis, but ultimately,who's the ultimate decision
maker ? Is it that tool or isit that doc?

Speaker 3 (13:19):
Right ? I mean , you also bring up the point that,
you know, if you're thinkingabout training of physicians,
right? Maybe the medicalschools right now could be
implementing AI as part of thetraining of, you know , young
doctors. But imagine someonewho's been practicing for 50,
60 years trying to implement AIin their practice. Um, it , it
might give extremely differentresults. I think we're seeing

(13:42):
that even in the practice oflaw now as we try to use AI as
part of our tools, right. Youknow? Right . The lawyers are
just a little bit better atusing it than us older lawyers.

Speaker 4 (13:51):
Right. Well, that's a good point. So in medical
school, they should implementusing very tested like AI
diagnostic tools, I wouldthink. Like it's been tested
and, but how can you have yearsof testing if we don't even
have years of ai? Right? So , so, but also what

(14:12):
we're finding is I'm talking tomy parents are in their
seventies and still working,right? And I can guarantee they
don't use a , my dad refusesto, like, he doesn't even wanna
get on a Zoom. He doesn't dotelemedicine either . ,
so, so, but he is stillworking. And he's like, well,
physicians have to u they wentto medical school. So you have

(14:33):
to use your mind, you have touse your, your skills of
detection, your eyes, your, youknow , what's in there now as a
diagnostic. Maybe we use it tofind rare things that we
wouldn't have discoveredbefore. Right? And that's what
the medical school , but whatif you have medical school
students who are relying on ai?

(14:57):
I mean, we see it in highschool, we're gonna see it
across the board in college andthings. They're literally not
thinking for themselves. Right? They're relying on tools that
are helping them . Right. Andthey're not thinking for
themselves. Like the goodlawyers who use AI think we are

(15:18):
like, okay, here are all thesequestions that I have, but
we're gonna go review thisagain, and then we're gonna
make sure the case law isaccurate and we're gonna do,
we're not gonna submitsomething that is not double,
triple checked . Right? But weneed a starting point, right.
And I think AI is a startingpoint right now in healthcare

(15:40):
versus let's just rely on it.

Speaker 3 (15:44):
Yeah, right.

Speaker 4 (15:45):
Right. Don't you , I mean, I think, I

Speaker 3 (15:47):
Think that's right .
Yeah. I think that actuallybrings us kind of to the third
point in your article andsomething you were talking
about a little bit earlier,which is about, you know, the
data that's being input is alsoan extremely important part,
of this analysis,right? And so you talk about
the data lineage and howthere's this balance that needs

(16:08):
to happen between algorithmicdesign and integrity. So how do
you ensure that the data that'sbeing input is being analyzed
by a non-biased ai? Like doesthis itself, the, the , the
existence of bias createanother litigation risk of some
sort for developers ?

Speaker 4 (16:26):
Developers ? Oh yeah, absolutely. I think we're
seeing it, and I think that'swhat , uh, the Mayos and the
Geisinger and the Kaisers areall all struggling with, right?
And people are getting suedover this, right? There's bias,
and let's take policing like,just like, let's take policing
AI is used , can be used inpolicing or TSA or whatever I'm

(16:50):
flying on this weekend, solike, I'm going out of the
country this weekend. So I'mjust like, what are they
looking like? It , there's biasin that, right? Um, there's
racial bias, there's, you know,all sorts of things then that
is coming into play. And Ithink, like I said in my
article, garbage in garbageout, right? Like if you're, if

(17:10):
you don't input the right dataand figure out what is the
ground to there is going to bebe bad outcomes by using AI and
misdiagnoses in situationsbecause , uh, as an Indian
person, I, I , uh, nationality,I don't have sometimes the same

(17:35):
, we have a higher, like women,we have a higher rate of heart
disease or whatever, right? Somight not, based on my age, my
weight and all this kind ofstuff, it might not be able to
diagnose that properly becausethey don't know my racial
background. Right? And, and ,and a lot of physicians don't

(17:58):
put that racial background ortake into that, into
consideration. A lot of themare not trained on it, right?
Right . So if they're nottrained on it, how is the AI
gonna tool gonna be trained onit?

Speaker 3 (18:11):
Yeah.

Speaker 4 (18:11):
Right. Yeah.

Speaker 3 (18:12):
It's almost, it's almost interesting because the
question is can you teach theAI all the biases to weed weed
out? Because biases very humanand health equity concerns,
even before AI have been, youknow, bringing up these right
questions of , um, the implicitbias that occurs in the
practice of medicine all thetime,

Speaker 4 (18:34):
Right? So like kidney disease is, is , um,
prevalent in , um, the black,black community and , um, if,
if , uh, based on just puredata, it might not be able to
calculate because you have toinput stuff like blackmail at ,

(18:58):
you know, this, and you have toinput those other data points
in there to make the diagnosescorrect. Because not all things
are based on height, weight,and things like that, right?
You know, it's genetics, it'sall of these different things
that don't, don't, they don'tunder , you know, people don't

(19:20):
understand. So I think we'revery far from getting really
accurate, accurate diagnoses. Ithink it's a starting point.
It's basically my Robitussinsample question
, right? Does Robitussin have a, you know , um, si like, can
you take Sudafed and rot ?

(19:41):
That's like my, my questionevery single time I forget the
answer is, can you take Sudafedand Robitussin together? You
know, because I don't want anything to happen to
my kid or something like that.
But I think that's where we areat . We're at that level and
we're not quite at the levelsthat we should be. And that
means that there's still somany questions to be answered,

(20:05):
right? I mean, I can giveanother example with
dermatology. You know, how ,um, for darker complected
people , we still get skincancer, right? We can still get
, we can still, we still needto wear sun , sun protection
and, and stuff, but AI doesn'tknow that necessarily because

(20:25):
they're looking just at thecamera that's taking the
picture. And maybe like, okay,well, you know, they have a ,
but there is a risk, you know,and things like that. So
there's just certain thingsthat I just don't think we're
quite , um, ready for. And ,um, you know, so there's still

(20:49):
so much to do and there's just,again, no standards for this,

Speaker 3 (20:53):
Right? Well, I think that comes to the piece on
validation, which, I mean, Imay , I may be creating more
issues by asking the question,but , um, you know, there are
some people who would say thatthis bias is gonna exist
whether or not AI is used,right? Because there's also
gonna be practitioners who arenot considering , um, things

(21:17):
based on implicit biases thatthey may have. Um, so is it
even possible to ensure thatthe AI is gonna be reliable?
And I , I take your, your, your, um, Robitussin question,
right? Mm-hmm .
Um , if I've gotten verycomfortable googling my
symptoms and expecting theinternet to tell me what's
wrong is , is there really muchdifference in the provider

(21:40):
doing the same thing? You know?
Yes. Maybe I've created , maybeI've created this spiral of
conspiracies within my headthat, you know, that, that I
need you to talk me down from,

Speaker 4 (21:50):
Right? So I do think that, so I'll go, like, doctors
hate it when us as hu likenon-physicians come in and say,
well, I read on WebMD, Igoogled all my symptoms, and
I've read on WebMD that my dadis like, absolutely. When I
start off that conversationwith that, he is like, stop

(22:12):
reading , stop reading.
That's my job. Right? But , andso, so um, because again, I
think what has evolved, andthis is society in general, I
can take it from healthcare tojust personal relationships.
We've relied on the use oftechnology and not human,

(22:37):
again, the mind, the touch, theseeing a person, you know,
we're relying on, on , um, youknow, so , so many things . But
like if, for example, in thissituation, what if we were
using AI and that AIautomatically updated, just
like our iPhones automaticallyupdated, and that update was,

(22:59):
had a bug in it, right? Andthat bug that goes down that,
well , let's go down this path,you know, and that bug
may , you know, makes somediagnostic accuracy worse
instead of better than who isresponsible in that situation.
So how do we ensure thatreliability? I mean, I think

(23:24):
what, what's what , where I'mgetting at is that we're
relying too much on technologyand ai, and we're not relying
on our human capabilities andwhat we've been trained to do,
whether we're a lawyer, aphysician, or whatever, we've
been trained to think, youknow, as lawyers, like if we

(23:44):
don't put the input, thequestion right in chat, GBT,
, you know, I'm gonnalook at that answer going, what
the heck? You know, you know?
Mm-hmm . Um, this is not theright answer, you know, and
stuff. So , um, I think that'sthe, the , again , like you

(24:07):
said, human decision making hasto be first. And then I think
the use of AI is second. And Ithink we're just not there. And
I don't know if we'll ever bethere, right? Mm-hmm
. And I knowthere's gonna be countries, I
think Australia, I know India'sdoing it where kids can't use
AI to, it's, it's banned rightoff of phones and things like

(24:31):
that. Because again, peoplearen't using their minds to
think Yeah , you know, it's,it's coding, it's doing yes, no
zero ones and stuff like that.
You're not really trained onhumanity , as I saw , um, this
has nothing to do with thisarticle, but kind of does , um,

(24:52):
the Nvidia CEO , he said, whodo I hire? I hire people
trained in sociology,psychology , um, history, all
of those social skill socialareas versus person that can
code, because I can tell AI howto code me. Yeah. Right? So you
need the thinker. And I thinkthat's where I got from the

(25:14):
party that I was at when thedoctor, she works for Mayo and
she's not a clinical physician,but she's doing a ton of AI
work for, for Mayo. And theseare questions they're asking,
they're thinking about it.
They're actually doctors outthere thinking like, how can
we, we wanna use it, we thinkit can be accurate, but how do

(25:39):
we use it ethically, morally,intellectually, right? Mm-hmm
. Way to use it,right? And I think there's
people thinking about this, andthat's why this kind of came up
for me. Um, and is there,there's no, again, the
standards around it and are solike, just like we have

(26:01):
compliance, right? There has tobe that compliance officer that
like at a hospital, let's justsay that is continuously
monitoring a I use and that'stheir only job, maybe because
it's so prevalent, you know?
And are they using the , are weusing the right testing models?

(26:25):
Are we, you know, using , um,is the , are things getting
updated? Are they auditing inreal time AI u you know, how
are AI is being used and thetools that are being used? Um,
is there gonna be acertification process in this,
right? Like, is there gonna bea certification pro ? There has

(26:45):
to be. Yeah. I think that ifyou don't know how to use it,
like that 60-year-old doctorthat doesn't know how to use
it, don't let him use it,right?

Speaker 3 (26:55):
Yeah, for sure. Unless...

Speaker 4 (26:57):
He's trained and gets the certificate on it.
Yeah.

Speaker 3 (27:00):
Yeah, I think that's right. Um , right . What would,
so what would you say are thekey takeaways of this topic for
what we're gonna expect in thisnew year? 2025?

Speaker 4 (27:16):
2025? I think , um, well, , um, we're , I
think to gain efficiency andlet's go down the efficiency
path for a minute, ,um, to gain efficiency, I don't
think AI is gonna be used .
There is some efficiency for,let's use it for the things

(27:39):
that we can make thingsefficient. Is it gonna be
efficient for scribing? Is itgonna be efficient for the
administrative tasks? Sure. Andlet's use it for that. I think
you're gonna see a lot more ofthat in 2025 is for physicians
to make their lives a littlebit easier. Hopefully, maybe
it'll make it harder. Theythought the EHR was gonna make

(28:02):
it easier, but it's not really,they have more burnout of
physicians, right? Using EHR .
So are physicians gonna bereplaced? I highly doubt it,
right? Or clinicians ingeneral, nurses, doctors,
whatever. Are we gonna bereplaced? I think there's
certain areas that we can makeefficient. Are lawyers gonna be

(28:23):
replaced? I highly doubt it.
You know what , maybe what'sgonna be replaced is a legal
assistant, possibly, right? Sothat, that one layer of
administration, I think isgoing to , we're gonna see a
lot more efficiency and we'regonna see high , we're gonna
see people that are trained asdocs trained at as nurses or pa

(28:48):
those are going to rise. Thegood trained ones are gonna
rise up, but we're gonna seesome of these administrative
tasks get a lot easier, Ithink, and faster in 2025. I
think we have a long way to gobefore a doctor is replaced. I
really do.

Speaker 3 (29:08):
Yeah. I think to take it back to my robot movie
analysis from earlier, youknow, since we've all been
watching those robot movies, wewant to avoid them taking over
the world. So we're alwaysgonna have that stop, that stop
gap at the, at the end of thestory. There's gotta be a
human, he can turn everythingoff.

Speaker 4 (29:25):
Yeah. I don't know .
I'm watching Paradise and rightnow,

Speaker 3 (29:30):
Yeah, I'm , I'm literally watching the same
show, so

Speaker 4 (29:33):
I'm like, oh my God, this can happen, you know, and
stuff and there's gonna bereplacement of like the whole
cities and things like that.
But I don't, you know, I reallythink it's just like any other
tool. I think there's going to, I think ai, what I predict is
gonna happen , um, is that AIis going to be a sole

(29:58):
diagnostic tool. There's gonnabe some regulations surrounding
AI use and how we're gonna useit. And there's gonna be,
medical boards gonna have tocatch up everywhere. And every
medical board , uh, is going tohave to put in ethical
standards. It's gonna be haveto put in unprofessional
conduct standards. It's gonnahave to be, it's just gonna be

(30:18):
like that lawyer who ever , Ithink it was in New York where
submitted a brief and they usedbad case law because there was
hallucinations in AI in usingthe case law. And like , who
doesn't check their case lawbefore submitting a brief,
right? I mean, I thought thatwas just silly. They just
submitted it. So I thinkthere's going to be some
additional work for, to doublecheck , but I think there is

(30:44):
gonna also be someefficiencies. It's just like in
anything else, right? That wedo. So , um, it's going to be,
it might make our make a , a ,a clinician's life a little bit
easier. It might make apharmacist's life a little bit
easier. Pharmacy is where Ithink it's gonna be like very,
very important. But again,clinical trials, they're

(31:06):
getting faster and faster,right? For new drugs. I mean,
we didn't even go there andtalk about that, but clinical
trials and I use , they'reusing a smaller population,
right? Mm-hmm . Sois there gonna be a bias in
clinic , in drug drugs, indrugs , interactions? So I
think this is going to besomething that is really, we're

(31:28):
gonna see a lot more litigationin it too . Mm-hmm
. And use . And Ijust don't think plaintiff's
attorneys, there's some thathave already figured it out and
I don't have case law in frontof me, but I think there's
gonna be a lot more litigationin this as well to get those
rules solidified in writing andregulations in writing on how

(31:52):
things have to operate with ai.
And the only way that's doneand policies are changed is if
somebody has to pay a lot ofmoney because they solely rely
on that. Right. Right .
And there's gonna be litigationsurrounding it.

Speaker 3 (32:06):
Yeah . Well, thanks so much, angel . I really had
fun talking to you about thisand going down our own death
spiral

Speaker 3 (32:14):
I just wanna encourage our listeners to go
back and read the top 10 issuesin health law article that is
out on the A HLA website. Andjust as a reminder, we were
talking today about numbereight, who do you sue? Medical
malpractice and the age of ai.

Speaker 4 (32:29):
All right . Thanks Shaylyn .

Speaker 2 (32:35):
Thank you for listening. If you enjoyed this
episode, be sure to tosubscribe to ALA's speaking of
health law, wherever you getyour podcasts. To learn more
about a HLA and the educationalresources available to the
health law community, visitAmerican health law.org.