
April 22, 2025 · 50 mins

Artificial intelligence (AI) adoption is skyrocketing across the health care industry and innovation is booming, but regulation is struggling to keep pace. Dennis Breen, Managing Partner, Source 1 Healthcare Solutions, speaks with Alicia Shickle, President and CEO, ProCode Compliance Solutions, David Bolden, Senior Health Care Compliance Consultant, and Lyman Sornberger, CEO, Lyman Healthcare Solutions, about how the health care industry can remain compliant in this rapidly evolving environment. They discuss the current challenges and opportunities that AI poses, using AI to navigate compliance, practical tips for the health care industry, and what the future holds. Sponsored by Source 1 Healthcare Solutions.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):


Speaker 2 (00:04):
Support for AHLA comes from Source 1 Healthcare Solutions, which combines compliance expertise, advanced technology, and compassionate service to help healthcare providers minimize risks, optimize revenue integrity, and ensure outstanding outcomes. Trusted by organizations nationwide and internationally, they deliver immediate remediation and sustainable improvement,

(00:27):
empowering their clients for lasting success. For more information, visit sourceonehc.com.

Speaker 3 (00:40):
Hi, this is Dennis Breen, and, uh, we have our group here. Uh, we're actually talking about artificial intelligence, AI, and what we've done is we've compared it to the wild, wild West. What's really gonna happen with AI? Who knows? It's a frontier that's unknown, and we're going to rip it apart and discuss it from a compliance perspective.

(01:02):
I have three incredibly experienced and really, really wonderful guests who have a lot of insight into this topic. Uh, the first person we have is Alicia Shickle from ProCode Compliance Solutions. She's been a, uh, longtime colleague and friend of mine. She's been involved in many aspects of compliance, from expert witness testimony to investigating

(01:26):
compliance issues, to setting up compliance programs. As a coder and a practice manager, she has a lot of real-world experience. We also have David Bolden, who, uh, comes to Source 1, uh, as a contractor. He, too, has many, many years of, uh, exposure to compliance and coding in the healthcare world. And, uh, introducing AI into this

(01:50):
world has really sparked a lot of thoughts and, uh, ideas in David's head from his many years of experience. And then our third guest is Lyman Sornberger from Lyman Healthcare Solutions. He's been, uh, a foundation board member of many healthcare organizations. He's been involved with top-notch,

(02:11):
top-level healthcare organizations throughout the country, and has often been a consultant when, uh, new types of programs and apps are coming into play, on how they fit into the revenue cycle world. So, thanks to all three of you for, uh, joining me this morning. I'm excited to hear your, uh, your thoughts and your comments on

(02:35):
artificial intelligence in healthcare, and truly how it is like the wild, wild West. And that brings me to our first question, um, that I wanted at least for us to ponder. Um, with it being the wild, wild West, Alicia, what's your perception of where AI and the adoption of AI tools,

(02:56):
uh, for healthcare and by healthcare professionals, stands? Where do you think, uh, we need to address the risks from a compliance standpoint?

Speaker 4 (03:04):
Yeah, I mean, I think, you know, uh, the Wild West analogy really highlights the urgency, both the urgency and the lack of regulation, right? Which is absolutely critical. Um, but I think it also, um, you know, risks oversimplifying the complexities of AI. Um, and

(03:25):
that in turn, I think, is making some people hesitant to, uh, innovate, while others are kind of barreling forward recklessly, right? So I think the real challenge is gonna be striking a balance between caution and ambition here. So, David, what, what do you think?

Speaker 5 (03:46):
I actually think that the analogy creates kind of an awareness that this is definitely, like, a free-for-all kind of situation with AI, but there is also this kind of, like, unknown, right? The genesis point before that new frontier, um, that is waiting to be explored. So I think, um, you know, it's definitely a point to take on that this, uh, this AI venture should be navigated responsibly. Um, and

(04:09):
then kind of connecting those things to the past, what we've done in the past with regard to EHRs and, and medical records and documentation standards and coding standards, and kind of moving forward into this new world of, of AI and how it can help enhance quality of care for members. So, um, Lyman, what do you think?

Speaker 6 (04:31):
Well, it's definitely a wake-up call for the industry, and rightfully so, because it's, uh, it is a dramatic, uh, change, and anything, um, of this magnitude is difficult for, uh, the industry to, uh, perceive and adopt. But, you know, I think we as an industry

(04:54):
really need to look at it, um, a little differently and not perceive it as necessarily a threat. Without question, it's more complex than what we originally had thought. Um, and I think it will continue to evolve to something where the industry, um, will be sensitive to it.

(05:14):
But I think that we're on the right track. And part of that is, uh, is a byproduct of these education sessions with our colleagues and our peers.

Speaker 4 (05:27):
Absolutely.

Speaker 3 (05:28):
Mm-hmm. That's actually a great point. Um, I know with, with AI, um, oftentimes, even when we all use it, um, it spits out an answer, and it's something that is unique. It's all put together very well, but what it misses, what it lacks, is that human component,

(05:49):
that, that gut feeling. So, Alicia, I see a lot of physicians using, um, AI in clinical decisions, and that, to me, brings up some ethical issues, among other items. You know, what, what happens when the AI tool disagrees? Or the provider has a hunch that maybe there's something missing, um,

(06:11):
and really has an issue with what the AI is spitting out to them, from a clinical judgment perspective. I had a client that, that uses AI; it was the premise of AI in healthcare, and it would spit out a care plan based on what that, that provider was putting in. And several times that provider did not agree with what the AI

(06:31):
treatment plan was, and it caused a lot of problems, 'cause they had a hard time going against the AI and had to document, um, why they were disagreeing with it. What, what is your take on, on ethics and, and the ability to override what the AI is spitting out, um, from a provider perspective?

Speaker 4 (06:52):
Legally, the clinicians are gonna be responsible for the decisions. Um, but I think we need some clear framework, like, outlining AI's role, right? Again, we've been talking about it: it's a tool. Um, if the AI conflicts with the provider or human judgment, you know, I think we have to make sure that we're building

(07:12):
in, like, some accountability protocols, um, that are gonna address both perspectives in a transparent way, right? I think overall, again, you know, proactively, we talked about the importance of proactive education and implementation, having those guardrails. But I think when it

(07:34):
comes down to the bottom line, like, the docs are gonna have to take the responsibility to make sure that they're reviewing, um, you know, like we talked a little bit about earlier, garbage in, garbage out, right? So, you know, I think it's gonna be helpful. The docs are gonna have to take it seriously, to, um, you know,

(07:54):
review the output. And, you know, if they don't agree with it, they have to have the ability, um, to make those changes based on clinical decision making. I don't think, you know, we're ever gonna get away from that, no matter how good the AI gets. You know, and again, like, to Lyman's point, it's a very positive thing; I think it should be and

(08:15):
can be. Um, but overall, the docs are gonna have to take the responsibility to, one, change the information if they don't agree with it. And then I think from a compliance perspective, we've also gotta be diving in, auditing, monitoring, and holding folks accountable, the ones that are not catching these things that are, you know, getting out there in the

(08:36):
industry without, without the human element. So, David?
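To make the accountability protocols Alicia describes concrete, here is a minimal sketch of what an override audit trail might capture when a clinician disagrees with an AI care-plan suggestion. Everything here, the record fields and the rationale requirement, is an assumption for illustration; the panel doesn't reference any particular system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical audit-trail entry: who overrode the AI, what it
# suggested, what the clinician decided, and the documented rationale.
@dataclass
class OverrideRecord:
    clinician_id: str
    ai_suggestion: str
    clinician_decision: str
    rationale: str  # free-text clinical justification
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

audit_log: list = []

def record_override(clinician_id, ai_suggestion, decision, rationale):
    # Enforce the documentation duty discussed above: no silent overrides.
    if not rationale.strip():
        raise ValueError("An override must be documented with a rationale.")
    entry = OverrideRecord(clinician_id, ai_suggestion, decision, rationale)
    audit_log.append(entry)
    return entry

record_override("dr-123", "start medication X", "order imaging first",
                "History suggests a contraindication; confirm before therapy.")
```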

Speaker 5 (08:44):
Yeah. No, I agree. And, and kind of to piggyback off what you were saying, I think that it's very important for clinicians to understand that they need to explain those things within the documentation, and then also during, you know, reviews, right? So, coming from an audit kind of world and background, you know, I've been through plenty of audits and reviews where the clinicians, you know, during an exit interview,

(09:07):
need to kind of explain what they're doing in that documentation, because it's not clear to the person who's looking at it, not all the time. And depending on the expertise level of the auditor, they might not be able to infer what the doctor or the physician was, was doing during that visit. So it's very important for the clinicians to be able to explain AI, because AI could be just another version of a cloned note. And we see that all

(09:29):
the time, where cloning happens over and over and over again. And so, you know, AI is kind of, like, leading, you know, kind of leading that interpretation of what's going on within the, the visit. Um, you know, unlike software like Dragon, right? Dragon is a dictating kind of software where it's word for word; AI could be something where it's

(09:51):
in the room with you, and it's kind of listening and, and, um, you know, reporting on what it thinks is happening during that visit. And in that case, how is a doctor gonna explain, you know, what, what that actual visit was for? Lyman?

Speaker 6 (10:07):
Like everything, it's all about communication and education, and that's at all levels of the organization, and all positions that have any association with that, um, patient's delivery of care. Uh, I can't stress enough that the clinicians and the coders and the staff need to be empowered to challenge the AI and feel comfortable

(10:31):
questioning it, and to learn from it. Probably the key message, I think, operationally, is that it needs to be embraced as a resource; the message is not necessarily that it's an end-all solution. Um, if they can look at it as a tool and not look at it as a mandate, I think the industry will find it more comfortable to adopt. That's my personal opinion.

Speaker 3 (11:04):
Um, we all know that AI is based on historical information; that's exactly what's out there. So there is, um, a consideration that we need to, uh, take into account, or, um, to work through: the bias and the fairness of the tool itself, what information it's based on. I know that, um, in what was

(11:28):
released by OCR, there was some, um, information published that basically said, uh, if you're using it in clinical judgment or using it in treatment of the patient, it's up to the providers, really, to assess, uh, if there is any bias in the tool that they're using. They pushed it back to the provider to determine if the AI that

(11:52):
they're using has any potential for that bias or unfairness. So, so that also brings that equality question, uh, into, uh, into play here, but it's also pushing the burden onto each individual provider, as, as opposed to the government itself going after the developer or holding the

(12:12):
developer accountable. Alicia, where do you see that bias? Uh, or what are your comments or thoughts on where bias comes into the AI, especially when it's used for, uh, clinical decision making?

Speaker 4 (12:25):
Yeah, I mean, uh, look, I, I think, you know, prevention is key here, right? And I think, you know, we're gonna need to be really diligent; ethical audits, you know, going forward, I think, are gonna be key. I think, you know, we need to review the AI systems for bias, um, before they're rolled out, and then

(12:46):
continuously monitor and audit them going forward, right? Um, I also think that regulators are gonna need to provide some guidelines, so that we have some guardrails out there that are specifically addressing the biases, um, in healthcare AI. So, yeah, I mean, it's not just, like, a set-it-and-forget-it kind

(13:06):
of thing, right? I think we need to proactively look at that before the tools are deployed and then continuously monitor it after, um, as well. So, David?
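One way to picture the pre-deployment bias review Alicia calls for is a demographic parity check: compare how often the tool makes a positive recommendation across patient groups. This is a minimal sketch; the record shape, group labels, and the 0.1 threshold are all assumptions for illustration, not anything the panel specifies.

```python
from collections import defaultdict

# Each record: (patient_group, ai_recommended: bool), e.g. drawn
# from a hypothetical output log of an AI triage tool.
def demographic_parity_gap(records):
    """Largest difference in positive-recommendation rates between groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, recommended in records:
        totals[group] += 1
        positives[group] += int(recommended)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

records = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]
gap, rates = demographic_parity_gap(records)
if gap > 0.1:  # threshold is a policy choice, assumed here
    print(f"Flag for review: parity gap {gap:.2f}, rates {rates}")
```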

Speaker 5 (13:18):
Yeah, I think you are right on the money. And I also think that the, the human factor and that training that you were talking about, those things are non-negotiable, right? With AI, in the world that we are kind of developing and creating right now, um, to your point, just putting these things out and just letting them be is, is, is not the way to go, especially with compliance. Um,

(13:39):
you know, how is AI gonna know when there are, you know, updated medical policies, um, for medical necessity and coding? You know, how is it going to know, um, you know, how to decipher illegible handwriting from a physician, um, to make sure that whatever service is being provided was actually medically appropriate for that particular visit? Like, how, how is it going to decipher those things? And

(14:01):
really, that comes down to the bias, right? So how are we, as, as users and as people who are reviewing AI, going to kind of, you know, um, understand and kind of explain that in a way where, you know, we're not putting patients in harm's way, right? Because that's number one: patient, patient care is, is where this is all kind of leading to. So, uh, Lyman,

(14:24):
what are your thoughts?

Speaker 6 (14:26):
Yeah, I, I actually think this conversation reminds me of a sidebar that I had with a physician a couple of weeks ago. We were talking about the use of AI, and ironically, he was welcoming, um, the tool. And part of me in that discussion was trying to feel him out as to why he was so

(14:50):
receptive to it. And I made the comment to him, um, what do you think of the error rate that might exist around that, between the clinical documentation or maybe even the coding area? And his first response to me was, uh, do you realize the error rate that exists today with the

(15:10):
human factor? And what he stressed to me all boils down to what Alicia said earlier, and that is, uh, making sure that you have a quality program in place, particularly one that is strongly enforced if you're an early adopter, mm-hmm, um, while the tool is being matured. So, um, you know, after I talked

(15:35):
to him, I stepped back and said, you know, that's an interesting concept. There is a, uh, a certain percentage of human error that exists today.

Speaker 3 (15:47):
Good point. That's a great point, Alicia and Lyman, because we know that, uh, the AI tool is based on, uh, historical data that's put in there. And we all know that, um, in the early stages of, uh, healthcare, even in this country, oftentimes people

(16:07):
who were seeking treatment wouldn't go and get treatment right away, for whatever reason. They don't have the time. If they were a farmer and they're out in the field, they would wait, or, or someone who lives in the inner city, they would wait till the disease gets to a certain point where they can't handle it anymore. So a lot of that historical treatment or prevention is not there. So there is a certain amount

(16:29):
of bias just inherently in where the data's coming from, because history oftentimes does repeat itself. So that's where I was coming from on bias: if we're using information that already has some, some touch of bias or inequity in it, we're going to really have, um, some problems going forward. So the data now is based on old information, and

(16:52):
maybe in another hundred years, the information that we're providing and feeding into it will make that AI stronger. So I think there is more, more to come in that way. And I also think that from a regulation standpoint, keeping that unbiased, or that requirement by law that everyone is treated equally, is very difficult unless you truly know the source of the information that's feeding that

(17:14):
AI tool. So I think it is really important for all of us to understand where it's coming from, not just understand how to use it or where it's gonna take us. And that actually brings us to our next topic. Alicia, we were talking about using AI, uh, to, to navigate through compliance. Um, where do you think AI can, um, can cause risk, with

(17:38):
either violating, um, regulations, or when we're using AI to try to enforce regulations? What's your take on, um, navigating compliance with AI?

Speaker 4 (17:50):
Um, I think governance is gonna need to set the standards for the tools, uh, that we're gonna be using. Um, I think AI is gonna have to operate within clearly defined parameters, and our organizations are gonna have to verify compliance on an ongoing basis, right? To some degree,

(18:13):
we're talking about the Wild West right now, but it's like, you know, you can't set it and forget it. And, you know, all of us as compliance folks, in, in the compliance industry, we know that we wanna trust, but we've gotta verify, right? So while I think that we are super excited about the capability, um, and how AI can

(18:36):
help us all to be more productive and to do more, uh, get more done in a shorter amount of time, um, I think we really just gotta make sure that we have those guardrails, that we're doing our, our ongoing risk assessments. Uh, we talk about, you know, the likelihood and probability, uh, of an event occurring, and then the

(18:57):
severity and impact, right? So I think, um, yeah, I mean, it's just really critical. I think that we've gotta do the risk assessments, get this on our audit and monitoring plan, uh, for continuous, uh, oversight going forward.
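The likelihood-and-severity framing Alicia uses is the classic risk-matrix calculation; here is a minimal sketch of how a compliance team might score an AI-related risk. The scales and triage cut-offs are illustrative assumptions, not anything the panel prescribes.

```python
# Classic risk matrix: score = likelihood x severity, both on a 1-5 scale.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3,
              "likely": 4, "almost certain": 5}
SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3,
            "major": 4, "critical": 5}

def risk_score(likelihood: str, severity: str) -> int:
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

def triage(score: int) -> str:
    # Cut-offs are a policy choice; each program sets its own.
    if score >= 15:
        return "audit now and add to continuous monitoring"
    if score >= 8:
        return "add to the annual audit work plan"
    return "document and re-assess at the next review"

# e.g. an AI coding assistant that sometimes suggests a higher E/M level
print(triage(risk_score("possible", "major")))  # 12 -> annual work plan
```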

Speaker 3 (19:11):
That's a great point. I did wanna say, we had, uh... I'm sorry, Alicia, I didn't mean to interrupt you.

Speaker 4 (19:15):
No, no, no.

Speaker 3 (19:16):
We've already started, at a lot of our, our, uh, clients, uh, because we do some outsourced compliance management; we assist with setting up compliance programs and testing, mm-hmm. And one of the areas that we're really trying to push, even in the smallest of organizations, maybe not a, a small physician practice, but a small facility type, is to really put together a group that we're calling, um, uh, AI

(19:39):
standardization or AI committees, um, you know, innovation committees, or different names for it. And it really is pulling in pieces of all areas, not just clinical, not just compliance, but billing, revenue cycle, coding, quality, et cetera. David, what's your take on where, um, on where AI is gonna be when you're navigating through, um, an organization,

(20:00):
and also, uh, what's to come as far as navigation?

Speaker 5 (20:04):
Yeah, no, and, you know, you bring up a great point, Dennis, and I think it just, again, ties back to that human factor, right? You're creating these, these environments and these controls where people are involved in the navigation of AI and how it's gonna permeate the, the healthcare system. Um, you know, because right now, you know, AI can kind of perpetuate some of that fraud, waste, and

(20:25):
abuse that we see in the, in the, in the system right now. It could, you know, uh, misclassify services, so there's fraudulent billing there. Um, you know, it could kind of, like, encourage those medically unnecessary services, if the clinicians aren't appropriately looking at the documentation and making sure that it's suggesting the right recommendation, suggesting the right treatment. So that's waste right there, right? So we're seeing, seeing

(20:47):
all these things that, you know, the AI could potentially perpetuate if we don't have those human connections involved with the AI. If we're just letting it roam free, you know, like the wild, wild West, you know, all kinds of things can happen, right?

Speaker 3 (21:04):
Yeah, I...

Speaker 6 (21:05):
I think ultimately, again, it starts, uh, as we've all, uh, mentioned, with education and training and clear communication. I mean, everyone in the healthcare arena, at all levels, needs to understand its value, but at the same time respect its risk with compliance. I mean, uh, you know, at the end of the day, you know, I'll go back to

(21:29):
my Cleveland Clinic years and our motto, which was patients first. You know, the messaging clearly needs to, uh, stress that operationally we want to embrace this to enhance patient care. And that's what we're all here to, um, you know, ultimately strive, uh, to do in the industry, the healthcare

(21:52):
space. And I think that should be a clear message, um, as we try to, uh, educate and communicate, uh, the change.

Speaker 4 (22:01):
You know, I just wanna kind of echo off of that for a second. I mean, it's interesting, right? Like, we are seeing this AI, um, is impacting all areas of compliance, right? Uh, legal, financial, and operational. So I think we're gonna see it across all lines. And again,

(22:22):
you know, like we're saying, everyone in the organization... you know, we say, who is responsible for compliance in the organization, right? Everybody. And I think it's gonna be important, Lyman, like you're saying: organizations are gonna need to make sure that everyone in the organization is getting appropriate training and education on how AI is gonna

(22:44):
impact their role. What does it mean? I think it's gonna be important to see role playing going on. Um, you know, how, how does it impact me in my day-to-day, uh, job, right? I think everybody needs to understand, oh, it's not just a doctor thing or a, you know, biller thing. Like, it impacts the front desk as well, right? Sure. So everybody's gonna have to understand their role.

Speaker 3 (23:08):
Um, I was actually talking and thinking, when we were, you know, talking about the wild, wild West and all this: when those frontiersmen came to the country, they didn't have GPS. They didn't have, you know, uh, AAA that would print TripTiks and have the highlighted route through the map that we were all used to before GPS. Now my kids know exactly where I am at all

(23:30):
times. I pull out of my, my garage, and if I didn't tell my younger daughter that I was leaving the house, I'm getting all kinds of messages telling me, why are you going... why are you, oh, wait, I see that you're at Target. Can you pick me up A, B, and C? Back then, in the wild West, you know, they would hop on their horse and just head into the wilderness. I know, with my directional impairment, I would probably be eaten alive by

(23:51):
whatever wild animal we would encounter. But we keep saying that it's an unknown, and it's always gonna be an unknown. There's always gonna be development; there's always gonna be training that's necessary. So, with, um, the busy lives that we have, and the AI tools that are out there to make it better, we grab them, we use them, but we may not understand, um, what we're using or how to use it

(24:14):
properly. So if I were to ask each of you, what tips would you give someone who's just starting to use AI in their role in healthcare, whether from a clinical judgment perspective, from compliance, from billing? What are some tips that you'd want to give individuals who are just realizing that AI is out there and it's not going away? Everyone has the feeling like, oh, it's gonna replace me, but

(24:36):
it's not. So where can you give some information to people so they can feel better about their jobs? And what can they do, um, instead of fearing that AI will eliminate them, where can they use it to enhance their position? What kind of, what kind of guidance can we give them?

Speaker 4 (24:52):
Yeah, I mean, uh, just from my perspective, I think, you know, it's definitely coming. I think, you know, anyone who thinks that they're not gonna have to embrace it is gonna get left behind, right? And we know, like, the only constant thing in healthcare right now is change. Um, I think all of us in the

(25:14):
industry also recognize that we're regulated second only to nuclear power plants. Um, so I think prevention, uh, proactive education, is gonna be key. I think we, we should see it as a positive thing. We should embrace it, but we should also be mindful that it's going to come with some significant compliance risks.

(25:36):
And I think, just, you know, getting back to basics, you know, with everything, right? You've gotta, yeah, put those controls in place, and then you've gotta go back and you've gotta do your ongoing risk assessments. I think we need to continuously do our auditing and monitoring to make sure that the controls are effective, that they're working. I think, you know,

(26:01):
people really just need to understand that it's not gonna replace us, right? But it is absolutely a fantastic tool that can make us all more efficient, uh, and effective. And I think it really, overall, can, uh, ultimately enhance patient care, um, you know, from the big picture. Uh, but I just think, you know, just be proactive. Put the education

(26:24):
and training in place; continue to monitor it for, you know, risks and effectiveness. And yeah, I mean, I think overall it's gonna be a good thing, as long as we, uh, have parameters, accountability, have good controls around it.

Speaker 3 (26:41):
I wanted to at least, uh, echo what Alicia was saying there with training. We all went through our computer training, and the first thing we learned is garbage in, garbage out. And I really think that's something that we all need to consider when we are doing this, um, in using any of the tools, uh, that are based on AI-type formatting or methodology. What's your take on, on... just, you know, how

(27:05):
would you tell someone, if they're just starting to use AI, what would you say to them?

Speaker 5 (27:08):
You know, I, I like that analogy of, of garbage in, garbage out, because, you know, with AI, you're looking at it from the standpoint of, it's supposed to be enhancing patient care. I think Lyman brought that up before, and that quality of care that we're giving to the patients. Um, and so we wanna have that same quality when we're developing, you know, AI algorithms and, and the things that AI is gonna be

(27:29):
doing for us in the healthcare system. Um, and I think skipping that training, like, to everyone's point here, you know, right now... when you skip that training, when we don't have that education, that's when the problems start. That's when the problems arise: when that initial education and training isn't done. And I think, to Alicia's point before, everybody is, you know,

(27:49):
responsible for compliance. Everyone is responsible for making sure that AI is working appropriately and that things aren't going off the rails, and we're not, you know, um, you know, suggesting medical services that aren't necessary to patients and putting them in harm's way. We're not, um, you know, overutilizing, uh, you know, taxpayer dollars in order to make sure that, you know,

(28:10):
businesses are staying afloat, things like that. Like, these are things that, in the compliance world, are scary for us, because we're seeing this all the time, and, and it's something that we need to take account of. But, you know, it is a great resource. It is a great tool. So how can we balance those things out, Lyman?

Speaker 6 (28:27):
Yeah. I, I, you know, I wanna step outside of the bricks and mortar a little bit and, and stress this: I also think it's important that we engage the patient in the flow of this change. And what I mean by that is, the worst thing you can do is erode the trust between the patient and

(28:47):
the physician. And they're gonna be curious as to what it might mean, um, with the introduction of artificial intelligence, whether it be to their documentation or their medical record. Um, and, you know, I personally think that the messaging should be around that, and that doctors and,

(29:10):
uh, staff should stress that it's not a solution, as Alicia, uh, mentioned, but more stress that it's an extra set of eyes, and it's not a machine doing the work. It's giving you a, a separate perspective

(29:31):
and a separate set of eyes and ears. I think that messaging, in layman's terms, um, will help the patient better understand what exactly the introduction of AI is doing with their care.

Speaker 4 (29:45):
That's a good point. Hey, Lyman, I think, I think you make a great point, too. Like, not only do we need to educate, like, providers and staff and our legal teams, but patients, right? We're all educated shoppers right now, right? I wanna know what's going in my medical record. Um, I know, Dennis, we talked a little earlier, too, about, you know, some of these prac, you

(30:07):
know, practices that we work with; we're like, oh, wow, they're, they're having issues. And then we do a root cause analysis, and we find out that somebody set an override in the practice management system or the EMR, right? And so automatically, when, when the provider picks a certain, you know, procedure code, hey, it automatically dumps in this, uh, diagnosis code, right? From a patient

(30:31):
perspective, I don't wanna be getting assigned diagnoses for conditions I don't have, right? So, again, you know, it kind of broadens the scope of education. You know, I think patients need to be educated as well.

Speaker 3 (30:46):
That's a great point. When using any, any of these AI tools, it's really critical to understand and know how you're using it. Um, I'm sure doctors now, all the time, in their practices, they wish the internet was never invented, 'cause now we're all diagnosing ourselves. Why do we even need a doctor, if we have an AI that's gonna tell me what, what, uh, drug I should take, or what exercises I should do, or

(31:09):
what's wrong with me? I should probably look up how to do an appendectomy and just remove my own appendix to save me a couple thousand dollars, uh... You know, that AI, uh, it's only as good as how you know how to use it. Uh, yeah, if it's used in the wrong way, it could really, really go down a very dangerous road. Uh, so knowing exactly what it is, how it's supposed to be used by the experts who

(31:32):
either developed it or have put the time into developing the training, it's very critical for each one of us to pay attention to that. Because otherwise, we're all gonna be physicians. There's no need, you know, at, at this point; we'll just use AI to, uh, you know, prescribe some medication or tests, and, and we'll do it all ourselves. Uh, it's a very dangerous assumption, but so many people believe that with

(31:56):
AI, that it's, that it's absolutely correct, there's no bias, or there's no, uh, error rate that could come out of it. Uh, and it's true, the information in there is probably very accurate, or more accurate, but it's really what information you put in to get that information out. And that's what I meant by garbage in, garbage out. You really do need to, to know how to use it, um, before even thinking

(32:19):
about, uh, relying on the outcomes. So I guess, uh, really that brings us to, where is the future with AI? I mean, when they were worried, worried about the, uh, the wild, wild West, and we had the 13 colonies and they moved west, there was a vast country that was out there, and that opened up to where we are now, you

(32:41):
know, all 50 states. And it's an incredible world that, that we have. And it just took a few of us to go west and to start expanding in those areas. And that's what's happening now. AI is everywhere. Even just from a basic level: my grandmother, who's probably 97, 98, is asking me what ChatGPT is, and

(33:03):
she's spitting out information to me. She has no idea what's going on or what it means, but by gosh, she's reading it off to me. And it's, it's incredible; I mean, and she has no MD degree. Um, but where do you see, where is the future of AI? Where are we going? What's gonna happen for our jobs, from a compliance perspective, with navigating AI and basically

(33:24):
keeping a pulse on it? What's your take on the future, Alicia? I don't know, uh, where to even begin with that.

Speaker 4 (33:31):
Yeah, look, I, I, I think, like everything, right? Like, I think it's gonna be a fluid situation, right? I think, um, the regulations are likely going to adapt out of necessity, right? I don't, I don't think it's gonna be fast. I think we're gonna have some things happening pretty quickly coming out of the gate. And I think, you know, pressure from

(33:54):
stakeholders, providers, payers, patients, um, is gonna push regulatory bodies to evolve. Uh, but I think it's gonna be incremental. We're gonna see incremental changes; I think that's gonna be much more realistic than seeing sweeping reforms. Um, you know, and, like, look at the evolution of, like, HIPAA, right? We're, we're

(34:18):
seeing new changes happening right now in the cyber world, right? Because, you know, it's, uh, it's like everybody's really big on compliance and we're all up on it, and then we get a little lackadaisical, and then, like, something happens, right? And then we're like, oh, yeah, oh yeah, we have to, we have to, you know, get back to being diligent about those things.

(34:39):
Um, interesting: I had, uh, I had seen a, um, a keynote speaker from NASA one time who said the same thing happens in every industry, right? Like, look, you know, they're sending rockets out into space, and, you know, oh, things are going great. So they get a little lackadaisical in areas, until, like, an event occurs, right? So I guess for me, uh, from a

(35:02):
compliance perspective, you know, it's making sure that we're doing ongoing risk assessments; don't take our finger off the pulse; go back. I'm all about, you know, the auditing and monitoring. We, we can implement controls, but unless we're going back to check them and test 'em to make sure that they're actually effective, um... I think that is the most

(35:24):
tangible, real-world, practical thing that we're gonna have to do as the evolution is happening. I think we're just gonna have to keep, keep, keep our eye on the, on the, on the ball. Dave?

Speaker 5 (35:38):
Oh, yeah. I do think that, um, and to your point, I do think that, uh, you know, AI isn't going anywhere, right? It's been around already for a while. You know, we've been using it in our day-to-day activities, and physicians have been using it in offices in, uh, you know, dictation software. And now, you know, auditors and coders are using Codify and Audit Manager in order to, you know, make sure that their calculations are appropriate

(36:00):
for what the physician is documenting in the medical record. And I think, you know, to your point, and to everyone's point here on the call, it's making sure that human factor is there, um, and making sure that the education is there, to, to understand, like, okay, yes, I'm making these clicks, and boom, boom, boom, boom, boom, I'm at a level four E/M. Is it really a level four based

(36:22):
off of what's actually documented? Is it really a level four based off of what the doctor, you know, assessed the patient with, and also what treatment he recommended? So it's, it's that piece of it. I think what we're all kind of saying is that, you know, it's, it's not just letting it sit there. It's not just rolling out AI and saying, oh, it's the end-all, and that's all we can, you know, that that's it,

(36:44):
like we don't have to do anything else. Like, I think that the education, the risk assessments, the ongoing training, monitoring, like, those things are, are crucial to make sure that we're having that significant change with AI, versus AI just being another tool that we can use in order to, um, you know, make more money, right? So I, I think those are the big things to look at.
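David's level-four E/M question suggests one simple monitoring check: compare the levels an AI tool assigns against the levels an auditor validates from the documentation, and flag the visits where the tool coded higher. A minimal sketch, with data shapes assumed for illustration:

```python
# Flag visits where the AI-suggested E/M level exceeds the level an
# auditor validated from the actual documentation (possible upcoding).
def flag_upcoded_visits(ai_levels, audited_levels):
    """Both arguments: dict of visit_id -> E/M level (e.g. 2-5)."""
    return [visit for visit, ai_level in ai_levels.items()
            if ai_level > audited_levels.get(visit, ai_level)]

ai_levels = {"V1": 4, "V2": 3, "V3": 5}
audited_levels = {"V1": 4, "V2": 3, "V3": 3}  # documentation supports 3 for V3
print(flag_upcoded_visits(ai_levels, audited_levels))  # ['V3']
```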

Speaker 4 (37:07):
I also think, you know, a lot of us in the compliance world, we're like, hey, you know, we've gotta do our risk assessments annually. But then also be mindful: really, it's as, or more frequently than, necessary, right? I don't think this is gonna be an area that you're gonna be able to just, like, dump on your list of things for your risk assessment in your annual audit work plan.

(37:30):
I think any organization that's gonna adopt this into their, uh, into their workflows and their, and their organizations is gonna have to really be diligent in staying on top of it, and monitor on a daily basis, and then do their auditing, you know, regularly.

Speaker 6 (37:48):
You know, I guess, uh, in the spirit of the theme of this, uh, forum, I think what comes to mind is, um, when I try to look at AI in the future, I can't help but, um, make the quote, this isn't my first rodeo. And by

(38:11):
that I mean, uh, we could take a look back and have flashbacks, probably painfully so, of back when we had to adopt the EMR, and most recently, in the, uh, post-COVID, or, uh, during the COVID period, where we had to embrace telemedicine and telehealth, to the extent that we didn't know what it meant to the industry long term. And everyone was trying to,

(38:34):
you know, forecast, what does this mean to healthcare and to patient care overall? But all that said, looking at the past: we got there, we made it happen. Um, I think if it's done appropriately, and the messaging is strong, that says if AI is used, uh, appropriately, it's a valuable

(38:57):
tool. But along with anything that robust, and that much change, um, there's some sensitivity, and we need to respect the risk. And part of that is compliance.

Speaker 3 (39:09):
That's a great point. Uh, when you look at the wild, wild West, when those, uh, pioneers were there, there were some that said, no way, I'm not leaving the confines of my own little world here. And others saw it as an opportunity, something that's grand and majestic, and, uh, the, the world is at their fingertips, and they pushed ahead. And that's where we are now with, with innovation and

(39:32):
opportunity. If we don't take advantage of that, then, you know, we will be stagnant. On the opposite side of that, though, there are opportunities that may not have the integrity of what's behind the AI, and using it to go forward and to push healthcare into, into new realms for, for the good, for the benefit of patient care.

(39:54):
What we also find are those innovators and those opportunists that, that don't quite have that integrity or the ethics behind it, and they use it to line their own pockets. Hmm. And that's where our job from compliance comes in: to, uh, identify those bad actors, or keep, uh, uh, monitoring on them. I think if I've heard Alicia say it once, I

(40:14):
have heard her say it a million times: that whole trust but verify. Uh, it's absolutely true. We don't have to be the police force on everything that's coming in and, and mandate proof right here. We need to work with them and work collaboratively, and we trust them, but absolutely verify. We need to make sure that the controls are in place, that the auditing and monitoring are in place, so that when we do move

(40:36):
forward, when we do see these new opportunities, we, we seize the opportunity, we take it, we see where it's gonna help, but we also are mindful of where it, where it can go, um, go wrong. So where does that future lie at this point, uh, for AI? It's not going away; like you said, it's

(40:59):
constant. The only thing that is constant is change. Uh, and we're forging ahead. Uh, what are your last comments on, on the future of AI there?

Speaker 4 (41:10):
Yeah, I mean, you know, look, none of us has a magic eight ball, right? Uh, you know, or a crystal ball that's gonna tell us. But I think Lyman really kind of hit it on the head, which is why I giggled when he said, this isn't my first rodeo, right? Like, I think we have to look at things historically. Um, I

(41:31):
love, Lyman, that you said, like, look, man, we were all in the same boat when we started switching over to EMRs, right? There's gonna be good and there's gonna be bad in everything. Um, you know, we always used to say, like, the house is great and the yard is terrible, right? Like, when the EMRs started coming out to be a total solution, right? EMR and

(41:52):
practice management system. So I think, you know, using our, our many, many years of experience and staying ahead of the curve, which is what we all do, right? Because that's what we've gotta do, like, to be the best in the field. Um, I think, you know, yeah, I mean, I think we learn from history and we apply all of the things that have made us successful,

(42:15):
um, and we carry that forward with us. Like you said, it's not our first rodeo. You know, it is the wild West, but, you know, we've all wrestled a, a horse, you know, in the past. So, um, yeah, I think that was great, great advice.

Speaker 5 (42:29):
I think that it's, it's gonna be a, a, a challenge all around. Um, I think that you had a, a word that you said in your last, uh, kind of, like, explanation, which was collaboration, right? AI is a collaborative tool. That's what it's supposed to be. We're supposed to be using it as a tool to collaborate with physicians, with, you know, um, coders, with auditors, with the regulatory bodies.

(42:51):
Like, everybody is supposed to be collaborating in a way that ensures that we are providing quality care. And that is what it comes down to: quality care. So AI is a tool, like any other, that we need to use in order to make sure that we're providing that care that, that, that the patients expect and, and that we're supposed to be giving them. Um, you know, and,

(43:13):
and as compliance, as auditors, as coders, like, that's, that's our job: to make sure that we are, you know, um, really looking out for the, the, the patient, right? And the, the patient also has a responsibility as well. But, you know, that's our job as well, is to make sure we're looking out for the patient and making sure that, um, you know, things aren't, aren't the wild West.

Speaker 4 (43:41):
Lyman, give us your words of wisdom, dude.

Speaker 6 (43:47):
I actually think, uh, probably my ending thought would be, um, that we stress that this is not a tool to replace individuals. There's a human element and aspect to healthcare that cannot go away. And so no one should feel threatened by that element,

(44:08):
um, and should look at it from a positive standpoint. But I know that even with telemedicine, I heard people say, am I never gonna see my doctor again? Face time. There is, uh, an aspect to face time in real time, and we all did this when we were going through COVID and got closeted, or sat

(44:30):
on Zoom every day. Um, there's a human element to meetings, there's a human element to face time, and that isn't gonna go away. That intervention will continue, in spite of the fact that we have a new and sexy tool out there.

Speaker 3 (44:47):
Yep. That's a great point, 'cause I, I do think that all of these tools, and even our jobs and why we're, we're performing the functions that we are, it's to protect that. And I might be the only one, but I view that relationship between the provider, the doctor, and their patient as a sacred one. That information... you're putting your lives in the hands of this professional doctor, and we're,

(45:11):
we're relying on them to keep us healthy, to keep us alive, and in some cases keep us happy, uh, so that we can enjoy our life. And these tools, you know, before, when the pen and paper were involved, uh, it was a medical record, and it was to record the information from the patient to the doctor and the doctor to the patient. It's still to protect that sacred

(45:34):
bond between doctor and patient, and everything from the pen and paper, to the medical record, to the EMR, and now AI, it's all there to support that sacred relationship, so that the physician can concentrate on treating the patient, and the patient can take that information, get healthy, be happy, support others in this world, and so on. And I think AI just fits in there

(45:57):
to support that sacred relationship. And I think if we keep that as our, our motto and our mantra in all that we do, I think it will be used for the good, and it will be used correctly. Um, in addition to that, part of what I train others in, in healthcare consulting and fraud

(46:18):
and abuse, is, um, follow the money. If there's a cost, then they're the ones who should be getting the money to pay and cover that cost. Medicare wants to pay for the care of treating their patients, and their patients only. And that's where, um, we sometimes lose, uh, the ability to, to trust and

(46:39):
verify: who's spending the money and who's getting the money gets very clouded through these, uh, these times. And AI doesn't always make that clear. Sometimes it does bury who's paying for what, and who's getting the money and the reward. Uh, so it takes a while to navigate through that. Just like the EMR, Alicia, you had

(47:00):
mentioned one, you know, earlier, when we were talking about a diagnosis in the care: the EMR was set up to make the physician's life easier so that they can treat their patients well. In the one tool that I was, uh, or, I'm sorry, the one case that I was involved in, with one of these attorneys that I do work with, it was a $1.4 million payback, just because of one

(47:25):
diagnosis, uh, attached behind the scenes to a bunch of lab tests. So tens of thousands of lab tests were billed with, with the wrong diagnosis. They were paid, and they kept, you know, getting paid. No one knew where this was coming from. And all of a sudden, one doctor said, well, my patient doesn't have that diagnosis, doesn't have diabetes. Why is it on here?

(47:46):
And then, when we uncovered it all, it came down to a, a very smart computer system, but it was so buried within that system, it took a very long time to even get the data out and analyze it to reach that $1.4 million payback. So, uh, the more complex the tools get, the harder it is going to be for us to, to make sure that we're following the money appropriately. Alicia, I know you were just gonna say something. I'm sorry, I...
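The kind of digging Dennis describes, tracing a diagnosis that a system attached behind the scenes, can start with a simple cross-check of billed diagnoses against what is actually charted for the patient. A minimal sketch; the field names and data shapes are assumptions for illustration, not a reconstruction of the actual case:

```python
# Flag claims whose billed diagnosis never appears on the patient's
# charted problem list -- the pattern behind the $1.4M lab-test payback.
def flag_mismatched_claims(claims, problem_lists):
    """claims: list of dicts with claim_id, patient_id, dx_code.
    problem_lists: dict of patient_id -> set of charted dx codes."""
    return [c["claim_id"] for c in claims
            if c["dx_code"] not in problem_lists.get(c["patient_id"], set())]

claims = [
    {"claim_id": "C1", "patient_id": "P1", "dx_code": "E11.9"},  # diabetes dx
    {"claim_id": "C2", "patient_id": "P2", "dx_code": "E11.9"},
]
problem_lists = {"P1": {"I10"}, "P2": {"E11.9"}}  # P1 has no diabetes charted
print(flag_mismatched_claims(claims, problem_lists))  # ['C1']
```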

Speaker 4 (48:12):
No, yeah, and I was, too. I, I, I think just, you know, one final thought from my perspective: you know, we, we talk about fraud, waste, abuse, um, in the industry, and I think with everything good, for all of us who've been around a really long time, we see bad actors, right? So with everything good, the bad actors are gonna come out of the woodwork. But, you know, most

(48:35):
providers are, you know, in healthcare for the right reason. They hear things like fraud, waste, abuse, and then, you know, instantly there's a little wall, a wall goes up, and it's like, you know, well, that doesn't apply to me, because I'm not a fraudster; I don't have bad intentions. But in my world of training providers on compliance, it's not just fraud, waste, abuse; it's about errors. So I think

(48:58):
we still have to be mindful to do our due diligence, because, you know, yeah, even if you're not committing fraud, you know, intentionally doing things wrong, and, you know, you're not trying to be wasteful or abusive, we're still subject, with all the, you know, complexities in our day-to-day world, to errors. And I think, you know, I, I think everyone

(49:19):
has to come at this with an open mind: that, you know, it's not just about intentional fraud and abuse, it's also about the errors that can occur. So that's my final, my final thought.

Speaker 3 (49:32):
Thanks, AHLA, for giving us this opportunity. And, uh, let us know how you feel, guys and girls. We're excited to see where this, where the frontier's going.

Speaker 2 (49:47):
Thank you for listening. If you enjoyed this episode, be sure to subscribe to AHLA's Speaking of Health Law wherever you get your podcasts. To learn more about AHLA and the educational resources available to the health law community, visit americanhealthlaw.org.