
November 14, 2025 53 mins

As health care organizations rapidly adopt advanced technologies, including artificial intelligence (AI), they face complex challenges around health care delivery and accountability. Christi Grimm, Managing Director, BDO, and Julie Taitsman, Managing Director, BDO, discuss how AI is showing up in clinical care and the business of health care, from helping physicians manage information to transforming the revenue cycle process, and how technology is supporting government efforts to protect public funds, detect risks, and promote transparency. Christi is the former Inspector General, U.S. Department of Health and Human Services (HHS), and Julie is the former Chief Medical Officer, HHS Office of the Inspector General. Sponsored by BDO.

Watch this episode: https://www.youtube.com/watch?v=oOHMEoTTvGk

Learn more about BDO: https://www.bdo.com/ 

Essential Legal Updates, Now in Audio

AHLA's popular Health Law Daily email newsletter is now a daily podcast, exclusively for AHLA Premium members. Get all your health law news from the major media outlets on this podcast! To subscribe and add this private podcast feed to your podcast app, go to americanhealthlaw.org/dailypodcast.

Stay At the Forefront of Health Legal Education

Learn more about AHLA and the educational resources available to the health law community at https://www.americanhealthlaw.org/.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_00 (00:04):
This episode of AHLA Speaking of Health Law is sponsored by BDO.
For more information, visit BDO.com.

SPEAKER_02 (00:16):
Hello, I'm Julie Taitsman, a managing director in the healthcare forensics group at BDO. And I recently retired from the United States Department of Health and Human Services, Office of Inspector General, where I was Chief Medical Officer. And I am here today with former Inspector General, the Honorable Christi Grimm.

(00:37):
Christi.

SPEAKER_01 (00:39):
Thank you, Julie.
Yes, I am Christi Grimm, and I am also a managing director at BDO. And most recently, until January, I was the Inspector General for the United States Department of Health and Human Services, and I ran the OIG for a grand total of five years.

(01:00):
And I want to thank everyone for joining us.
Today we are going to talk about artificial intelligence in healthcare and how it is already reshaping how care is delivered and how accountability works behind the scenes. In the first half of the conversation, Julie is going to walk us through how AI is showing up in clinical care and

(01:23):
in the business of healthcare, from helping physicians manage information to transforming the revenue cycle process. And then we'll turn to the oversight side of the story: how technology, including artificial intelligence, is supporting government efforts to protect public funds, to detect risk, and to promote transparency.

(01:45):
And to be clear, this isn't a world where algorithms are running audits on their own.
Across both halves of the conversation, Julie and I are

(02:07):
going to continue to come back to the same themes: how technology can improve performance, why human judgment still matters, and what good governance looks like in this new environment.
And so with that, Dr. Taitsman, I turn it to you.

SPEAKER_02 (02:23):
Thank you, Christi.
And thank you to AHLA and to BDO for making this podcast happen. And it is such a timely moment for us to be doing a podcast on AI and healthcare. Christi and I are recording this in November of 2025. And just last month in the New England Journal of Medicine, the Case Records of the Massachusetts General Hospital

(02:44):
had an AI chatbot be the guest diagnostician for the first time.
Now, this is a special section of the New England Journal that was founded by Dr. Cabot a little over a hundred years ago. And how it works is a physician is invited to assess a mystery case. And you know you've arrived in medicine when you're invited to

(03:05):
participate in this special section.
And in October of 2025, they invited Dr. Cabot, capital C-A, capital B-O-T, to be the guest diagnostician. And not only did Dr. Cabot arrive at the correct diagnosis, but the reasoning

(03:25):
that Dr. Cabot presented was so clear and so rationally organized to reach the diagnosis that it was truly a milestone.
So, Christi, before we dive into AI and healthcare, let me just ask you: how do you use AI?

SPEAKER_01 (03:44):
I use it more now than I did even as recently as last November. I use it as an assistant. You know, before the most recent updates to AI and things like ChatGPT 5,

(04:06):
I was worried about accuracy issues, and I used it really for basic things: scheduling, proofreading something that I'd already written, winnowing options for a choice that I needed to make. But now, through testing and feeding and teaching, I'm using it in more comprehensive ways: to collect, to analyze,

(04:29):
to synthesize topic-specific information for me to review and to potentially use. And it's great. I've shared with people that it's like having a work partner. I still do a lot of checking, though, to make sure the information is correct and kept in the right context.

(04:49):
But I use it daily throughout the day.
Julie, how do you use AI?

SPEAKER_02 (04:54):
I have to bifurcate that answer, because I use it very differently in my personal life versus my professional life. In my personal life, I don't really worry about my data. I use the public systems, I use it mostly passively, and I accept every cookie. I love that the machine learning is tailoring what ads to send to me, that my Google feed is going to show me, you know, pictures

(05:16):
of dogs being adopted, or the next period drama I might like because I enjoy Downton Abbey. I enjoy that it's learning from my past use, and those algorithms are working really well to show me content that I want to see.
In my professional life, it's completely different. I am much more protective of the data.

(05:37):
I only use private AI systems. As you know, at BDO, instead of ChatGPT we have Chat BDO, an internal system, because I'm very, very conscious that the data that we use have to be protected. And there's always a fear that if you're using a public system, they can aggregate that information.

(05:59):
It can gather information that can move the market, or, by aggregating a whole bunch of public information, they can reverse engineer into proprietary or confidential or even classified information. So in my private life, I use it more passively and for more fun things. And in my professional life, I am very, very guarded and protective

(06:22):
of data security.

SPEAKER_01 (06:24):
All good points. Julie, bring out your clinician side, your physician side. Tell us how AI is being used in healthcare delivery and in the business behind healthcare.

SPEAKER_02 (06:38):
Sure.
We can go into more details in a bit, but I would lump it into two big buckets, or maybe three, or two and a half buckets. First is patient care, in terms of making a plan, a diagnosis, and a treatment. The second is revenue cycle management. That's the business behind healthcare: the front-end
(07:00):
scheduling, the prior authorization, submitting bills, appealing denials. And then the third, or overlapping, bucket is documentation, because documentation of the clinical encounter is important both in terms of delivering quality patient care and also making sure that providers get

(07:21):
paid accurately and payers pay accurately in the revenue cycle management aspect.

SPEAKER_01 (07:26):
Julie, AI has been around really for quite a long time. When you think about some of its early, rudimentary uses in healthcare, what is AI already good at?

SPEAKER_02 (07:41):
Sure, there are some things that AI is already super good at, possibly even better than an average clinician. AI is great at assimilating huge volumes of information quickly. So think of taking a voluminous medical record, reviewing that chart instantaneously, identifying all of the

(08:02):
medications that that patient is taking or being prescribed to take, and identifying drug-drug interactions and contraindications.
Another thing AI is really good at is pattern recognition. And that can be applied to visual-type patterns: for radiology, reading an MRI; for pathology, reading a tissue

(08:23):
sample. AI has been proven to be pretty accurate on that. And again, you know, this is an enduring podcast, so when folks listen to this a year or two from now, AI just keeps improving in that regard.
One thing I will note about AI is that AI cannot replace physicians'

(08:48):
medical judgment. It can be a tool to enhance medical judgment. And the AMA, the American Medical Association, is trying to rebrand AI in a way: instead of having AI stand for artificial intelligence, they're promoting that it should stand for augmented intelligence, to make the point that AI is a tool that human physicians use, not a tool that will replace human

(09:11):
physicians.
And we see that in other fields as well, trying to rebrand AI to make it friendlier, more accessible. I stopped reading the Washington Post, the paper version, and now I listen to it while I walk my dog. And just last week, instead of one voice saying this story is read by an AI-generated voice, the AI voices started introducing themselves.

(09:34):
Now it's: I'm Josh, the AI voice, and I'll be reading your story. I'm Chloe, the AI voice. And we see that in marketing in general, to make things seem more virtuous, less scary. Like, I go to what I used to call Dunkin' Donuts in the morning, which is now rebranded as Dunkin'. Either way, I'm still getting a French cruller and a Boston

(09:54):
cream, but I feel more virtuous about it, going to Dunkin', or going to KFC instead of Kentucky Fried Chicken. So I think there is a movement to try and make AI seem a little bit more accessible and less frightening.
And then one particular thing I want to talk about, about how

(10:15):
some clinicians are using AI, is ambient listening technology. Basically, how that works is you can turn on this ambient listening system when a physician or nurse, and I apologize, I know physicians and nurses don't like being called providers, but sometimes I will say provider to try and

(10:37):
be more inclusive of all types of healthcare professionals. So when providers are meeting with a patient, the ambient listening can create a record of that encounter to be used for the medical record documentation, but it can also go beyond that and start working on a differential diagnosis and

(10:59):
treatment plan.
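The chart-review use case Julie describes, scanning a full medication list and flagging drug-drug interactions for a clinician to act on, can be sketched in a few lines. This is an illustrative toy, not a clinical tool: the interaction table and drug names are hypothetical placeholders, and a real system would draw on a curated pharmacology database and keep the clinician in the loop.

```python
# Toy drug-drug interaction screen: scan a medication list against a
# small (hypothetical) table of known interacting pairs and surface
# flags for human review. Real systems use curated pharmacology data.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"lisinopril", "spironolactone"}): "hyperkalemia risk",
}

def flag_interactions(med_list):
    """Return (pair, warning) tuples for every interacting pair found."""
    meds = [m.lower() for m in med_list]
    flags = []
    for i, a in enumerate(meds):
        for b in meds[i + 1:]:
            warning = INTERACTIONS.get(frozenset({a, b}))
            if warning:
                flags.append((tuple(sorted((a, b))), warning))
    return flags

# The tool only flags; a clinician decides what to do with the flags.
print(flag_interactions(["Warfarin", "Metformin", "Aspirin"]))
# [(('aspirin', 'warfarin'), 'increased bleeding risk')]
```

The design choice mirrors the episode's theme: the software surfaces candidates instantly across an arbitrarily long medication list, and the human exercises the medical judgment.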

SPEAKER_01 (11:02):
So I imagine that ambient listening can save a lot of time, can it?

SPEAKER_02 (11:09):
Well, I would say maybe. A lot of times we make the connection between electronic health records and AI, and if the past is prologue, well, electronic health records promised to save time and improve care. How's that working? I mean, most physicians now say they spend more time on

(11:31):
documentation, more time on paperwork, more time on record keeping than ever before. So I'm optimistic that someday the promise of time saving will occur, but maybe not quite yet.
But there are some other positives. I remember when we first rolled out electronic health records,

(11:52):
the provider had to, you know, turn their back to the patient, pull out a keyboard, hunch over, and start typing, before we had the carts so that you could at least still maintain eye contact with your patient. So even if it doesn't save time, some potential might be that you can focus on talking to the patient, make eye contact with

(12:15):
the patient, because you're not turning your back to type at a monitor.
But what is super important, though, is to remember that the physician, the provider, has full responsibility for what goes into the record and an obligation to review and verify

(12:37):
what the ambient listening generated, to make sure it's accurate, because that's, you know, a non-delegable responsibility: to ensure that the note, the record that you sign and certify, is accurate.
And then we worried, with electronic health records, about copy and paste and note bloat, that the records could balloon.

(12:59):
You could see that potential happening with ambient listening as well.
And then one other thing to think about with note bloat is that the provider is responsible for everything that's in the record. And I remember back when I was a medical student in the 1990s, we had paper records, and sometimes the

(13:22):
medical students would spend so long writing their history and physical that instead of writing it directly in the chart, you'd take the pages out, walk around with a clipboard, write the note, and then insert it later. And I had one attending physician who said, you know what, I'd rather you didn't put that in the record. You can just write your extensive history and physical

(13:45):
and just keep that for educational purposes, because you don't need a medical student history and physical. And his reasoning was that at the end of a note for a history and physical, you write your differential diagnosis, and you consider every possible diagnosis that the patient could have. And you start with the most likely, and then you work your way down to the really uncommon things, what in

(14:05):
medicine we call zebras.
And he had had a malpractice case where he hadn't read the medical student note, and number nine on their differential diagnosis ended up being what the patient actually had. And he was held responsible: well, why didn't you consider that condition? He said, because it was really rare. But

(14:28):
your medical student considered it.
And so you could see the same thing happening, where the artificial intelligence creates a differential diagnosis that is so much more extensive than what a busy clinician would actually include.
And then, my last point on saving time: there's also the question of whether you have to disclose to the patient that you're using it.

(14:48):
So for true informed consent, that might take a little while: explaining to certain patients exactly what the ambient listening is and getting approval to do it.

SPEAKER_01 (14:56):
Interesting point about, you know, seeing the forest for all of the trees with ambient listening and all of the additional data that is potentially being added. So, Julie, it does seem like the primary use cases and advancements for physicians, in their use of AI, are as a

(15:18):
physician assistant. As I described at the beginning, I use it like an assistant, helping to manage all the data that's available. So let's get into how AI can help with the administrative side of things, like revenue cycle management.
Sure.

SPEAKER_02 (15:34):
It might be a little bit more exciting and futuristic to talk about how AI can be used for direct patient care, and that potential keeps growing. But for the here and now, I would say that the more immediate use is for administrative purposes, and it can be used on both sides: by the payer, who's processing

(15:57):
claims, authorizing care, denying claims (and we're seeing a lot more denials), and by the healthcare providers, the hospitals, the physicians. And some estimates say that about half of providers right now are using AI in some way, maybe to help with appeals, maybe to help with documentation. But again, it's so important to realize that AI is just a tool,

(16:19):
and you always need to keep a person involved and rely on the person's professional judgment.
With the rise of managed care, utilization controls are even more important. We're seeing more prior authorization requirements. We're seeing payers increasingly using denials as a tool,
(16:39):
maybe even using denials that will ultimately get overturned if the provider appeals. And I've seen some estimates that providers are spending about $20 billion a year now trying to overturn denials. So there's a huge opportunity to use AI to reduce administrative

(17:02):
costs and maybe even get better outcomes.

SPEAKER_01 (17:08):
So do you see the potential for AI use more on the provider side, the payer side, or both?

SPEAKER_02 (17:16):
Definitely both.
And think back to one of my favorite movies from the 1980s, Real Genius, with a very young Val Kilmer. There's that classic scene at the university where the students have stopped coming to class because it's not worth their time. So the students have set up tape recorders on each of their desks, and the teaching assistant walks in, sets up the

(17:39):
projector, hits play, and the professor's lecture is beamed out to the students' recorders. And I can see both sides using AI in a way like that, where the provider is using AI to submit the claim, and the payer is using AI to deny the claim, and then the provider is using

(17:59):
AI to appeal the claim, and neither is programmed to end the conversation. So the cycle runs to infinity. The Washington Post just had a data article on how much energy AI uses: apparently, when you say thank you to your AI and it says you're welcome, that exchange can be measured in terms of how many disposable

(18:21):
bottles of water it's equivalent to. So we need to make sure that the AI is set to, at some point, end the cycle.
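The failure mode Christi and Julie are describing, two automated systems bouncing a claim back and forth forever, is avoided by building an explicit termination condition into the loop. A minimal sketch, where `payer_decision` and `MAX_APPEALS` are hypothetical stand-ins for the payer's adjudication and the provider's appeal policy:

```python
# Sketch of a bounded claim-appeal loop: the provider resubmits a denied
# claim at most MAX_APPEALS times, then escalates to a human instead of
# cycling forever. payer_decision is a hypothetical stand-in for the
# payer's (possibly AI-driven) adjudication logic.
MAX_APPEALS = 3

def adjudicate_claim(claim, payer_decision):
    """Return 'paid' or 'needs human review'; never loops unbounded."""
    for attempt in range(1 + MAX_APPEALS):  # initial submission + appeals
        if payer_decision(claim, attempt) == "approved":
            return "paid"
    # Termination condition: hand off to a person rather than resubmit.
    return "needs human review"

# Even against a payer that denies everything, the loop ends after
# four rounds and routes the claim to a person.
always_deny = lambda claim, attempt: "denied"
print(adjudicate_claim({"cpt": "99213"}, always_deny))  # needs human review
```

The key design point is that the exit path lands on a human, echoing the episode's recurring theme of keeping a person in the loop.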

SPEAKER_01 (18:31):
Well, talk about the bright side of both the insurers and the providers using AI.
Sure.

SPEAKER_02 (18:39):
I would love to see AI used for real-time prior authorization approvals. I mean, reasonable people can disagree about what kind of administrative burden prior authorization entails and when it's appropriate and when it's not. But for a procedure that requires prior authorization, if we could do that in real time, that would help everyone.

(19:03):
In medicine, we call it loss to follow-up: when you recommend something, but the person has to come back at another time, and it just doesn't happen for whatever reason. So think about the example of a medicine that needs to be injected. There are many situations, like with an arthritis medicine, where the patient is there, the doctor's

(19:25):
there, ready to do it, and they say, okay, can you just do it now? And the doctor says, no, we need to get prior authorization from your insurance company, which is a process that takes a couple of weeks. The patient leaves, maybe they get approved, maybe they take another day off work to come back. If that could be done right away, you wouldn't lose the patient to follow-up. It would cost less for the insurance company, and it would

(19:47):
save that future appointment, which the physician could give to a different patient. So it's a win-win-win for that patient, the payer, the provider, and other patients, who would have better access to care.

SPEAKER_01 (20:04):
Okay, so Julie, talk about how it can help with staffing issues and administrative burden, how AI plays a role there.

SPEAKER_02 (20:12):
Sure.
Providers now say that they have more administrative burden than ever. There are staffing issues, tighter margins, more use of prior authorization, and it's not just managed care; other, more traditional types of insurance are also increasingly

(20:33):
using prior authorization and increasing denials. So there's great potential to use AI to streamline. On the provider side, you can use AI to proactively identify which services the insurers are targeting for denials, and then proactively ensure that the documentation is good and hits

(20:57):
the buzzwords and the requirements before you submit, so you get it right the first time, as opposed to on the back end. And the AI can gather data from the chart that's needed to support the request, and, of course, not input it without oversight: gather the data, and then the provider would have to certify that that data is correct and accurate.

(21:21):
And with machine learning, it can even tailor the request to what that particular payer requires and what that particular payer is focusing on. So maybe your staff can focus more on spending time with patients, and the AI can

(21:44):
continue learning and keep getting better.

SPEAKER_01 (21:48):
So tell us, though, about some of the vulnerabilities associated with using AI in that context.

SPEAKER_02 (21:54):
Sure.
Well, machine learning is only as good as the inputs, as the data going in. So it's important to be careful who's programming the AI and also to worry about bias, both intentional bias and unintentional bias.
First, you know, coming from the OIG perspective, we always

(22:16):
said that if you make a system that's giving out money, especially government money, someone out there will figure out how to cheat that system. And, you know, industry players may be looking to increase sales. Now, not in the healthcare context, but I don't know if you saw earlier this week there was a lawsuit against Spotify, that someone was messing with the algorithm to try and

(22:40):
increase the sales for Drake, to suggest that people are listening to more Drake. So the stakes on that are pretty low, right? Who cares if the algorithm is going to give you more Drake? How about when the algorithm is going to give you more opioids?
Now, if you recall, you and I, with our colleague Andrew VanLandingham, published an article in JAMA Internal Medicine

(23:00):
back in 2020 where we discussed a case where the clinical decision support, the CDS, in the electronic health record was manipulated by a drug company that manufactured opioids. And it was manipulated in such a way as to encourage physicians to

(23:21):
prescribe more opioids, more of this particular kind, expanding the indication to more patients, at higher doses, more frequently. So the stakes are really high there, if the algorithms are being influenced by someone with an agenda to push a particular

(23:43):
kind of treatment.

SPEAKER_01 (23:45):
Yes.
Clinical decision support: okay to remind someone about it being flu season, not so great to be, you know, pushing.
Yes.

SPEAKER_02 (23:56):
Right, like what we used to call academic detailing versus commercial detailing, where you're promoting certain kinds of prescribing for proven, good reasons versus certain kinds of prescribing for financial gain.

SPEAKER_01 (24:11):
So, Julie, tell us about unintentional bias: the machine learning depends on what's being fed in.

SPEAKER_02 (24:20):
So there's the organic part, what cases it's seeing. And then there are also efforts to train the AI. On LinkedIn, all the time I get advertisements to see if I want to get paid to review medical cases and train the AI. I think they pay you in Amazon gift cards or something.

(24:41):
I've never tried it. A friend of mine does it, and he says it's kind of fun, but it reminds me of a job I had back in college. I grew up near where the Educational Testing Service, ETS, is. So if you remember taking your GRE on a computer, well, you're welcome. I was in the group that helped them transition from paper tests

(25:01):
to a computerized version. But what's curious about that is, you know, I thought it was the best job in the world. You got $50 a day and a catered lunch to show up and take practice tests. And who did they hire to do it? It was college students who happened to be on break somewhere near Princeton, New Jersey, and whose mother maybe

(25:25):
saw the ad in the Princeton Packet or the Town Topics. And then, lo and behold, you find out the test is biased. Of course it's biased, given who trained it. So I think about the transition to artificial intelligence a lot like the transition from paper to digital.

(25:46):
And it's a tool with a lot of promise, but we have to be careful how we use it.
And let me just give one important warning, a real practical tip to folks listening who might be physicians or hospital administrators who are establishing AI tools for the

(26:07):
medical records. Once you put information in the medical record, it's very hard to get that information out. And it's also important to remember that you need to set up your systems in a way that you're setting up your providers to succeed, not setting them up to fail. And we've seen with electronic health records, with the

(26:27):
autofill, and automatically creating a review of systems, and carry forward, that we see documentation of things that didn't happen in the medical record. And you'll remember this from one of our OIG audits, where we had a patient who came to the emergency room unconscious and

(26:48):
never regained consciousness. But the box was checked in the medical record that the provider had counseled that patient about smoking cessation. Now, it's great to have your protocol be encouraging, you know, a no-wrong-door approach where every time a patient touches the healthcare system you talk about smoking cessation. But when your system is set up, and that was probably an autofill, so that it required the provider to uncheck it when they

(27:10):
didn't do it, when you set it up that way, you're going to have false things in the medical record that the healthcare providers are responsible for. So AI is a powerful tool, but it must be used carefully.

SPEAKER_01 (27:23):
We would be remiss not to talk about cybersecurity and AI in the healthcare context.
Yes.

SPEAKER_02 (27:31):
Yes, the stakes are so high now. When we used to do cybersecurity audits, you know, we'd set a van in the hospital parking lot and see if we could get onto the system. We were only worried about gathering the data, about privacy and hacking into the system to gather the data. But now, with the Internet of Things and the robotic surgeries

(27:54):
and the connected IV drips and the little robots that deliver the medicines, there are so many things beyond just gathering information: hacking in and taking over and causing bad things to happen. The stakes are much higher than listening to a little bit of extra Drake.
Right.

SPEAKER_01 (28:14):
So you earlier talked about bias, and AI equity. How can it be used to promote equity?

SPEAKER_02 (28:22):
Thanks.
I love that question, because, you know, from the OIG perspective, you're always highlighting vulnerabilities. And now I finally get a chance to have a little optimism and have some faith in the future for how some of these technologies will be used to promote equity. So I'll mention two. First of all, scheduling. One of the most important functions of the front-end

(28:45):
revenue cycle management is patient scheduling. You can't access care. I mean, except for the emergency room, where you can walk in, you can't access care without getting an appointment. And we've talked about this. The bane of my existence is calling the pediatrician to get an appointment. You wait on hold. Sometimes their workflow is: tell us what you need and then we'll call you back.

(29:05):
Now, if you're working in a warehouse somewhere, finding a private time to be on hold for 15 minutes and then take a call back, it's hard to get an appointment. It is much more equitable if you can privately, on your own time, access the appointments electronically.
The other is translation services.

(29:26):
I remember as a student, before the CLAS standards, a sometimes awful, awful practice: you might ask a child who's with the patient to translate for you. AI has made translation so much easier. And then, even for people who speak English, physicians, we often use words and language that's hard to understand.

(29:49):
Some patients apparently turn to an AI summary to provide the plan in digestible, accessible language.

SPEAKER_01 (30:00):
So, Julie, what are some key takeaways to mitigate risk?

SPEAKER_02 (30:04):
I would say always keep a human in the loop: augmented intelligence, not replacing human intelligence. And then two risks that we should mention are hallucinations and sycophancy. Hallucination is when the AI makes things up. And in government, at the highest levels, people have fallen for that and included hallucinated citations in

(30:28):
government reports. Sycophancy is a special type, sort of related to hallucination, where it gives you more of what you want. It confirms what you're already saying. So sycophancy is great for those AI friends you see advertised. Sycophancy is super dangerous for an AI therapist, right?

(30:50):
If you're asking the AI friend, should I go to a happy hour? Great for a sycophantic answer. If you're asking the AI therapist, should I harm myself? Incredibly dangerous.
And then the next thing I'll mention is the importance of complying with laws. It's a dynamic field, and there are multiple authorities

(31:12):
that apply, federal and state: the FDA regulating AI as a medical device, HIPAA for data privacy, even the Federal Trade Commission for deceptive claims and unfair practices. And then states may have their own particular laws; on the cutting edge are California, Colorado, and Utah.

(31:34):
But again, I would just say, you know, keep humans in the loop, humans are accountable, and make some efforts to minimize bias. It's not enough to just be neutral; make efforts to keep the technology improving and minimize the bias.

SPEAKER_01 (31:56):
One more question, Dr. Taitsman.
What are a few key questions to ask if you're thinking about adopting a particular AI function?

SPEAKER_02 (32:05):
Sure.
I would say the most basic question is: does it work? And then I would think about: do I have to disclose that I'm using it, and to whom? Do I tell the patients? Do I tell the payers? Can I bill for it? If I'm billing, how? And then, are my data protected? And then finally, is the technology biased?

(32:27):
Is it promoting equity? And how can we make it better? And that's a whole other podcast on just that.

SPEAKER_01 (32:35):
A lot to unpack in that.

SPEAKER_02 (32:37):
But yes, go ahead.
And now, if I could ask you a few questions, let's turn to the government oversight angle. Could you talk a bit about how technology, including AI, is being used for government program oversight?

SPEAKER_01 (32:54):
I'm really glad, Julie, that you phrased that very broadly, because oversight right now, in contrast to what's happening in a clinical setting, isn't at that same point in terms of deployment.

(33:15):
And so it isn't so much about the use of AI as it is about the broader use of technology and data in oversight.
And so while AI is moving very, very fast in clinical environments, government is really still

(33:35):
in the early stages of AI adoption. And the most meaningful advancements have come from analytics platforms: systems that bring together data, tools, services, and people that, you know, do things like support investigative, audit, and monitoring oversight work,

(33:58):
that generate leads by doing things like spotting trends in billing and payments, or in other things like grant drawdowns, that could indicate improper payments or some funny business that might be happening. So think of it more as early detection. When something looks off, the system is then flagging it

(34:23):
for deeper review by a person.
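The lead-generation step Christi describes, a system noticing that a billing pattern looks off and flagging it for a person, is, at its simplest, outlier detection. A toy sketch using a z-score over billing totals; the threshold, provider IDs, and amounts are hypothetical, and real platforms use far richer models:

```python
# Toy early-detection sketch: flag providers whose billing total sits far
# from the group average (z-score above a threshold) for human review.
# Threshold and data are hypothetical; this only generates leads, it
# does not decide anything on its own.
from statistics import mean, stdev

def flag_outliers(totals_by_provider, z_threshold=3.0):
    """Return provider IDs whose billing z-score exceeds the threshold."""
    values = list(totals_by_provider.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:  # all identical: nothing stands out
        return []
    return [pid for pid, total in totals_by_provider.items()
            if (total - mu) / sigma > z_threshold]

billing = {"prov_a": 10_200, "prov_b": 9_800, "prov_c": 10_100,
           "prov_d": 9_900, "prov_e": 98_000}  # prov_e looks off
print(flag_outliers(billing, z_threshold=1.5))  # ['prov_e']
```

As in the episode, the output is a lead for deeper review by a person, not a conclusion.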
Um, it really comes down to datahelping humans decide uh where
to look next.
A good example, and one that's very current, I think, of an analytics platform is the Pandemic Analytics Center of Excellence, or PACE.

(34:45):
That was developed during COVID, and it came out of the CARES Act.
It brought together oversight professionals from across government, across different departments, auditors, investigators, data scientists, and it provided a shared environment for them to analyze data in near real time.

(35:06):
And it let them link information, and that's honestly the most important piece, across programs like the Provider Relief Fund coming out of HHS, unemployment insurance out of Labor, and small business loans coming out of SBA.
With that kind of shared data and shared

(35:29):
platform, you could see patterns that wouldn't have been visible within any single data set.
You could see if somebody was applying for all of these different programs simultaneously, and really have a trigger to be able to monitor what was

(35:52):
going on there.
And that was an enormous step forward in oversight capability.
Its importance truly can't be overstated.
It was very, very important.
Analytics platforms like PACE really are a model for what's possible when you combine the right data, the right

(36:12):
technology, and the right expertise; they can keep pace, pun intended, with fast-moving programs.
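
The cross-program linking described here can be sketched in a few lines. This is a simplified illustration with made-up program names and applicant IDs, not PACE's actual implementation:

```python
# Simplified sketch of cross-program link analysis: flag applicants
# who appear in multiple relief programs at once. Program names and
# applicant IDs are hypothetical, not real PACE data.
from collections import defaultdict

programs = {
    "provider_relief": {"A-100", "A-101", "A-102"},
    "unemployment_insurance": {"A-101", "B-200"},
    "sba_loans": {"A-101", "A-102", "C-300"},
}

# Record which programs each applicant appears in.
appearances = defaultdict(list)
for program, applicants in programs.items():
    for applicant in applicants:
        appearances[applicant].append(program)

# Applicants in two or more programs become leads for human review.
flagged = {a: progs for a, progs in appearances.items() if len(progs) >= 2}
for applicant, progs in sorted(flagged.items()):
    print(applicant, "->", progs)
```

The point of the sketch is the "connecting, not just collecting" idea from the conversation: no single program's data set reveals the pattern, but the intersection does.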
And the next big way government oversight is using technology is through risk modeling.
Some oversight organizations, including the HHS OIG, have

(36:35):
built systems, and these have been in place for quite a long time, frankly, that don't just look backward at what went wrong, but try to estimate where the next vulnerabilities are likely to occur and how big those problems could be.
When I was Inspector General, and you know this, I

(36:56):
consistently spoke about the importance of making data-driven decisions, and how we used a wide range of data to zero in on the highest risk, because we couldn't possibly audit or evaluate everything under the HHS umbrella.
At times there was over $2 trillion

(37:19):
in payments coming out of HHS and hundreds of programs.
You couldn't audit, evaluate, and investigate everything.
And so, certainly for the audits and evaluations, risk models helped us target areas that signaled risk and offered the greatest potential for positive

(37:43):
impact.

SPEAKER_02 (37:44):
Do you mean like trying to anticipate what this season's flu strain will look like?

SPEAKER_01 (37:48):
Exactly.
I do think of it very much as the oversight version of population health.
For both the analytics platform and for risk modeling, the value comes from connecting the data, not just collecting it: bringing together data points that are typically isolated to see if there's a

(38:12):
pattern, to maybe triangulate something.
And in order to do that, you need data.
And HHS OIG has just a lot of data: enrollment data, encounter data, prescription and laboratory data, and even enforcement and compliance data.

(38:33):
And because of that richness, OIG can and does model what normal looks like across millions of providers and flag anomalies and outliers that might signal a problem.
Signal a problem, as in, you need to go behind that and verify whether there actually is one.

(38:54):
But, you know, to detect things like rapid upticks in billing, where a provider's claims volume suddenly spikes in ways that don't match patient volume, or unusually high billing for a service type in one geographic area that doesn't seem to exist anywhere else, or

(39:17):
even provider-to-beneficiary ratios that seem unrealistic, to predict where there's a risk and a need for follow-up action.
It's a clue if there is one hospice for every four beneficiaries in

(39:39):
a county in California.
That's a clue.
You want to follow up on that.
So data-driven risk modeling really does, and did, help inform oversight priorities in deciding where follow-up action may be needed: audits, evaluations, or inspections.
And really, anytime you're cracking open the OIG work

(40:03):
plan, or more likely pulling it up on their website, whenever you see something in the public work plan, it is grounded in some sort of risk analysis or risk modeling.
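
An outlier check like the billing-spike flag described here can be sketched very simply. The claim counts and the threshold below are hypothetical, purely for illustration, not an actual OIG model:

```python
# Simplified sketch of outlier flagging over provider billing volume.
# All numbers and the 3x threshold are hypothetical.
import statistics

# Monthly claim counts per provider (made-up data).
claims = {"P1": 110, "P2": 95, "P3": 105, "P4": 620, "P5": 100}

# Use the median as the baseline for "normal"; unlike the mean, it is
# not pulled upward by the very outlier we are trying to catch.
baseline = statistics.median(claims.values())

# Flag providers billing more than 3x the typical volume. The flag is
# a lead for deeper review by a person, not a finding of wrongdoing.
flagged = [p for p, n in claims.items() if n > 3 * baseline]
print(flagged)
```

The median baseline reflects the "model what normal looks like" idea: the anomaly is defined relative to the population, and a human still has to go behind the flag and verify whether there is actually a problem.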

SPEAKER_02 (40:18):
Yeah, so it sounds like you can use the AI to identify the pattern and flag the anomaly, the deviation from the pattern, but then you need boots on the ground.
Right.
So where else is the technology making an oversight difference?

SPEAKER_01 (40:37):
An area that I want to zero in on is automation and robotic processes, because it doesn't really grab a lot of headlines, but healthcare oversight includes automating routine monitoring tasks, the kinds of things that used to require teams of people running manual checks daily, monthly, or on some other regular

(41:01):
basis.
And some of this automation has been around for a long time.
Things like automated edits.
One example that listeners will be familiar with are rules built into claims processing systems that automatically flag or deny payments that don't meet policy

(41:22):
parameters: duplicate claims, mismatched procedure and diagnosis codes, billing for deceased beneficiaries.
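
These automated edits amount to simple rule checks applied to every claim. The claim fields, code pairs, and function name below are hypothetical, just to show the shape of such rules:

```python
# Simplified sketch of automated claims edits: each rule inspects one
# claim and records a reason string if the claim should be flagged.
# Claim fields and the procedure/diagnosis pairs are hypothetical.
VALID_PAIRS = {("99213", "E11.9"), ("93000", "I10")}

def edit_checks(claim, seen_claim_ids, deceased_ids):
    reasons = []
    if claim["claim_id"] in seen_claim_ids:
        reasons.append("duplicate claim")
    if (claim["procedure"], claim["diagnosis"]) not in VALID_PAIRS:
        reasons.append("procedure/diagnosis mismatch")
    if claim["beneficiary_id"] in deceased_ids:
        reasons.append("billed for deceased beneficiary")
    return reasons

claim = {"claim_id": "C-1", "procedure": "99213",
         "diagnosis": "I10", "beneficiary_id": "B-9"}
print(edit_checks(claim, seen_claim_ids={"C-0"}, deceased_ids={"B-9"}))
```

Because each rule is deterministic and cheap, checks like these can run on every claim as it is processed, which is exactly why this kind of automation predates anything we would call AI.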
Another area is IT and cyber oversight.
You talked about that from a risk perspective, from the clinician side.
But from a program oversight perspective, in IT

(41:48):
and cyber, there is continuous vulnerability scanning of program systems to detect new weaknesses or configuration changes that expose the system to risk.
That is an ongoing process.
And then, looking at compliance monitoring,

(42:08):
there's continuous scanning to identify where there might be compliance risk.
And I'll give a hypothetical example of when this could happen, although some of this is more idea than what's in practice right now.
So when OIG resolves a fraud or kickback case with a

(42:30):
healthcare entity, it sometimes offers what's called a corporate integrity agreement, or CIA.
And that is essentially a deal.
The company stays eligible to bill Medicare and Medicaid by avoiding exclusion from federal healthcare programs, but it agrees to strict compliance monitoring, independent reviews,

(42:51):
board accountability, training, and reporting to OIG.
But every so often a company refuses to participate in a CIA, and OIG publicly deems that healthcare entity high risk and commits to enhanced scrutiny of it.
And this is where automation could really make a difference.

(43:15):
Imagine a set of robotic tools that quietly keep tabs on that high-risk company without having a team do it manually.
For instance, it could be monitoring claims and enrollment.
Bots could be automatically flagging new billing activity from any newly registered affiliate under common

(43:37):
ownership.
It could be doing corporate link analysis, where algorithms cross-reference business registrations and NPI ownership data to see if the company reappears under a different name or tax ID.
It could even be looking across public records: automated searches could watch for new enforcement

(44:01):
actions, bankruptcy filings, or mergers involving that company.
In theory, all of that could be happening passively.
And the same type of automation could even involve alerting OIG if there is a pattern shift.
A lot of potential there.
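
The corporate link analysis step can be sketched as a simple cross-reference over ownership records. The entity names, owners, and tax IDs below are entirely made up for illustration:

```python
# Simplified sketch of corporate link analysis: check whether a
# high-risk entity reappears under a new name, shared owner, or
# shared tax ID. All records here are hypothetical.
high_risk = {"owner": "Acme Holdings", "tax_id": "11-1111111"}

new_registrations = [
    {"name": "Sunrise Clinics LLC", "owner": "Acme Holdings", "tax_id": "22-2222222"},
    {"name": "Valley Care Inc", "owner": "Other Corp", "tax_id": "33-3333333"},
    {"name": "Acme Health Partners", "owner": "Smith Group", "tax_id": "11-1111111"},
]

# Flag any new registration sharing an owner or tax ID with the
# high-risk entity; again, this surfaces a lead for human review.
matches = [
    r["name"] for r in new_registrations
    if r["owner"] == high_risk["owner"] or r["tax_id"] == high_risk["tax_id"]
]
print(matches)
```

In practice the matching would be fuzzier (name variants, address overlap, shared officers), but the passive, ongoing nature of the check is the point being made in the conversation.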

SPEAKER_02 (44:20):
Now, talk a little more about the AI.
From a healthcare oversight perspective, how much AI do you think is really in use?

SPEAKER_01 (44:28):
That's a great question, and one that I get a lot.
And the honest answer is that AI in government oversight is growing, but it's uneven.
Most of what's in use would still fall under the umbrella of what I just talked about, advanced analytics or automation, rather than what we'd consider true AI.

(44:52):
But having said that, there are some real signals of change.
For example, the One Big Beautiful Bill Act specifically directs HHS to invest in AI tools aimed at reducing and recovering improper payments.
And that's a big statement.
Congress is saying: we expect technology to help us be better

(45:15):
stewards of taxpayer funds.
And CMS, the Centers for Medicare and Medicaid Services, announced that it will use machine learning for its risk adjustment data validation audits, its RADV audits, for Medicare Advantage, essentially using AI to help determine whether diagnoses that drive payment are actually supported

(45:38):
by the medical record.
And that is truly a significant application of AI inside the regulatory audit process.

SPEAKER_02 (45:49):
Now, when you were the HHS Inspector General, what were some of the considerations or questions you had about AI use in program oversight?

SPEAKER_01 (46:01):
Certainly.
Starting with the enormous potential for the use of AI, looking at it just from the fraud angle alone, and I think this is where the most potential exists: agencies like HHS OIG have decades of enforcement data that

(46:22):
can be used to teach algorithms what red flags look like.
And because that information spans providers, geographies, and program types, it's a really rich source for AI deployment.
But I thought about how oversight had

(46:45):
to walk before it could run.
And when I was IG, the conversation always kept at the fore the purpose of what we were looking to do, and transparency around it: not just whether AI could flag something, but why it

(47:06):
did, thinking about some of those inherent biases that you were talking about earlier.
Also, whether we could explain the reasoning, and could we validate the output?
Critically important: were the underlying data complete, accurate, timely?
If you look at OIG reports from probably

(47:29):
five, ten years ago, a huge focus was simply on whether the data were there in areas like Medicare Advantage.
And so I thought those questions, which are really seemingly simple questions, were essential to maintaining fairness and public trust.

(47:49):
And it is the most responsible thing to do.
And of course, you've talked about this so much: that human element, and how oversight work really does depend on professional judgment, always.
And that's not specific to the use of automation and risk modeling and AI.

(48:12):
There does always have to be professional judgment.
When a pattern is identified, we still need to test it, to contextualize it, and decide whether there's actually fire behind that smoke.
And it's no different for AI.
The human element still

(48:35):
needs to be there.
And I realize much of what I'm describing is a governance framework, and that will be, I think, an ongoing, iterative process for oversight entities as they deploy AI.

SPEAKER_02 (48:51):
And what can you tell us about OIG's current work plan?
Is OIG zeroing in on the use of AI specifically yet?

SPEAKER_01 (49:04):
AI specifically? Not much.
And that's telling.
If you look at their plan, you won't see many projects focused on AI.
And that makes perfect sense to me, because the rules around AI, how it can or should be used in Medicare and Medicaid,

(49:26):
just don't really exist yet.
As far as I know, there aren't formal and broadly applied standards that define what constitutes a clinically valid algorithm in a coverage, payment, or prior authorization context.
Are you aware of any formal

(49:46):
standards?
I'm not.
And OIG needs to have a law, a rule, or guidance against which it can compare something.
And when that doesn't exist, you're really

(50:06):
at the point where you're maybe doing descriptive work.
And without documentation standards specific to things like algorithmic decision making, standards that say how a payer or provider should record the use of AI when making or justifying a decision,

(50:29):
oversight needs to take that into account.
And we don't yet have guardrails on the fundamentals.
Until those things exist or are better defined, oversight is more in a sort of gray space.
And so out of OIG, I would expect, maybe as they're collecting data, for them to be asking whether AI was used in clinical settings and how, and to explain that algorithm.
I wouldn't expect them to come out with improper payment findings because AI was used, because those rules just aren't really there yet.

SPEAKER_02 (51:11):
Very interesting.
And could you close us out with maybe three key takeaways for healthcare entities?

SPEAKER_01 (51:19):
For healthcare entities: first, build governance before you need it.
Even though regulations on AI and advanced analytics are still forming, that governance shouldn't wait.
Organizations that already document how they design, test, validate, and monitor AI will really be ahead of the game

(51:40):
when that oversight catches up.
Second, transparency builds trust.
Program administrators and oversight entities will be asking you to show your work.
Whether it's AI-assisted documentation, billing automation, or predictive analytics,

(52:01):
anticipate those questions about how those tools reached the conclusions and the actions that were taken.
And then, it can't be said enough, right, Dr.
Taitsman? Oversight needs to keep people in the loop.
So keep people in the loop.

SPEAKER_02 (52:20):
That is terrific advice.
Thank you for sharing those words of wisdom, Inspector General Grimm.
And thank you, Dr.
Taitsman.
This was a delightful conversation.
And thank you all for listening through to the end.
And thank you, BDO and AHLA, for making this podcast possible.

SPEAKER_00 (52:46):
If you enjoyed this episode, be sure to subscribe to AHLA Speaking of Health Law wherever you get your podcasts.
For more information about AHLA and the educational resources available to the health law community, visit americanhealthlaw.org, and stay updated on breaking healthcare industry news from the major media outlets with AHLA's Health Law Daily podcast, exclusively for AHLA Premium members.

(53:09):
To subscribe and add this private podcast feed to your podcast app, go to americanhealthlaw.org/dailypodcast.