March 11, 2026 · 3 mins

In January 2026, the FDA sharpened the line between helpful software and regulated medical devices. If your AI sits inside an EHR, providing "black box" recommendations that a clinician can't independently verify in seconds, you aren't just drifting into a regulatory gray area; you're likely standing outside the "safe zone."

In this episode, we break down the high-stakes intersection of FDA transparency, OIG inducement analysis, and the reality of clinical workflows.

In this episode, we cover:

  • The 2026 FDA Update: Why "independence" is the new metric for non-device CDS.
  • The Transparency Test: If a physician has to call your engineering team to explain a recommendation, you've already lost.
  • OIG & The Anti-Kickback Statute: How "nudging" prescribing behavior creates massive financial liability, regardless of what you call your software.
  • Automation Bias: How "fast and confident" AI leads to clinician reliance that regulators now view as a red flag.
  • The FTC Factor: Why vague disclosures and hidden logic are no longer defensible under consumer protection standards.

Key Takeaway:

Regulators don't care if the tech works; they care if the compliance story holds up. If you cannot prove your recommendations are separated from commercial influence and fully explainable, you are exposed.

Are you ready to defend your AI? Don't wait for an investigator to walk through your door.

Subscribe to the KLF Deep Dive Podcast & Newsletter to navigate these risks before they turn into enforcement problems.

Support the show

www.kulkarnilawfirm.com


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_00 (00:02):
AI inside EHRs feels safe when it's labeled clinical decision support. The FDA's newest guidance makes something uncomfortably clear: that label only protects you if your software behaves exactly the way regulators expect. Most AI systems, they don't do that. If you're building, buying, or deploying AI in healthcare,

(00:25):
subscribe to the KLF Deep Dive Podcast newsletter. I work with FDA-regulated clients every single day, and this is where I break down risks before they turn into enforcement problems.
So in January 2026, the FDA updated its clinical decision support guidance. They drew a sharper line around what qualifies as non-device CDS.

(00:45):
To stay out of device regulation, clinicians must be able to independently understand the basis for every recommendation and not rely primarily on the software.
And the fact is, this sounds reasonable. But now think about most AI inside EHRs. You have black box models, you have adaptive learning, ranked or prioritized outputs, very little explainability.

(01:08):
And that's where companies start drifting out of the safe zone. Could a physician realistically explain how your AI reached a recommendation without calling engineering or the vendor? Now, layer in the Anti-Kickback Statute. Once AI recommendations sit inside clinical workflows, the OIG, the Office of Inspector General, suddenly stops caring

(01:30):
whether the tool is called a CDS or is just plain analytics. They care about inputs. If AI nudges prescribing behavior, if someone benefits financially downstream, inducement analysis starts. Tracking prescribing patterns, engagement, or outcomes only sharpens that focus. And that is the overlap that companies get in trouble for.

(01:51):
FDA looks at reliance and transparency. OIG looks at benefit and intent. AI systems can fail both tests at the same time. The FDA's guidance explicitly calls out automation bias. When software is fast, confident, or embedded in time-pressured workflows, clinicians rely on it more than intended.

(02:12):
And that matters. That matters if your AI ranks treatment options and clinicians follow them by default. I've been a clinician, I've been a pharmacist, and I see where that's coming from. Regulators question whether the system's actually supporting judgment or if it's actually directing it. If your AI changed rankings tomorrow, could you prove that it had nothing to do with commercial intent?

(02:33):
Look at where the actual recommendations came from. You have to be very careful with that. Even outside FDA device rules, the FTC already expects transparency when automated systems materially influence decisions. Hidden logic and vague disclosures do not hold up. When I advise FDA-regulated clients, this is where things break.

(02:53):
The tech works; the compliance story does not. AI inside EHRs can and will improve care, but if your system cannot clearly show how those recommendations are generated, reviewed, and separated from commercial influence, you are exposed. Subscribe to the KLF Deep Dive Podcast and newsletter.

(03:15):
I help FDA-regulated companies navigate exactly these issues before the regulators come knocking. Here's my question for you: would you rather explain your AI system to me now, or would you rather wait until an investigator comes through the door?