
October 13, 2025 · 20 mins

In this episode, Kanar Kokoy, Founder and CEO of Chirok Health, shares insights on responsible AI adoption in healthcare, emphasizing governance, human oversight, and cultural change as key to driving sustainable impact in revenue cycle management.

This episode is sponsored by Chirok Health.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Hi, everyone. This is Brian Zimmerman with Becker's Healthcare. Thank you so much for tuning in to the Becker's Healthcare podcast. Today, we're going to talk about the path to responsible AI adoption in revenue cycle management and how health systems can balance innovation, governance, and change management to drive real impact. Joining me for today's discussion is Kanar Kokoy, founder and CEO of Chirok Health. Kanar, thank

(00:21):
you so much for being here today.
Thank you for having me.
And to get us going here, and to help listeners appreciate your perspective, can you share a bit about yourself and your work in health care?
Sure. Thank you. So, I'm Kanar Kokoy, obviously, founder and CEO of Chirok Health. Really, my work in health care

(00:44):
brings together the operational and the technical sides to create real, lasting solutions. That's what I've seen myself doing in the past twenty-five, thirty years in this industry. I've seen myself as an innovator and a solution maker, someone who helps organizations strengthen their revenue while improving

(01:05):
the way care is delivered. I have a strong background on the technology side as well, where I've designed and applied tools that improve accuracy, efficiency, and financial performance. At the same time, having an understanding of the operational challenges that many health care organizations face every day

(01:26):
has been a crucial aspect of my experience, and, you know, of understanding that with many of the health care organizations I work with today. And that has allowed me to bridge the gap between strategy and execution. One of the most important things that matters

(01:46):
to me is really impact. I truly believe provider burnout is one of the biggest issues in the health care industry. And my focus in the past several years has been on building solutions that ease that administrative and operational burden so providers can really focus on patient care. And,

(02:07):
lastly, really, innovation for me is not just about technology. It is about creating a system that is financially strong, operationally sound, and supportive of the people who deliver the care.
Yeah, some great points there. And being financially sound, too, is part of what makes the rest of

(02:28):
that mission you laid out possible. Right, Kanar?
Correct. Yeah. But it's very important to have a lot of those initiatives together.
Yeah. Mhmm. And we know there are some numbers out there already. And, of course, we talk about AI and health care all the time now. But I believe the stat I have in front of me is over 70% of health systems

(02:48):
have already deployed AI pilots or full solutions in areas like finance, RCM, or clinical care. From your perspective, then, what is the right pace of AI adoption in RCM? Specifically, what risks are health systems facing as they try to do this work? And what are the risks, I guess, of going too fast or too slow? How do you find that sort of happy medium

(03:09):
there where you're going at the right pace?
Yeah. That's a really great question. So, you know, when we talk about the pace of AI adoption, especially in revenue cycle, I think the key is balance. Too fast, and we expose ourselves to risk around accuracy, governance, and unintended consequences.

(03:30):
And if you go too slow, I believe we risk being left behind, especially as, I'm sure you guys are seeing, many of the private payers are already leveraging AI aggressively, and often to regress reimbursement to the mean. So, in my opinion, the right pace

(03:50):
isn't really about speed for speed's sake. It's more about how thoughtful the adoption has been, with guardrails and a clear strategy. So I just want to, you know, break down a few key points on what I mean by guardrails and strategy. The very first and most important thing is the regulatory

(04:11):
and audit environment for any AI adoption. Because right now, we are in a situation where rules are inconsistently enforced and audits are increasing on the payer side. What this means is that many health systems are under more scrutiny than ever.

(04:32):
So if we implement AI without the right controls, we could end up amplifying errors at scale. On the other hand, if you hold back completely, you'll be outpaced by payers who are using the technology to their advantage, which, you know, essentially puts providers at risk. The second is how we respond.

(04:54):
And that is really through innovation and technology development, or through, you know, lobbying and political advocacy. In my opinion, I think it's both, honestly. The third is the question of who you're going to partner with. And this is going to be really important, because do you go with big players or smaller

(05:16):
players? You know, obviously, if you go with big companies, those guys bring scale and resources, but the smaller companies usually bring agility and a laser focus on solving specific problems. So, in my opinion, it's not really so much about the size. It's more about the governance. Do you have the right structure

(05:37):
in place to evaluate, monitor, and hold those partners accountable? The fourth thing is testing and transparency, and this is where the biggest gap is right now: there is a lack of standardization and frameworks around testing. Many of the AI models I have personally tested

(05:59):
against one another in the past couple of years, especially. Everyone wants data, and, obviously, not much has been shared. As a result, a lot of health systems' testing tools are in silos, or they don't have any visibility into how

(06:19):
a solution compares more broadly. And then the last thing, which brings me to the most important, is the human in the loop. Unfortunately, you know, or fortunately, I should say, AI has incredible potential. But in health care, this is where the unfortunate part comes in, especially in RCM.

(06:41):
We cannot take the human out of that process. I do see many AI technologies, you know, claiming that they'll go 100% tech-proof with no human oversight. You know, this just cannot really function within

(07:02):
revenue cycle. Whether it's coding, documentation, or reimbursement decisions, there has to be a layer of human expertise, oversight, and judgment. Otherwise, the risk of errors, bias, and unintended financial consequences that comes with that is just too high. I think the right future is a collaborative

(07:23):
one. AI does the heavy lifting, but the human guides, reviews, and ensures the output is safe, accurate, and aligned with both financial and compliance requirements. And, again, going back to the payers creating more scrutiny around audits and increasing

(07:44):
those audits, this is where the human in the loop becomes really important. So, just to sum up, the right pace in RCM, in my opinion, is a measured one: fast enough to stay competitive, but careful enough to ensure accuracy and fairness,

(08:06):
and making sure there is a balanced approach between the AI and the human in the loop, so that the outcome is really successful.
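The collaborative model described here, where AI does the heavy lifting and a human reviews before anything goes out, is often implemented as a confidence-based review queue. Below is a minimal sketch in Python; the function name, threshold value, and claim fields are hypothetical illustrations, not any specific vendor's API.

```python
# Minimal human-in-the-loop routing sketch for AI-assisted claim coding.
# The threshold and field names are illustrative assumptions.

REVIEW_THRESHOLD = 0.90  # below this, a human coder must review

def route_claim(claim_id, ai_code, ai_confidence, flagged_for_audit=False):
    """Decide whether an AI-suggested code can proceed or needs human review.

    Anything low-confidence or audit-flagged goes to the human queue;
    nothing bypasses review on those paths.
    """
    if flagged_for_audit or ai_confidence < REVIEW_THRESHOLD:
        return {"claim_id": claim_id, "code": ai_code, "status": "human_review"}
    return {"claim_id": claim_id, "code": ai_code, "status": "auto_approved"}

# Example: a high-confidence suggestion passes; a marginal one is queued.
print(route_claim("C-1001", "99214", 0.97))
print(route_claim("C-1002", "99215", 0.62))
```

The design point matches the conversation: the guardrail is structural, so the machine can never silently approve the cases that most need judgment.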
Yeah, some really important points there. And the human in the loop, clearly, you've got to have that, because the stakes are so high. But

(08:26):
Yeah. I want to come back to what you touched on there in terms of governance, because that, to me, when I talk to folks, is where many, many are struggling, and I think that bears out in some of the information we have on governance. So we've seen large-scale adoption. But, really, the number I have in front of me is 17%. Only 17% of organizations

(08:47):
report having a mature governance structure. So can you speak to that gap, what it really reveals about how organizations are approaching AI, and what governance practices should be prioritized? Why is this so difficult? And this might be a place for you to talk about the importance of agility in finding the right partner.
Yeah. Absolutely.

(09:08):
Even though AI adoption is happening at scale, it was surprising, or it is surprising, to hear that only 17% of organizations say they have a mature governance structure. And, really, to me, this shows that a lot of organizations are jumping in enthusiastically, but without the framework needed to manage risk

(09:30):
and ensure sustainability. Obviously, without proper governance, they're exposed to what I call the AI bubble. You know, many of the companies providing AI solutions today may not exist tomorrow. So the question for, you know, leadership to think about is whether they're relying on them without

(09:52):
safeguards, because you could suddenly lose critical capabilities. You know, another common issue is resilience, the operational resilience. Many organizations don't have the archiving, the vaulting, or the data portability processes set up. So what this means is, if the AI vendor goes out of business or

(10:14):
their model changes, you really have no way of transferring that experience or that data to a newer system. So, essentially, you're back to square one. You know, governance is not just about formality. It's about protecting your investment, making sure the AI continues to deliver value over time, and preserving the operational

(10:35):
knowledge.
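The archiving and data-portability gap described here can be addressed by logging every AI decision in a vendor-neutral format the organization itself owns, so the operational history survives a vendor change. A hedged sketch follows; all field names are assumptions for illustration.

```python
# Sketch of an organization-owned, vendor-neutral AI decision archive.
import json
from datetime import datetime, timezone

def archive_decision(archive, vendor, model_version, input_ref, output, reviewer=None):
    """Append one AI decision to a plain-data log the organization controls.

    Recording vendor and model version alongside the output keeps the
    record interpretable even after the vendor or the model disappears.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "vendor": vendor,
        "model_version": model_version,
        "input_ref": input_ref,  # a pointer to source data, not the data itself
        "output": output,
        "human_reviewer": reviewer,
    }
    archive.append(record)
    return record

archive = []
archive_decision(archive, "vendor_a", "2.1", "claim:C-1001", {"code": "99214"})
# Portability check: the archive serializes to plain JSON that any
# successor system can read back.
print(json.dumps(archive, indent=2))
```

Because the log is plain JSON rather than a vendor database, switching vendors means pointing a new system at the same archive instead of starting from square one.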
A strong governance framework is what really separates the organizations that can scale AI safely from those that risk disruption every time a vendor changes or disappears. And so, in my opinion, having a balance between both large and small companies, and creating that

(10:56):
agility, perhaps with some of the smaller companies bringing focus on some of the smaller solutions while the larger companies are at a scale to deliver at a high level, might, you know, be the right balance. And at the end of the day, you know, the AI integration

(11:17):
is just as much about people as it is about technology. And, you know, roles like training and management and oversight and governance, all of that has a big impact as that decision is being made. So, you know, while that number seems to be

(11:37):
low, even though, you know, AI adoption, like I said, is happening at a large scale, it really means that, you know, we just need to take a slower pace in making sure that the governance and the structure are set up right and that, you know, the communication and initiative

(11:58):
aspects are in place for each area.
For sure.
And I want to get back also to the human-in-the-loop comments. Right? Because that's so important and obviously crucial in your earlier comments, but let's zero in and talk about those people, the humans in the loop.
So what role does training and change management

(12:19):
play in ensuring both human teams and AI models are working to improve workflows and the patient experience? Can you expand on that a little bit, about how you keep the humans in the loop but also give them the tools and the training necessary to be effective?
Yeah. I mean, one of the things that I have heard over the past couple of years especially is, you know, many health care

(12:40):
organizations talk about information: taking information, rendering a judgment, then communicating that recommendation to the patient. You know? And, obviously, there are a lot of tools that have been developed around that. I believe, in the health care world that we're living in, everything really comes down to one thing, which

(13:00):
is taking information, rendering a judgment, and communicating that recommendation back to the patient. That's the core of what providers do every day. AI and other tools can help, but adopting them requires more than just technology. It requires cultural change. For clinicians, it's about trusting and integrating these

(13:21):
tools into their workflow without feeling like the machine is replacing their judgment. Unfortunately, this is one that I see: just in the past two months, I've seen three different technology companies indicating, oh, you know, the machine will be able to predict what the diagnoses are, and here's how

(13:44):
you have to advise the patient. And, in my opinion, if a physician can be replaced by AI, then that is not a good physician. You know? Because I do believe that patient care is just something that cannot be replaced by technology. You can develop technology to enhance patient care, but replacing that,

(14:06):
and replacing their clinical judgment, is a tough call. And, you know, you have other folks, like administrators, who need to understand how AI can improve their operations while ensuring the safety, compliance, and quality aspects. So, you know, at the end of the day, one of the biggest challenges is that

(14:26):
the machine has to be taught tone. It has to be taught context and culture. AI isn't naturally context-aware. So this is why the human in the loop is so important: it needs that continuous feedback and iteration. For example, with one of the tools that we have worked with, the initial output didn't meet

(14:49):
the client's expectations. You know, they continuously saw that same pattern after they implemented the technology. They needed human expertise to come in and review the technology, audit it, and see how well it was outputting.

(15:10):
And based on those findings, we identified that there was quite a large number of conditions being missed by the technology. And in this particular example, it was a technology developed more on the value-based side, where it predicts diagnoses and resurfaces

(15:31):
diagnoses at the time of care. And, you know, at the end of the day, the outcome was: you really need to have the human on top of the machine, to be that overlay, to continuously train on what the machine is outputting, to cross-verify, and to make sure that information, that data, is accurate.

(15:52):
Is the machine still at the same pace as it was when we first engaged with this customer? Absolutely not. In my opinion, the machine has become 10 times smarter, but it needs that continuous feedback from the human in the loop and the human overlay to ensure the output is effective.
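The audit step described in this example, where human experts compare what the tool surfaced against what they actually found, comes down to measuring missed conditions and feeding that gap back into the model. A simplified sketch; the ICD-10-style codes and data shapes are illustrative only.

```python
# Sketch of a human-audit feedback loop for an AI diagnosis-surfacing tool.

def audit_miss_rate(ai_findings, human_findings):
    """Compare AI-surfaced conditions against a human auditor's findings.

    Returns the conditions the tool missed and the miss rate, which is
    the signal fed back to retrain or reconfigure the model.
    """
    ai = set(ai_findings)
    human = set(human_findings)
    missed = human - ai  # conditions the human found but the AI did not
    miss_rate = len(missed) / len(human) if human else 0.0
    return {"missed": sorted(missed), "miss_rate": round(miss_rate, 3)}

# Example: the auditor documents four conditions; the tool surfaced three.
result = audit_miss_rate(
    ai_findings=["E11.9", "I10", "N18.3"],
    human_findings=["E11.9", "I10", "N18.3", "J44.9"],
)
print(result)  # the missed condition goes back into the feedback loop
```

Run periodically, a metric like this is what lets the "human overlay" show whether the machine is actually getting smarter over time rather than asserting it.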

(16:13):
And, you know, again, implementing AI is a responsibility which requires constant validation. And, you know, most often, you need root cause analysis when something feels off, and you always want to ensure you're not breaking any law or regulatory requirement.

(16:33):
So, you know, AI adoption isn't just about building the smarter model. It is really about embedding it into the right culture, teaching it context, and continuously refining it to support the human decision.
Yeah. And to your point, yes, the technology has gotten so much better, but perhaps part of the reason that technology has

(16:54):
gotten so much better is because that human is in the loop. The human is in the loop, and the culture is helping feed and improve the technology. Correct?
Yeah. Absolutely. And more and more I'm seeing, which I am glad to see, that more of the AI technology companies are coming to this consensus: I need to have the human in

(17:14):
the loop. I need to have the expertise. And we should all be thinking about that. Right? I think we should not be promoting this idea that, you know, the technology is self-aware, that it can make decisions and just be implemented without worrying about it. I think having that type of thought process can

(17:34):
be a bit scary, just considering all this scrutiny and the payer audits that are increasing day by day. So
Yeah. Completely. Kanar, we're just about at time, but I want to ask you one final question. Is there anything we didn't touch on that you want to say? Or maybe there's something you want to reemphasize for listeners before we let you go. What would you like to share to close

(17:55):
out here?
My final thought is really this: I've just been a big advocate of coaching, mentoring, and advising, and even assisting all of my clients in making sure that they understand that, you know, AI in health care is not just a technology problem. It is really fundamentally

(18:16):
about people, process, and culture. Oftentimes, we focus on building the tools, but the real work is in how they are adopted, how they're integrated and governed. And really, without that strong governance, human oversight, and ongoing feedback, no matter how advanced a tool you've developed,

(18:36):
it will not deliver any sustainable value.
As part of Chirok, we've actually tested this: so far, we have conducted audits and verification of over 30 different AI technologies, from small to large,

(18:57):
whether on the revenue cycle side or focused on the value-based side, and we have come to the same consistent output. My experience goes back all the way to when we had computer-assisted coding and NLP. And still today, we still have the human in the loop
and the oversight for that because at the

(19:19):
end of the day, it is a technology where you have to make sure that the human is kept in the loop. AI can accelerate workflows. It can improve accuracy. It can reduce administrative burden. But it cannot replace judgment. It cannot replace empathy. It cannot replace context.

(19:40):
So, the best lesson from everything that I have worked with in the past several years is really that the best outcome comes when the technology amplifies human expertise rather than replaces it.
That's a great place to land the conversation. Thank you so much, Kanar, for coming on the podcast today.

(20:01):
Yeah. Thank you for having me. Really appreciate your time as well.
I want to thank our podcast sponsor, Chirok Health. You can tune in to more podcasts from Becker's Healthcare by visiting our podcast page at beckershospitalreview.com.