Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:16):
Welcome to another episode of Bloomberg Intelligence's Vanguards of Healthcare podcast,
where we speak with the leaders at the forefront of
change in the healthcare industry. My name is Jonathan Palmer
and I'm a healthcare analyst at Bloomberg Intelligence, the in
house research arm of Bloomberg. I'm thrilled to welcome today's guest,
Malinka Walaliyadde, the CEO of AKASA. Prior to co-founding
(00:36):
the company, Malinka was a partner at Andreessen Horowitz, or
as many know it, A sixteen Z, where he
helped build out their healthcare investment team. I'm looking forward
to learning how his time in venture helped shape the
vision for a company pioneering the use of AI to
solve financial and operational complexities in the revenue cycle.
Speaker 2 (00:55):
Welcome to the podcast.
Speaker 3 (00:57):
Thank you, Jonathan. I am thrilled to be here.
Speaker 1 (01:00):
So why don't we start off with your background and
maybe set the stage.
Speaker 2 (01:05):
Give us a quick.
Speaker 1 (01:06):
Mission statement of AKASA, and then maybe let's rewind and
start how you got into venture and how you got
the origin idea or what was the origin idea for
founding the company.
Speaker 3 (01:17):
Yeah, the quick summary of AKASA is we reduce friction
in the financial back end of healthcare using AI. This
is a domain that has some incredibly challenging problems that
require combining very very large financial and clinical data sets.
But our ability to solve these problems well has meant
(01:38):
that the most progressive institutions in the world, like
Cleveland Clinic, do extensive surveys of all the companies that
solve these problems and ultimately picked us to be their
partner in solving it with them. So that's it
at a high level, and I'm happy to dig into
other parts of your question.
Speaker 1 (01:53):
Yeah, so you know your background. You started in industry
and then you moved into venture. What drew you to
the venture community, and what maybe drew you to healthcare?
Speaker 3 (02:01):
Sure, maybe starting with healthcare. I've always been interested
in healthcare, initially from a more sort of biotech perspective.
Early on, we were gaining the ability to program DNA
like software, right? You can actually write genetic code,
literally write genetic code. You can print out sequences of
DNA to, you know, express certain
(02:21):
proteins and do things in cells. And that was super
fascinating. What I eventually realized is that's
a very important set of problems, and there's a bunch
of important people solving them, but you could also tackle broader,
larger problems at a societal level in the sort of
healthcare provider-payer industry using more traditional software approaches. And
(02:42):
that's what ultimately drew me into that world. Venture
was not deliberate, to be honest. I was
working with a really great mentor who ultimately
drew me into venture capital for some period of time.
Then I had the privilege of working at A sixteen
Z, where it was incredible being there, learning from some
(03:03):
of the most amazing company builders in the world, and
some of my time there and prior inspired
me to do the work that I'm doing today at
AKASA. And to talk about part
of the core thesis: I often like to
(03:25):
say American medicine is the best in the world, but
the American healthcare system is not, right?
Speaker 4 (03:33):
And we all know this, right? But
it's wild, because you literally have people
from around the world coming to the United States to
get care, right? And yet the average
healthcare experience that a person here has is way worse
than in many other places in the world.
Speaker 3 (03:53):
And it's such a weird juxtaposition.
And so this is something that
I spent a lot of time digging into at A sixteen
Z, and even prior, of why, like,
what is causing this? Right? And a lot of it
fundamentally comes down to how we pay for healthcare in
this country, and that is what is causing this friction.
(04:14):
I actually wrote a blog post on this at some point,
but the summary of it is: what's
causing that is we are trying to do way too
much in terms of different types of payment models here, right?
So it turns out there are about four different
healthcare payment models around the world, and what most countries
do is they pick one and they just do that.
And it's not that you know, one is so much
(04:35):
better than the others. They all have pros and cons,
but most countries just pick one and do that, and
so they get very good at doing that. And in
the United States we actually do all four of those
models at the same time at scale, right, And that
causes so many challenges trying to accommodate all of these
different things at the same time, and it's ultimately led
to this extremely complex system that we call revenue cycle
(04:56):
in the United States. The analogy I like to make
for how to think about solving this is the analogy to
self-driving cars, right? Autonomous cars, which we now actually
have today. Have you taken a ride in
one of the Waymo cars?
Speaker 5 (05:09):
I have not.
Speaker 2 (05:10):
But I'm looking forward to it.
Speaker 3 (05:11):
It's really fun. I mean, it's fun, and then it
gets mundane, which is what all great technology should eventually
become: mundane, because it's just there. But anyway,
what I like to say in that world is, if
we could have built entirely new roads specifically for self-driving
cars, we would have actually had self-driving cars
way sooner, right? But we can't do that.
We can't build entirely new roads.
(05:33):
We have to use infrastructure that exists, because that's sort
of developed over a very long time. But with sufficiently
advanced technology you can still accommodate that and still
deliver an amazing, seamless experience. That is playing out now
in the self-driving car world, and that's, I think,
what needs to happen in healthcare too. Right? It's unlikely
we will fully change the entire payment infrastructure in
(05:56):
the country, but with sufficiently advanced technology you can actually
still deliver a seamless experience, and with large language models,
that's now actually possible. So that is the thousand-foot
view of how we think about
the problem and how we're solving it. There's also the one-hundred-foot
view, which I'm happy to get into. And
then there's a view like inches from the ground, which is
(06:18):
probably not super interesting to many people because it's very,
very tactical. But that's a thousand percent important to me.
Speaker 1 (06:22):
No, why don't we unpack some of that. I mean,
what you said really resonates with me as
an analyst. A lot of times I'm spending time with
clients who maybe don't know the healthcare industry that well,
and I'm explaining, and they go, why was it built like this?
You know, why do we have these structures? And it's like, well,
I think if you started from scratch today, you wouldn't
build it like this.
Speaker 2 (06:41):
But this is what we're stuck with.
Speaker 1 (06:42):
And there are incumbents, and that's how it just works,
and we have to work within the framework of the system.
But yeah, let's take it down from the high level
to maybe you know, the thousand foot view or the
one hundred foot view, whatever you feel is
best for us.
Speaker 3 (06:56):
So that was the highest level of abstraction. Something slightly
closer and more tactical is: what are
we actually trying to do at a deep level in
revenue cycle?
Speaker 2 (07:08):
Right?
Speaker 3 (07:11):
Simply put, the health system is trying to communicate
the full patient story to the payer,
in as comprehensive detail as possible. And by explaining what happened
to the patient in as much detail as possible, they
get full credit for the care that's delivered and the
payer pays them appropriately. Right? That's at the heart
(07:32):
of what we're trying to do. We call it many
different things in our industry: prior auth,
coding, denials, various things, but fundamentally that's
what we're trying to do. In order to do that job
well, you actually need to deeply understand the clinical record,
because again, what you're trying to do is fundamentally tell
the clinical patient story to the payer. And if you
don't do that job well, it leads to a lot
(07:52):
of friction. It leads to auths getting denied,
care getting delayed, claims getting denied or underpaid.
And by the way, this is now more important than
ever before, because very recently, with the bill that was passed,
there will be about a trillion dollars in federal cuts
to Medicaid across the next decade, right? And health
(08:14):
system operating margins, which are already low, will likely take
another twenty percent reduction. So it's actually very important
to get this right now. The reason that friction exists
today is not because of some
specific gap with the revenue cycle staff. We
work with these people every day. They really care: they
(08:35):
really care about the patient, they really care about doing
their work well. It's just extremely hard work. To
give you a sense, consider what
an inpatient coder does every day. An inpatient
coder's role at a health system is to look at
a patient encounter and convert all those documents into
a set of codes. The documents that represent the
(08:57):
patient stay total about fifty thousand words on average.
Fifty thousand words: that is the length of The Great Gatsby.
So they have to read the equivalent of The Great
Gatsby and then convert it into a set of,
on average, nineteen discrete codes. They have to do this
at least twice an hour. It's wild, it is wild,
(09:20):
how hard this is, and we expect them to do
it with perfect accuracy, and of course they can't.
It's very, very hard. But then what's happened in the
last two and a half years is we've had this
amazing new capability that we've all gained through large language models.
That has been a complete sea change in what is
possible from a technology perspective. I'm sure you
(09:42):
probably have some large amount of LLM usage in your
daily life through ChatGPT, Anthropic, or something, right? And what
you've probably seen is these things are incredibly good
at understanding complex language. And so now, for the first time,
we have the ability for software to deeply understand a
medical record. Previously, a medical record was basically a wall
(10:05):
of text to software, because even with the
best AI before, you couldn't actually understand it. Well, now
you actually can, and that capability has unlocked so much
for us in revenue cycle. It means we can
now tell that clinical patient story to the payer
much more comprehensively than before, and it is substantially reducing
(10:25):
the friction across the board, across multiple products we offer,
like coding and CDI and prior authorization. So that, at
a high level, is how we think about solving
this problem at hand.
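[Editor's note: an illustrative back-of-the-envelope sketch of the coder workload described above. The word count, code count, and throughput come from the conversation; the reading speed is an assumed typical value.]

```python
# Back-of-the-envelope math on the inpatient coder workload described above.
# ENCOUNTER_WORDS, CODES_PER_ENCOUNTER, and ENCOUNTERS_PER_HOUR come from the
# conversation; READING_WPM is an assumed typical adult reading speed.

ENCOUNTER_WORDS = 50_000      # average documentation per inpatient stay
CODES_PER_ENCOUNTER = 19      # discrete codes produced per encounter, on average
ENCOUNTERS_PER_HOUR = 2       # expected coder throughput (at least twice an hour)
READING_WPM = 250             # assumption: words read per minute with comprehension

# Minutes of pure reading demanded per hour of work, ignoring the coding itself.
reading_minutes_per_hour = ENCOUNTER_WORDS / READING_WPM * ENCOUNTERS_PER_HOUR

print(f"Reading demanded per 60-minute hour: {reading_minutes_per_hour:.0f} minutes")
print(f"Codes produced per hour: {CODES_PER_ENCOUNTER * ENCOUNTERS_PER_HOUR}")
```

At an ordinary reading speed, the job demands roughly 400 minutes of reading per 60-minute hour, which is the point of the Great Gatsby comparison: full comprehension at that pace is not humanly possible.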
Speaker 1 (10:38):
Maybe rewinding, you know, your company's older than two and
a half years. Where did the light bulb go off? And what made
you decide that you know you were the right person
to tackle this job with your co-founders.
Speaker 3 (10:51):
Yes, that's a very good point. We have been
working on revenue cycle AI for multiple years,
and we have been solving important problems, because
there are many other problems you can solve even with the
level of AI you had pre-LLM. But we always
knew that for the hardest problems in the
(11:11):
revenue cycle, you actually need a deep understanding. We
still built very valuable products. We were
able to work with great customers, we were able
to show very good results, but we weren't touching
the hardest ones. When LLMs came out two and a
half years ago, we looked at that and we said, oh wow,
now we actually can.
Speaker 1 (11:32):
So it's a sea change in terms of what you
can do. From a capability perspective, it is.
Speaker 3 (11:37):
A complete sea change, right. It fully transformed,
it supercharged our ability to
develop a ton of value for our customers. And we were
in a great place: we were at
exactly the right point in the company's journey for
LLMs to come about, because we already had access to
(11:59):
a lot of training data, we had a lot of
distribution with health systems we could work with, we had
talent and capital. So we were able to move extremely
quickly to adopt LLMs and actually incorporate them into
our products, either upgrading our products or building new products
that were just literally not possible before.
And the thing about this domain is it's extremely esoteric, right?
(12:21):
Revenue cycle is actually a very esoteric field, and we
had the privilege of having years and years of understanding
the revenue cycle workflow, the experience, the data sets, and
we were able to figure out how to combine both
the revenue cycle financial data and the clinical data, because
(12:42):
it's actually non-trivial to do this right, to combine
them in ways that make sense, and then train entirely
new models to do these revenue cycle functions well. And
I'm happy to talk about how we do
that from an AI perspective.
Speaker 1 (12:55):
Yeah, I'd like to learn
a little bit more about that, because when I think
about when I talk to people and they say, well,
the EMR was created for billing, right, you're taking it
one step further to.
Speaker 2 (13:06):
The clinical notes, to the clinical notes level.
Speaker 5 (13:09):
Correct.
Speaker 3 (13:10):
Correct. We do something very unusual,
I think, in revenue cycle in terms of how we
deliver our large language models: we fine-tune per health
system. We discovered this is a thing you kind of
have to do for the very complex problems. What we
see happening a lot is people use a
single large language model that they have either
developed or are calling through OpenAI or something.
(13:34):
They'll use that for all the health system customers they
work with, with minimal customization.
Speaker 1 (13:42):
Are these your competitors who are trying to provide
an off-the-shelf solution?
Speaker 3 (13:47):
We have seen most companies, and I don't know
anyone else who does it quite the way we do.
So yes, other folks typically are using either something off
the shelf or they might have developed something themselves, but
they're typically using the same thing with everyone, with some
sort of minimal variation. And the reality is, that actually
can work well, right? To give credit,
(14:08):
it can work well. It works well for
the simpler problems. But what we're doing is we're solving
some of the hardest problems. Inpatient
coding, for example, which is one of the products
we have AI for, is extremely hard.
And what we found is when you
take these more general approaches and apply them there, it
(14:29):
just does not work. We would have preferred it to
work, because it is easier to scale that approach,
but what we found time and time again is that
fine tuned approach where we take an internal model that
we have and then we additionally fine tune it for
every health system, works substantially better. So with every health
system we work with, we actually build them effectively their
own custom AI model that is trained on their data.
(14:53):
And this makes sense, right? Because every health system has
a lot of nuance. When you think about it,
you go, of course that should work better.
Every health system has a lot of nuance in
how their providers document, how their payers pay them;
all of that is different, and all that nuance is
actually captured in their historical claims data and their
historical clinical data. It's there if you can figure out
(15:15):
how to scalably unlock it.
Speaker 1 (15:16):
Now that makes sense. I mean, I think about people
being on an instance of Epic, but no two instances
look the same across any two providers.
Speaker 3 (15:24):
Right, exactly, that's correct. And even the providers themselves,
how they work with that instance of Epic, can be different.
And so basically we fine-tune this model for them,
so then we have a new base-level model for
them that then powers a bunch of our products. It
can power basically various agents that we've built for them,
so it can power a coding copilot, a CDI copilot,
(15:48):
a prior auth copilot, all from the same base model, per
health system. And once we figured out that was going
to be more effective, we just figured out ways to
do it more efficiently. So now, after a lot of work,
we can actually do that fine-tuning for health systems very
fast, and it's no longer that much of an operational issue.
(16:08):
But it took some time for us to figure out
how to do all of that.
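[Editor's note: a hypothetical sketch of the per-customer architecture described above, where one fine-tuned base model per health system powers several product copilots. All names here (`FineTunedModel`, `build_copilots`, the checkpoint and file names) are invented for illustration; this is not AKASA's actual code.]

```python
from dataclasses import dataclass, field

@dataclass
class FineTunedModel:
    """One fine-tuned base model per health system (hypothetical structure)."""
    health_system: str
    base_checkpoint: str                                   # shared internal model
    training_corpus: list = field(default_factory=list)    # historical claims + clinical data

    def fine_tune(self) -> str:
        # Stand-in for the real fine-tuning job. The key idea from the
        # conversation: the system's own historical data captures its
        # documentation and payer nuance.
        return f"{self.base_checkpoint}+ft[{self.health_system}]"

def build_copilots(model_id: str) -> dict:
    # The same per-system fine-tuned model powers every product module.
    return {name: f"{name}-agent({model_id})"
            for name in ("coding", "cdi", "prior_auth")}

system_model = FineTunedModel("Example Health", "internal-base-v1",
                              training_corpus=["claims.parquet", "notes.parquet"])
model_id = system_model.fine_tune()
copilots = build_copilots(model_id)
print(copilots["coding"])  # coding, CDI, and prior auth agents share one base
```

The design point is that fine-tuning happens once per health system, and every downstream agent inherits that system's nuance from the shared base.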
Speaker 1 (16:10):
Maybe diving a little deeper on that, you know, when
you when you sign up a new client, what does
it kind of entail to implement your solutions? And I
guess maybe what's a sales cycle like for getting somebody
on board? I know, no two instances are the same again,
but but you know, generally how long does it take
to get somebody across the finish line?
Speaker 3 (16:30):
So there's a sales cycle. Sales cycles, I mean, are
the typical enterprise sales cycle. They
can go from, you know, six months to a year,
sometimes longer for very large groups, but that's
typically the range. In terms of how we work with
folks, there are sort of two layers. There's
(16:50):
the data we need for training for them,
and that's the training data set. Then we have
integrations that we do to integrate with the EHR,
to be able to seamlessly retrieve and push information back
into the EHR. So those are the two different things
we do. Historically this has not been a very
(17:13):
large lift for health systems as far as we can tell,
because we figured out ways to plug into things that
they should already have up and running. But
actually, your question brings up another point, which is
it's actually not just the AI layer. Giving
a health system an AI model alone is
not going to do much. You actually need to have
(17:33):
a UI layer on top of that.
We're operating on both layers: we are both innovating
and building our own AI models, and we've
also built the actual application layer, the actual interface that
a user would use to interact with the AI. And
this is something else where we believe, you know,
(17:54):
the years and years of deep revenue cycle understanding
and empathy for the revenue cycle user help us, because we
have been able to build interfaces that these revenue cycle
staff really trust and want to use, and we've figured
out ways for them to work with the AI so
that they truly see it as a copilot. The AI
is not a black box, right. We really have invested
heavily in having the AI explain itself. It provides justifications
(18:17):
for everything it does. It shows confidence levels, so it
doesn't feel threatening to a revenue cycle staff member when
they're using it.
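[Editor's note: a hypothetical sketch of the "AI explains itself" pattern just described, where every suggestion carries a plain-language justification and a confidence score, and anything below a threshold goes to a human. The field names, the sample ICD-10 code, and the 0.85 cutoff are all invented for illustration.]

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    code: str
    justification: str   # evidence the AI cites from the clinical record
    confidence: float    # 0.0 to 1.0

REVIEW_THRESHOLD = 0.85  # assumed cutoff; a real system would tune this per customer

def route(suggestion: Suggestion) -> str:
    """Low-confidence suggestions go to the human coder; nothing is a black box."""
    return "auto-surface" if suggestion.confidence >= REVIEW_THRESHOLD else "human-review"

s = Suggestion("J96.01",
               "Note documents acute respiratory failure with hypoxia on day 2.",
               0.92)
print(route(s))  # above threshold, so surfaced with its justification attached
```

The point is the interface contract: a suggestion never arrives without the evidence and confidence behind it, which is what keeps the copilot from feeling threatening.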
Speaker 2 (18:25):
That's great.
Speaker 1 (18:25):
Maybe could you walk us through an example of a client,
you know, what they were using, what they were doing before,
how the implementation worked with you guys, and then maybe
what some of the outcomes were from a KPI perspective
or savings perspective.
Speaker 3 (18:38):
Sure. I'll talk about two things: I'll talk about
our coding AI product and also our
prior authorization AI product. So on the coding side,
many folks use us today as an AI auditor, right,
So they're doing their coding work, and then our AI
will basically review the work that they're doing and help
(18:58):
them make sure they're not missing stuff or coding things inaccurately.
To talk about some of the outcomes: doing things like
that means you are more accurately capturing the complexity
of the care that you delivered to a patient,
because a lot of these health systems deliver very complex care,
but sometimes they don't fully represent that in the ultimate
(19:19):
claim that's sent out, like they might just miss certain
things that happened. And so doing that correctly means meaningful
improvements in both clinical quality scores for them,
as well as making sure they get full credit for
the care delivered, through improvement in reimbursement. So this
can mean, at some places, literally tens of millions of
dollars of additional, correct reimbursement that they should have gotten
(19:41):
that they are getting now, as well as substantial improvements
in clinical quality scores that go into, you know, where
they rank on various lists, US News rankings, things
like that. And the reason they're able to do
these things better with the AI is, remember that very long,
fifty-thousand-word record I just talked about? It's very
hard for a human to get through that,
(20:02):
and the AI can pass through all of it in under two
minutes with basically full reading comprehension. When it
goes through it, it stitches together an internal clinical picture of
what happened to the patient. It does that in under
two minutes at a high level of detail, and then it
can help the human figure out what they're missing. On
the prior auth side, similarly, one of the core things you
(20:24):
are trying to do is validate to the insurance company
that the procedure you're going to do is legitimate:
saying, yes, the patient needs this, I need you,
the payer, to authorize this, because here is the patient's
clinical history. And what payers often do
is ask questions, like, why are you doing
this here? Did you try this or this other thing?
(20:45):
And we've built basically a clinical research assistant here to help
the staff member answer those extremely quickly, and it's been
highly accurate. With one of our most recent customers
that has been using this, we have the humans
rate the AI in terms of accuracy, meaning
(21:05):
how often the human agreed with the AI
on the AI's suggestion. And in some cases, when
we did some head-to-heads in the past, we've
actually seen, in coding for example, the AI be better than the
human on a bunch of cases. That's why it can
serve as an auditor.
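[Editor's note: at its simplest, the audit workflow described, where the AI reviews completed coding work and flags possible misses, reduces to a comparison of code sets. A hypothetical sketch; the `audit` function and the sample ICD-10 codes are invented for illustration.]

```python
def audit(human_codes: set[str], ai_codes: set[str]) -> dict:
    """Compare a coder's submitted codes with the AI's reading of the record."""
    return {
        "agreed":          sorted(human_codes & ai_codes),
        "possibly_missed": sorted(ai_codes - human_codes),   # AI found, human didn't
        "flag_for_review": sorted(human_codes - ai_codes),   # human coded, AI can't support
    }

# Hypothetical encounter: the human captured hypertension and diabetes,
# the AI also found acute kidney failure documented in the record.
result = audit({"I10", "E11.9"}, {"I10", "E11.9", "N17.9"})
print(result["possibly_missed"])  # surfaced to the coder for review, not auto-applied
```

Note that disagreements go back to the human coder as suggestions; consistent with the copilot framing, nothing is changed automatically.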
Speaker 2 (21:21):
Right.
Speaker 1 (21:21):
That makes a lot of sense intuitively, just given
the size of the data set. So you described
kind of a QA/QC process, almost, where the AI
is assisting the human coder. Is there a piece of
the business that's automated? I imagine
you run the gamut: low acuity cases
where automation probably works very well, and maybe that higher
(21:43):
acuity is where you need that human touch
as well.
Speaker 2 (21:48):
Is that the right way to think about it?
Speaker 3 (21:50):
It is a good way to think about it. We
are working towards fully autonomous coding and other activities for
the work that we're doing. Bear in mind, though, the
work that we're doing is the hardest work.
The coding work we're doing is not really touched by
anyone else in the industry. No one else is even
(22:10):
trying to autonomously code anything this complex. People
are trying to do, for example, physician coding
or outpatient coding, which are very simple types of coding;
we don't really do those. So we are actively tackling
the hardest stuff, where there is a very high threshold.
The lower acuity stuff that we're doing is far
more complex than the higher acuity stuff that anyone else
is doing. We are taking a very thoughtful
(22:33):
approach to automation. Internally we're doing
a lot of R&D against that, and we're seeing areas where
we can actually fully automate. But there is also a
social element to this: for very complex cases,
health systems have a higher threshold, and so we're
working with them to get to that point. But that
is absolutely the end state.
Speaker 2 (22:53):
Got it? How does it work?
Speaker 1 (22:55):
And this is where I get out of my comfort zone.
But as I think about your solution, an AI overlay
or platform on the provider side, when it meets the
AI solution on the payer side, is
Speaker 2 (23:08):
There a friction?
Speaker 1 (23:09):
There is? Can there be a vicious cycle, you
know, between the two different systems?
Speaker 3 (23:14):
The output that we produce should be substantially more
compliant than what humans do. It should
actually be better. The way I think about it is,
when you have, let's say, one hundred human coders,
each of those hundred human coders is
individually making decisions based on their own personal experience,
(23:35):
how their day is going, their training. So
that output is varied and random,
versus a single AI that is trained on the most
up-to-date information, that's always doing things consistently and
is always justifying itself. That's going to be far more compliant.
Speaker 2 (23:53):
I see.
Speaker 1 (23:53):
And then are those outputs all customized
based on who the payer might be and their requirements?
Is it trained on knowing that, I don't know,
Aetna wants this and United wants that?
Speaker 3 (24:08):
It depends on the domain. There are certain domains in
revenue cycle where you're not supposed to make changes based
on payer; you're just supposed to do it in a
consistent way. So it depends on the type of product.
But where it is necessary, we can do that, and
otherwise we don't, because we shouldn't.
Speaker 1 (24:25):
Okay, maybe at a high level, can we talk about
the revenue model. You know, how do you guys make money?
And you know, how do you scale?
Speaker 3 (24:34):
So we have two models. We have both subscription and
performance-based models. In the performance-based ones, we
basically say, look, we are taking all the risk,
and we do that because we're confident enough that
we will show you these types of results, whether improvements
in accuracy or things like that. And if
(24:57):
we do that, then we will get paid.
Then there are other models where we have a subscription fee
that is based on some unit, usually based on the
size of the facility. But those are the two models
that we bring to bear.
Speaker 1 (25:10):
Is there any preference in the marketplace between one or
the other, or are you seeing a
move from one to the other?
Speaker 3 (25:16):
It's actually interesting. We lead
with the performance-based model, because it's very
compelling for a lot of folks: there's zero risk for them.
They don't have to make a bet that something
is going to work the way
that we say it will work. We literally just say,
(25:37):
look, if we don't hit these
Speaker 2 (25:39):
Targets, you don't deliver.
Speaker 3 (25:40):
We don't deliver. We're putting our money where our mouth is.
And it's also an interesting indication
of confidence in the product. So that's typically
what we lead with, and typically what people find compelling.
And then some folks maybe want more predictability, and if so,
then, you know, we have the other model.
Speaker 2 (25:59):
Got it.
Speaker 1 (26:00):
How do you build the customer base over time? Is
it just prospecting? Is there something that you
lead in with as a tip of the spear to
get people on board, you know, maybe one
module or one pain point and then land and expand.
Speaker 3 (26:14):
Yeah, I'll specifically answer that question. But the higher-level
thing there is, the way you grow in
our industry is just delivering great products. I mean, it's
actually basic, right? If you deliver
a great product, your customers will basically sell it for you.
Them talking about how
happy they are with something is the best marketing, actually.
(26:35):
And that's what you should do as a company:
build great products, and, you know, the rest of it,
with a little bit of help, will work
out. Now, more specifically to your question, we have a
module that is an extremely easy add-on for people
that are doing, for example, coding work. Right,
(26:57):
there's a way we can have it just review what
the human coders are doing without the upfront workflow.
And so it's become a way to very easily plug in and
create value, and give them a taste of how powerful
it is. And that's been a very good tip of
the spear.
Speaker 2 (27:15):
Got it?
Speaker 1 (27:16):
Maybe going back, you know, one of the things you
mentioned was kind of building product.
Speaker 2 (27:23):
If we go back to your.
Speaker 1 (27:25):
Founding, and you've been around for, I guess, what, five
or six years now. What have
been the key milestones in your product journey
over those five or six years. And as you think
maybe a couple of years down the line, if there's
anything you can share from a roadmap perspective, you know
what's the next thing?
Speaker 3 (27:43):
Sure. Key milestones. I mean, the single biggest thing is
the advent of large language models,
and how we incorporated that fundamentally changed everything,
not just for us, but for everyone everywhere.
Then there are company milestones that are fun, like your
first check. I remember
(28:04):
the first time a customer was paying us, they
were just going to do a bank deposit or something.
I said, can you send us a physical check, please?
Speaker 1 (28:14):
And you didn't ask for like a contest one or
a Price Is Right one?
Speaker 2 (28:18):
Like a big check?
Speaker 3 (28:20):
No, actually, that's a good idea. We did
get that. And then I found out:
it was our very first check, it was
for fifty thousand dollars or something. I found out that
day that you can mobile-deposit fifty thousand dollars, because
I didn't want to give our check to the bank
and lose it. So I was like, I'm gonna see
if I can just take a picture of it and
mobile-deposit it. Turns out you can, and so we still have
(28:40):
the check and we have it framed in an office.
Speaker 2 (28:42):
That's excellent.
Speaker 3 (28:45):
But but no, those were I mean some of the
big ones are product mothstones, are you know the first
There is a big hurdle in figuring out how to
integrate with an EHR effectively right. And I think this
is something that a lot of people outside of a
lot of technologists outside of healthcare miss who are getting
(29:09):
into healthcare, which is: you can build a great product,
but if you haven't figured out how to integrate into the EHR,
it's not going to do well, because health systems expect you
to integrate without a ton of lift for the health
system IT team, and they expect you to be able
to guide their IT teams through the process. And so, you know,
(29:32):
getting to the level where we can extremely
efficiently build confidence with health system IT teams, where
we know exactly what we need to do and can say,
here are the steps you take. That was a big, big milestone.
We talked about the adoption of LLMs. And then
it's also really cool seeing the expansion of the platform, right?
(29:53):
So what we're building is really a platform approach,
where we have a single sort of AI and GenAI platform
that can power multiple modules effectively. And seeing the platform
thesis work out, where the thing you started with
actually helps you do the second thing, and that combination
gives you one plus one equals three, is very cool.
(30:14):
And so those are some, you know, fun milestones
along the way. And then you asked about what we
want to do, and it is actually just continuing on
this platform approach. We found that there is a very
virtuous positive cycle in the combination of the various products
that we have, right, because a lot of health systems
(30:36):
want to solve problems in a lot of different areas,
and we actually have products in many of these different areas,
and most of the products talk to each other, and
so we can deliver in the revenue
cycle a very thoughtful package starting with the mid cycle, right,
mid cycle and coding, CDI. You know, we can be an
AI partner to solve problems across their biggest areas in
(30:59):
one shot.
Speaker 2 (31:00):
Got it?
Speaker 1 (31:01):
Maybe just going on a tangent here, can you talk
about the team a little bit? How did you and
your co-founders kind of coalesce? And
did you always want to be a CEO?
Speaker 3 (31:11):
So it's an interesting question. I knew I wanted to
solve this problem for quite some time.
Speaker 2 (31:18):
Mm hmm.
Speaker 3 (31:18):
I'm not sure why, but it naturally was like, yes,
I should, you know, I'll drive it. The co-founders
were folks that were in my network,
either friends or friends of friends, who were talented people
that others spoke highly of. And they'd all run into
these healthcare problems through different paths of life. And that's
(31:44):
how that came together. And you know, it's fun to think,
right, at some point you were just a couple of
people working in a WeWork equivalent, and then, you know,
just growing from there to the team we have now
has been a really fun journey.
Speaker 2 (32:01):
Has it been interesting? You know?
Speaker 1 (32:02):
I think about your background from the venture side,
where, you know, this is very simplistic, but you're
writing the check and helping, you know, provide a network
and give ideas. How has that transition from
that role to this role been?
Speaker 1 (32:16):
And then whose leadership style do you
model your own after?
Speaker 3 (32:20):
What I'd say is, as a founder, you feel
everything so much more, right? The highs are higher,
but the lows are lower. You can literally have
a single day where you start the day feeling
amazing because you closed some big deal, and then you
end the day feeling terrible because you, you know,
lost a recruiting candidate or something. You
feel all of this so much more deeply. But I
(32:44):
love it, right, because
Speaker 6 (32:45):
you feel it more deeply. It really is, like,
you feel so much more ownership of what you're doing,
and you're so much closer to the actual problem you're
trying to solve, right, because you're actively solving the problem,
you're living it.
Speaker 3 (32:57):
You're living it, right, versus observing other people solve it,
which is also interesting. And with a16z in particular, it's
been fun working on both sides, right? I was there
on the sort of investor side, and now I have the
privilege of continuing to work with them on the other side.
They have been investors in us across a few rounds,
and they are a great partner to work with. But
(33:17):
I will say, while that was an
amazing experience, I truly love what I'm doing now.
I would not be doing anything else.
Speaker 1 (33:28):
Maybe on that, you know, you've done a couple
of rounds, and I believe
it's been a couple of years since you've raised any capital.
Thinking about that last round, what did you use the
capital for, and kind of what are the requirements
for the business going forward?
Speaker 3 (33:42):
Sure, we have continued to use it for R and D, right?
We used a lot of that capital to
incorporate large language models, because the work
that we're doing with LLMs is not
cheap either. It requires access to, you know, expensive
and large sets of GPUs. And so
(34:07):
that was a lot of it. Obviously, sales and marketing
types of things as well. It's a fairly
efficient business.
We haven't thought about capital for a while because we
continue to be fairly well capitalized. And so my main
focus right now continues to be R and D.
(34:29):
There's still a lot more we can build, because
it's still fairly green field in these
very complex problems that no one else has figured out
how to solve. I feel like we have a fairly
meaningful head start, because, and it's actually interesting
thinking about this, if you're a company
that started just as LLMs came out, it would actually
(34:53):
be very hard to adopt them meaningfully, because you
actually need some baking time as a company to even understand
the types of problems that exist in this domain,
right, to get the customers to test these things
with, and do all of that. We were able to
move extremely fast to do all these things, and
(35:16):
there were a lot of companies, I think, in that
era that didn't actually make the changes rapidly. And so
something I'm really proud of our team for is the
ability to really lean in and do that
quickly, and that has led
to some remarkable outcomes.
Speaker 1 (35:39):
When you say R and D and new products, is
that human capital that's required or is that buying new
data sets or expanding the aperture on the models? Can
you just walk me through that process a little bit?
Speaker 3 (35:52):
Sure, it is all of the above. It's human capital,
it's, you know, all of these different things.
But yes, it's basically, how do we get more accurate
and faster? The scale of data that
we have to process for a single task is extremely large.
It's larger than most tasks that
(36:15):
people are doing with LLMs today. Actually, if
anyone's interested, we're recruiting for AI. I have to say it.
Speaker 1 (36:23):
We can make this a recruiting commercial.
Speaker 2 (36:25):
That's fine. Commercial.
Speaker 3 (36:26):
Yes, if anyone is listening and is interested in the
types of problems we're solving, our ML team
just published a blog post about a week ago about
how we are using cutting edge LLM techniques to build
AI for understanding very, very long records, which
(36:47):
presents some very interesting problems for research
scientists to solve.
Speaker 1 (36:51):
What's an example of one of those longitudinal records? Is
it somebody who's had multiple instances of cancer, or is
it just, you know, somebody who's actually been in the
same system, which is rare I think, for maybe
twenty or thirty years?
Speaker 3 (37:06):
It's actually typically long inpatient stays, right? So you
have a, you know, ten or twenty day stay with
a very long set of procedures, and a lot
of different things have happened to that patient. A very complex
story creates a ton of documentation which you have to
go make sense of.
Speaker 1 (37:27):
Right, I'm thinking about every aspirin and bandage and infusion,
those sorts of things. Yeah, and each one of those
creates a digital record.
Speaker 3 (37:36):
Correct, there's a lot. I mean, there
are records that we've seen which are, as I said,
fifty thousand words on average, and some like three hundred or four hundred thousand words.
It's wild how long these things can be. And
even today, most off the shelf things literally cannot even
(37:57):
incorporate them, and so we have to figure out ways
to do that scalably. And we have.
Speaker 1 (38:03):
So you're on the front lines of this, you know,
real sea change in the use of AI. You know
what gets you excited that's maybe outside of the four
walls of your company, when you when you look out
at the landscape and see what's being done and are
there any technologies or or problems that are being tackled
that you really think are going to be fundamental changes
(38:24):
to the system.
Speaker 3 (38:25):
There's interesting work happening on the payer side. We
don't work with payers, but it's very interesting
seeing people incorporate approaches to, and some of this
I'm just speaking from personal experience, I would love
the provider directories to be more up to date; they can be very annoying to deal with.
(38:49):
There are very interesting price transparency products being built,
that help you more accurately predict, you know,
what someone's supposed to pay for something. That I
think is very interesting. The companies that are doing sort
of ambient listening are interesting. There are other companies that
(39:10):
are, you know, installing cameras in like operating
rooms and things, to be able to use those to
provide guidance or track what's happening through computer vision. Those
are all really interesting things that are happening.
Speaker 1 (39:25):
When you talk to your customers, because those are all
great examples, you know, whether it's ambient or automation,
I'm sure there's a host of others that we
Speaker 2 (39:32):
Didn't touch on.
Speaker 1 (39:33):
Are your customers inundated with these choices? And
I know I'm putting you in their shoes, but
how do you think they
think about, or how do they tell you they think about,
you know, ranking which solutions to implement now and which
are on the back burner for a couple of years?
Do you have a good sense of their
(39:54):
thought process? And I know everybody's different.
Speaker 3 (39:58):
Yeah, I mean, they are for sure getting inundated,
that is one hundred percent the case, and I
don't envy them. It is hard for
them to differentiate, because, you know, they shouldn't have to
be experts in AI to sort of tell the difference
between things. The way we think about it is, so
many health systems will have an AI expert to
(40:21):
help the business owner make decisions, and we love it
when that happens, because unfortunately, in our world there are
a lot of people calling everything AI when it's
really not, it really isn't, and it's very hard
for a business owner to sometimes tell the difference. And so
when there is a true person who understands AI,
(40:43):
we find that refreshing. We can actually go extremely deep.
And our recommendation is, if you have that person,
actually have them go a few layers deep and see
if the vendor can do it, because if they
can't go more than one layer, it's probably not real.
But then beyond that, it's actually just business results, right?
That's fundamentally what matters. Going back to how to differentiate,
the most effective way to do that is just to
be delivering great results to your existing customers and then
(41:05):
have prospects connect with them. That truly is the best thing,
right? You cannot fake that;
you actually have to be delivering good results. If someone
is interested, just tell them to go talk to these
people, and that is the most effective thing. So that
is what I recommend to folks: talk to existing
customers, and if you want to do more,
actually go into the AI if you aren't convinced it's real.
(41:29):
Some combination of those things
should weed out a lot of stuff.
Speaker 1 (41:34):
No, well said. You know, if we think about where
AKASA is going to be five years from now,
and that's just an arbitrary time point, you know,
when we talk in five years, let me put it
that way.
Speaker 3 (41:45):
Yes, where do you want to be?
Speaker 1 (41:49):
Where do you want the company to be? And I
guess when you think about some of the milestones in
the future, you know what are they? I mean, I
can imagine there's some financial milestones or product milestones, but
you know, as you think about the future, what are
you really excited about for the company?
Speaker 3 (42:04):
I truly do envision a world where we
don't just have the best medicine in
the world, but also the best healthcare system in the world,
like we should. We have to correct that. What that
means is communicating the clinical patient story as quickly and
as comprehensively as possible to the payer, in as close
(42:25):
to real time as possible. Getting to
that will solve a lot of the friction that we have
today. And historically, we just did not have the tools.
We just did not have the tools for this. Now
we do, so I think it's just a matter of time,
and I think we will be one of the core
folks that are a partner to any health system that
wants to get to that point, right, that has close
(42:48):
to real time communication between the provider and the payer,
so that there are extremely information rich streams going between them.
That is, I think, what will happen, it's
just a matter of time, and what it ultimately means is
much less confusion for the patient and even the
(43:09):
provider as they go through their healthcare journey. So
that is, I think, where we're headed, and with what
we're doing, we should be a major part of that.
Speaker 1 (43:17):
One thing you said there, you know, thinking about
that journey for your customers, made me think
about kind of the volatility we're seeing from a policy perspective.
Is there anything that you guys are paying a lot
of attention to from a regulatory or policy standpoint?
And, just spitballing here, is
there something that Washington can do to help facilitate
(43:38):
some of the work you're doing.
Speaker 3 (43:40):
There is. There's what I would like, and there's
what's, you know, realistic. One
thing that happened recently that for sure is going
to have an impact is the, you know, massive Medicaid reduction, right, that
Speaker 2 (43:54):
Is just going to get right.
Speaker 3 (43:55):
We touched on that exactly. And, for example, the prior
auth side, right, it's very opaque, and I understand why payers
do this. They, you know, want to
make sure a procedure is being done because it's actually
necessary and not frivolous, and I get it. It
doesn't feel great, obviously, on the patient's side to feel
(44:16):
like you have to get permission, but we can use
AI to help with that. One thing that would help
a lot is if the payers
were more transparent about what those policies are. It's strangely
hard to actually get access to what you, as
a specific payer, think is clinically necessary.
If that could be a bit more transparent, it would
(44:38):
help along the entire chain. Currently, you have to wait
till the point in time when you're trying to do
something before they'll tell you, which is not great. Now,
our product solves that, because at that point
in time it makes it super easy to do. But if
you could get access to that earlier, you could, you know,
move things upstream. So that's what
I'd like.
Speaker 1 (44:57):
Yeah, what are some of the other ones that are
maybe top of mind? Are there any rules or
regulations coming out of CMS or the like, or CMMI,
that you think might catalyze this?
Speaker 3 (45:09):
There has actually been activity on prior auth
for a little bit. There was actually a reg that
was passed. Another thing, also on the prior auth side,
is making it easier to communicate information between providers and payers.
So weirdly, there isn't a very effective way to do
(45:29):
this communication between a provider and a payer. Claims
are actually, you know, relatively efficient. There is actually a
pretty good, I mean reasonably good, system.
Speaker 2 (45:38):
The pipes are pretty clean.
Speaker 3 (45:39):
Those are good for that, but for other things like
prior auth, and I guess it makes sense because prior auth
is a newer thing, right? You know, first the
core claim payment had to happen, and that's good, and
then later prior auth became a thing, and I don't
think people really figured out how to make that super efficient.
Today it still works a lot through fax. Fax
is a very common mode of exchanging prior auth information. Yeah,
(46:01):
it is. But one of the wildest things for a health
tech person is you could be in a meeting
where you're talking about some very sophisticated AI to
solve something, and then your next meeting is, okay, so
what is a fax integration? It is equally important, actually.
But anyway, it is a very common mode of information exchange.
Another one is people call a lot. Still, like literally
(46:24):
people will do stuff over a phone call, which is
also crazy, and then stuff on a website. These are
common modes of transmission, and none of them
are very efficient. It would be great if you could
switch to an API based approach, right? Like if there
were just APIs that payers exposed saying, here is how
to communicate with us. That's how many other industries work, right? Like,
(46:44):
just call this API and exchange information through that.
Speaker 1 (46:47):
That would drastically change things, if it was mandated even, right?
Yeah, or I
Speaker 3 (46:51):
mean yes, exactly. If payers were mandated
to expose APIs, that would be great.
It would, you know, massively reduce the friction there.
Same goes for denials, actually. Even exchanging information on
denials is mostly those other methods
(47:12):
I described: fax, call, website. If every single mode
of communication could get API-fied, if that's the word,
that would be amazing.
Speaker 1 (47:25):
Well, we'll have to wait for that one. I'm not
holding my breath that we're going to do away with
the fax machine, but I'm hopeful. Malinka, we're coming
to the end of our time, and one of the
ways I like to wrap these conversations up is
to maybe dive a little bit deeper on a personal front.
I always ask everybody if there's a life lesson,
you know, either from your personal life or something you've
learned in your professional experience, that really drives your day
(47:50):
to day.
Speaker 2 (47:50):
Does anything come to mind?
Speaker 3 (47:53):
Yes, there's a couple of things. One is an amazing
book that everyone should read: The Hard Thing About
Hard Things by Ben Horowitz, one of the co-founders
at a16z. It's just a great book
for any founder to read, because it's very real
and raw, and it will help especially
(48:14):
when you're going through a particularly tough time or
trying to figure out some hard decisions. The other thing
I've figured out, just through experience, is what
the things I look for most in candidates are.
There's a lot of generic stuff,
like, you know, has to be a hard worker,
no assholes, and stuff like that. But at a
higher level, the three things I think about the most
(48:34):
now are conviction, taste, and grit. Conviction
is the first one because everyone, to be
effective, has to make decisions. You have to make a
lot of decisions, and you need people to be able
to get conviction on a decision quickly and actually make
it, because a lot of people get stuck in analysis paralysis.
(48:55):
The taste one comes in because, yes, we want you
to make these decisions quickly, but also generally make good decisions.
Net net, you have to make good decisions.
You do not have to be perfect;
part of the trade off in having
high conviction, fast decisions is you
will make some bad calls. Of course, on average, you
should be making
(49:18):
good decisions. And then finally, the grit thing, I'm realizing,
is actually one of the most important things. Because
let's say you do things one and two; the reality
is, even for decisions that were objectively good decisions to
have made, you will end up with outcomes that are suboptimal
a bunch of the time. You could have done
the right thing, but the actual outcome may be bad,
(49:39):
and you need to have the grit to work through
that, get into the right mental state, and still
repeat the cycle.
Speaker 2 (49:46):
Right.
Speaker 3 (49:47):
And that last one, I think, especially as an entrepreneur,
is probably the most important thing of the set. But
those have sort of crystallized as things that I think about
a lot.
Speaker 2 (49:57):
No, that's great. I like both those.
Speaker 1 (49:59):
I mean, that's great, and I can foresee maybe
some T shirts for your team with CTG on
it or something of the like. You'll have to come
up with a moniker or something. So, Malinka, thank you
so much for your time today.
We've covered a lot of ground and you shared a lot.
I appreciate it, and I guess we'll kind of wrap
(50:20):
up here. So thanks again. And that's Malinka Walaliyadde,
CEO and co-founder of AKASA. Thank you so
much for joining us on our latest episode of the Bloomberg
Intelligence Vanguards of Healthcare podcast. Please make sure to click
the follow button on your favorite podcast app or site
so you never miss a discussion with the leaders in
healthcare innovation. I'm Jonathan Palmer. Until next time, take care.