Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome to another episode of the Vanguards of Healthcare series.
My name is Matt Hendrickson, the medical technology analyst at
Bloomberg Intelligence, which is the in house equity research platform
at Bloomberg LP. We are pleased to have with us
today Ross Upton, founder and CEO of Ultromics, a privately
held medical device company that is applying artificial intelligence to
(00:21):
diagnose complex cardiac diseases through routine echocardiograms. Ross, thank you
for joining us today.
Speaker 2 (00:28):
Thanks for having me, Matt.
Speaker 1 (00:30):
You know, I noticed you were trained as a sonographer during your time at the University of Oxford as you were getting your master's and your PhD. How did that experience lead you to ultimately develop Ultromics?
Speaker 2 (00:46):
Yeah, absolutely. So when I trained as a sonographer within the NHS in Oxford, I was doing different clinics. One or two of those clinics were a stress echo clinic to check for coronary artery disease, but also a cardiomyopathy clinic to check for a condition called hypertrophic cardiomyopathy.
(01:07):
And what we were seeing in clinic is, after you did the echo, it often wasn't clear what the diagnosis was at that time. We would only learn maybe a year afterwards, after the patient had a cardiac event; only then would we know what the actual diagnosis was. And the problem with echo is it's often very subjective, and so
(01:30):
during that experience, what I wanted to do was build something to make it less subjective and really try to get to the diagnosis at that point. This was maybe in twenty fourteen when I kind of came to this idea, and we wanted to use AI and clinical data
(01:50):
as a way of achieving that, and then that ultimately became Ultromics.
Speaker 1 (01:55):
Which is interesting, because in twenty fourteen AI wasn't as big of a buzzword. What was actually the know-how? Was AI different back then than it is now, because it's all kind of just big machine learning on big data? Was that the gist of it back in twenty fourteen?
Speaker 2 (02:17):
Yeah, I mean, we've had a form of machine learning around for a long, long time, and they've been mainly statistics-based approaches. So back then it was very much trying to manually derive features out of an image and feed them into a rules-based algorithm against a specific
(02:40):
outcome that you want to achieve. That's how we did it back then. It was called supervised machine learning, and it was using kind of rudimentary machine learning algorithms like decision trees. Nowadays AI has come a long, long way, and you're using more advanced AI models like convolutional neural
(03:04):
networks, and even more recently the generative models that are coming through, which are way more advanced than what we had back in twenty fourteen. It's come a long way in that decade. It sure has.
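As a purely illustrative aside, the 2014-era approach Ross describes, manually derived features fed into a rules-based, supervised model such as a decision tree, can be sketched in a few lines. Every feature name and threshold below is invented for illustration; this is not Ultromics' algorithm:

```python
# Toy sketch of 2014-era supervised machine learning on echo data:
# manually derived features fed into a simple rules-based model
# (a hand-rolled decision tree). Feature names and thresholds are
# hypothetical illustrations only.

def extract_features(echo_measurements):
    """Manually derived features a sonographer might measure."""
    return {
        "ejection_fraction": echo_measurements["ejection_fraction"],
        "wall_thickness_mm": echo_measurements["wall_thickness_mm"],
    }

def decision_tree_classify(features):
    """A tiny decision tree: each branch is a hand-written rule."""
    if features["ejection_fraction"] < 40:
        return "abnormal"  # reduced pumping function
    if features["wall_thickness_mm"] > 15:
        return "abnormal"  # possible hypertrophy
    return "normal"

patient = {"ejection_fraction": 55, "wall_thickness_mm": 17}
print(decision_tree_classify(extract_features(patient)))  # abnormal
```

In a real 2014 pipeline the thresholds would be learned from labeled outcomes rather than hand-written, but the structure, features in, rules out, is the same.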
Speaker 1 (03:16):
You know, one of the other things that's come a long way in the last decade or so has been the understanding of, and the treatment paradigm for, heart failure. What is your view of the current state of heart failure in general?
Speaker 2 (03:33):
Yeah, so it's a big, growing area and a growing burden of disease. It's becoming the leading cause of readmissions in the US. It's projected to cost the US seventy billion dollars in hospitalization costs by twenty thirty.
(03:54):
Six million people in the US suffer from heart failure as well. So it's becoming huge, and that's predominantly because it's caused by things like obesity and diabetes. They all contribute to heart failure, and that's becoming a growing problem in the US particularly.
Speaker 1 (04:15):
Yeah, and so once it's diagnosed, let me ask it a different way. Is it a case where sometimes the heart failure isn't diagnosed soon enough to begin the treatment process? And then also, with the treatment process, is it a cure, or is it just slowing the progression?
Speaker 2 (04:37):
Yeah, it's a good question. So I think, first of all, we should break down heart failure into its two forms. Heart failure generally can be defined as the heart not being able to pump enough blood to meet the demands of the body, and it's not to be confused with coronary artery disease or heart attacks, which heart failure is often confused with.
(05:00):
There are two forms of heart failure. There's heart failure with reduced ejection fraction, which is HFrEF, and heart failure with preserved ejection fraction, which is HFpEF. And those two forms are both forms of heart failure, but very distinct phenotypes of heart failure. So heart failure with
(05:20):
reduced ejection fraction is where the pump literally fails; your heart stops pumping, or stops pumping as effectively. And heart failure with preserved ejection fraction is where not enough blood gets into the pump in the first place; the pump doesn't fill properly. It's that second one, heart failure with preserved ejection fraction, that is the one that is
(05:43):
often missed, because it's very difficult to see that visually or on noninvasive tests. Around sixty to sixty-four percent of the time we miss HFpEF, and so it's a massively underdiagnosed disease.
Speaker 1 (06:04):
Interesting. And then, you know, we can kind of jump right into the EchoGo platform that we talked about at the very beginning of this episode. What is EchoGo Heart Failure? And then, you were talking about the missed signals. How are the inputs from the echocardiogram able to
(06:26):
pick up those missed signals with kind of your AI model?
Speaker 2 (06:32):
Yeah. So EchoGo is the platform we've built to use AI to analyze echo images to try and detect disease. EchoGo Heart Failure is specifically targeted at trying to identify this phenotype of heart failure that is difficult to diagnose, which is heart failure with preserved ejection fraction. What it
(06:54):
does is, we use an AI model called a convolutional neural network, and it's trained to analyze millions of pixels over the course of several cardiac cycles, and it's trained against a known outcome of patients who had HFpEF.
And what it's doing is seeing whether it can pick up the signals from all of those pixels, so pixels that
(07:17):
perhaps, and patterns of pixels that perhaps, humans can't perceive that are signals of HFpEF. And it's trained to pick up on those signals and detect HFpEF more accurately than we can using the current clinical risk scores, for example.
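The convolution at the heart of such a network can be shown in miniature. This toy sketch, not the EchoGo model, slides one hand-written filter over a single small pixel grid; a real CNN stacks many learned filters across whole video frames:

```python
# Miniature illustration of the convolution inside a CNN: a small
# filter slides over a pixel grid and responds to local patterns
# (here, a vertical edge) that no single pixel reveals on its own.
# Purely a teaching sketch, not the EchoGo model.

def convolve2d(image, kernel):
    """Valid 2D convolution (cross-correlation) over a pixel grid."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge filter: responds strongly where dark meets bright.
edge_filter = [[1, -1],
               [1, -1]]
frame = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
print(convolve2d(frame, edge_filter))  # [[0, -18, 0], [0, -18, 0]]
```

In a trained network the filter values are learned from labeled outcomes, and the filter responses are fed onward to produce a classification.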
Speaker 1 (07:34):
Interesting. And then, you know, one of the things as well is that we're talking about echocardiograms, and there's a variety of cardiograms. There are the traditional electrocardiograms that I get at my annual physical, which is kind of the leads that are on my chest and arms. But there's also the transthoracic echocardiogram and the transesophageal echocardiogram,
(08:00):
so which of those are being used as, like, the kind of input data for your AI model?
Speaker 2 (08:08):
Sure. So the ECGs, or electrocardiograms, are the ones that pick up your heart rhythm. They look like the squiggly lines across the paper, and they're the ones with the electrodes. They're typically used to pick up rhythm abnormalities. And then you've got echocardiograms, which are cardiac ultrasound videos of the heart,
(08:32):
and that's the most common noninvasive test used to image the heart to check for functional diseases like heart failure, and that's the one that we work on. So we feed data from the cardiac ultrasound video into our AI model to try and detect heart failure, essentially.
Speaker 1 (08:52):
Yeah, and it's interesting, because it is, as you said, a noninvasive echocardiogram. When I'm looking at images of it, it looks like the same sensor that is used for pregnancy ultrasounds on the stomach, and it's not invasive like going through the esophagus to get to the heart.
Speaker 2 (09:13):
Yes, exactly the same. There are about thirty million echocardiograms done in the US every year, so it's a very high-volume modality, and that's part of the reason we selected it as
one of the best areas where we can address this
(09:33):
gap of underdiagnosis of heart failure patients.
Speaker 1 (09:38):
Yeah, and so what is the actual form factor of the platform? I know we're on an audio-only podcast, but is it simply software that you run on a computer, or is there hardware that utilizes the AI model?
Speaker 2 (09:56):
Yeah, it's just software, a string of software models that pre-process the images through a pipeline, and then finally the actual AI model that does the heart failure detection, and then it produces a report. And so the workflow is, we can plug into any hospital system where the images are stored.
(10:20):
We can bring the images into this AI pipeline, and then we can send a report which detects heart failure back into the hospital system where the images are stored. So clinicians don't even necessarily realize it's happening; it's happening in the background. They don't have to log into a user interface to use it. Just after their scan
(10:45):
is done, they see the EchoGo report in their system, and it's got a report as to whether the patient has heart failure or not.
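In outline, the background workflow Ross describes, pull newly stored echo studies, run the pipeline, push a report back with no clinician interaction, might be sketched like this. Every name here, and the stand-in scoring function, is hypothetical, not Ultromics' actual API:

```python
# Hypothetical sketch of the background workflow described above:
# pull new echo studies from the hospital image store, run them
# through a pipeline, and push a report back automatically.
# All names and the toy scoring function are illustrative only.

def fetch_new_studies(image_store):
    """Pull newly stored, not-yet-processed echo studies."""
    return [s for s in image_store if not s.get("processed")]

def run_pipeline(study):
    """Pre-process the images, then run the detection model."""
    pixels = study["images"]
    score = sum(pixels) / len(pixels) / 100.0  # stand-in for the AI model
    return {"patient_id": study["patient_id"],
            "heart_failure_probability": round(score, 2)}

def push_report(study, report):
    """Send the report back so it appears alongside the scan."""
    study["processed"] = True
    study["report"] = report

image_store = [{"patient_id": "A1", "images": [60, 80, 85]}]
for study in fetch_new_studies(image_store):
    push_report(study, run_pipeline(study))
print(image_store[0]["report"])
```

The design point is that the loop runs server-side against the image store, which is why, as Ross says, the clinician never has to press a button.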
Speaker 1 (10:52):
And it sounds like, therefore, it's an agnostic software tool that can be used with any of the imaging devices that are on the market currently.
Speaker 2 (11:01):
Exactly. Yeah, we're vendor neutral, and we can work with any ultrasound hardware device and any PACS system, which is where they store the images.
Speaker 1 (11:12):
Interesting. And so with this technology, as with anything in medical devices or medical technology, there's always a need for clinical data to support the technology. What clinical data do you have that supports the use of EchoGo for heart failure?
Speaker 2 (11:30):
Yeah, great question. So we have several publications now which have externally validated our device. Our pivotal trial, though, was our original publication, which was published in JACC: Advances. That was a trial with thirteen hundred patients across eight hospitals in six different states in the US, and we compared
(11:54):
our device performance against the current clinical risk scores. In that trial, what we found generally about the device performance was it had an eighty-nine percent sensitivity and eighty-six percent specificity, so a really high-performing device, very, very accurate at detecting HFpEF. But crucially, when we compared it to
(12:16):
the clinical risk scores, we found the clinical risk scores were not able to diagnose about sixty percent of patients, so they were coming up as indeterminate or intermediate scores, and we were able to successfully classify all these patients that the clinical risk scores couldn't diagnose. I
(12:39):
think this is one of the biggest problems with heart failure.
It's often an unknown diagnosis, and I think that's where
our technology can really help.
Speaker 1 (12:46):
Yeah, it's interesting. When you say that, you know, sixty percent of the time it's not getting a result, or it's having a kind of indeterminate result, kind of to paraphrase Anchorman, the control group looks like it works every time, only forty percent of the time. Is that the right way to think about that?
Speaker 2 (13:08):
Yeah, exactly. So the current clinical risk scores, in our trials and also in other trials we've seen, are only able to classify about forty percent of patients, and they do really well on that forty percent, but they miss sixty percent, and that's where our technology can come in. In comparison, we're able to diagnose ninety-three percent of patients.
(13:30):
So there's a massive difference in what our algorithm is able to do versus the current clinical risk scores. So clinicians can be more confident, after the echo has happened, as to whether the patient is showing signs of HFpEF.
Speaker 1 (13:43):
Yeah. And even then, if, like you said, they're indeterminate sixty percent of the time, they've got to do a second kind of analysis or a second diagnosis somewhere along the way. Actually, I'm curious, though: what are your thoughts on why those kind of traditional ways of diagnosing miss, or are indeterminate, sixty
(14:05):
percent of the time? Is it just because they're more qualitative kinds of assessments versus quantitative, or is there something else that's driving that kind of high indeterminate rate?
Speaker 2 (14:15):
Yeah, I mean, I think it comes down to the disease state. HFpEF is a very difficult disease to visualize and diagnose from an imaging modality. The gold standard typically for diagnosis of HFpEF is actually a cardiac catheterization, which is an invasive procedure
(14:36):
where you measure the pressures of the heart. And without doing that (and you can't cardiac cath everyone that comes in with shortness of breath, it's such an invasive procedure), you have to kind of assemble all of the noninvasive measurements together to try to understand whether a patient has HFpEF. And it's
(14:57):
kind of like this constellation that you're trying to piece together, and this is why it's missed so often. And so the things that you're trying to put together are the patient history, the patient symptoms, you know, the blood biomarkers, the echo findings, and you're trying to assimilate all of those pieces into a clinical picture. But often they don't always correlate with each other, and you're
(15:18):
left with this "I don't know what the patient has," which is why an AI model like ours that can say this patient is suggestive of HFpEF or not, or they've got a probability of zero point seven five of HFpEF, is really useful to clinicians when they're trying to assemble this information.
Speaker 1 (15:38):
Yeah. And so with the clinical data, and with the FDA approval for EchoGo Heart Failure, what has been the commercial strategy for bringing that to the heart failure cardiologists in the field?
Speaker 2 (15:52):
Yeah, good question. Look, we're fully implemented now with sites across the US. You know, a lot of AI companies in this space have not yet implemented their solution. So I think one of the things that kind of sets us apart, and the stage that we're at, is we've actually implemented it in
(16:15):
workflows in hospitals. Our device is actually being used, which means a few things. We're obviously FDA cleared, we have reimbursement codes, but we've also figured out the way to distribute the device and fit it into clinician workflows. So to get this to clinicians, the workflow was really important
(16:37):
and is a really important part of the commercial model.
We're able to integrate it into the workflow so that clinicians don't really have to do anything. We're able to pull the scans ourselves; they don't even have to send them to us. We're able to send back the reports without them having to press a button, and we're able to do it in a timely manner, so that within six or seven minutes the report has
(17:01):
run through our models and is back for the clinician in their system, and they don't know any of this has happened in the background. And so getting that workflow right, where it truly doesn't disrupt anything the clinician does, has been really important to the commercial model. And the other key components of the commercial model, which I briefly touched upon, obviously, are having the FDA clearance. You can't
(17:25):
do anything until you're FDA cleared, and we were fortunate enough to get a breakthrough device designation and the FDA clearance, and also the reimbursement codes, which are very important here in the US.
Speaker 1 (17:35):
Yeah, and you know, that's the thing about the reimbursement too. I noticed that you have both, you know, a new technology code for the inpatient setting as well as a Category III CPT code for the outpatient setting. It makes sense conceptually to have, you know, a parallel path.
(17:55):
I guess one of the questions I have first is, I would assume it's more of an outpatient setting where the analysis is being done. Is that true? Or is there enough of the inpatient where it's almost fifty-fifty?
Speaker 2 (18:08):
Yeah, I think the best use case of the technology is probably in the outpatient setting, and that's because this is where patients usually first present with dyspnea, which is the most common symptom, and you really want to pick them up in the outpatient setting before they become hospitalized,
(18:29):
because that's when the therapies can be more effective, if you find the patients earlier. That said, we still do have effective therapies for patients after they've been hospitalized, and it is still important to pick these patients up in the inpatient setting, because it's still a difficult diagnosis to make even in the inpatient setting. But I would say the majority of our use case is more outpatient
(18:52):
than inpatient.
Speaker 1 (18:53):
Yeah. And so with the outpatient setting and the Category III CPT code, which is not the full permanent code yet, what are the steps to get to a Category I code? Is there going to be additional data that you're going to need to present? I mean, you did have a recently published article in the Nature journal.
(19:15):
Is that enough to potentially get you to a permanent code, or is there going to be other work that needs to be done?
Speaker 2 (19:21):
Yeah. So to get a Category I code, the criteria are: you need at least five publications, one of which has to be a prospective randomized controlled trial, and then you need to have widespread use of your technology. Those are the two main criteria that are set out. We obviously have enough publications now with our external validation studies.
(19:46):
We do have a prospective RCT underway at the moment here in the US, and we're hoping that completes this year. And then the widespread use criterion is you have to be in enough hospitals, processing enough volume, for CMS to think it warrants a Category I CPT code.
Speaker 1 (20:13):
Okay. And then with that randomized trial, when you say complete, is enrollment going to be complete this year, or do you think the full data will be collected by the end of this year? And, maybe I'm just spitballing here, you know, at ACC twenty twenty-six you could maybe have enough data to be able to present there. Or am I getting too excited about, you know, the strategy?
Speaker 2 (20:33):
I think that's great. So, yeah, I
think we will target ACC twenty twenty-six. The goal is to have the primary readout by the end of the year and target that ACC conference, hopefully as a late-breaker, for the initial results readout. So we're excited
(20:56):
for that.
Speaker 1 (20:56):
Yeah. And then, talking about the Nature journal article that was just published recently as well, you know, it was a propensity-score-matched type of comparison. Is it going to be the same type of clinical data that you're going to try to collect, but just in a randomized form? I mean, they
(21:16):
were talking about, you know, classification scores. Are those going to be the metrics that are going to be in the randomized trial?
Speaker 2 (21:25):
We're certainly going to compare to kind of a clinical standard in the control arm. The RCT does go a step further, though. What it tries to show is: can the Ultromics algorithm drive patients to the right treatments earlier than the current clinical standards? So that's
(21:47):
really what we're aiming to achieve, because ultimately, when you create an AI diagnostic device, the whole goal here is: can you find the patients and get them to treatment? Because if you can't get them to the treatment, then there's probably no point in making the diagnosis in the first place.
So the RCT is really going to be geared at
(22:09):
that next step.
Speaker 1 (22:11):
And, you know, we've been talking about the FDA approval and the commercial strategy for the heart failure portion, but you also have FDA approval for a different cardiac disease, cardiac amyloidosis. It really hasn't received much attention compared to heart failure, either with diagnosis or treatment, but there seems
(22:35):
to be more and more interest in this disease state
right now.
Speaker 2 (22:39):
Yeah, it's certainly the disease of the moment in cardiology right now. But previously, even ten years ago, when I went to my first cardiac conference, there was almost no attention on it. And that was because it was thought to be a really rare disease that almost no one had, and there were no treatments available. But now, fast-forward ten years, we're
(23:02):
actually finding out it's not a rare disease at all. In fact, anywhere between ten and twenty percent of heart failure and HFpEF patients potentially have cardiac amyloidosis, which is a huge number of patients, and there are now treatments available, which is very, very exciting. But the disease itself,
(23:23):
the reason it's garnered so much attention as well, is
because it's a very malignant disease. If left untreated, the
prognosis of one type of amyloidosis is less than six
months survival, so it's a very very severe disease. But
now we have treatments that are very effective, and that's
(23:43):
really exciting.
Speaker 1 (23:44):
That's very interesting. And it's interesting that you talked about the ten to twenty percent of HFpEF patients who could have had cardiac amyloidosis beforehand, so it sounds like it has a potentially progressive nature. So if you can catch, you know, the cardiac amyloidosis beforehand, you may be able to stall the progression before it gets to HFpEF.
(24:05):
And then you also talked about treatments. What types of treatments are available right now?
Speaker 2 (24:11):
Yes, there are kind of two classes of treatments currently available. You have stabilizers, and two examples of those that have now been FDA approved are Pfizer's tafamidis, which was the first one through, and then, more recently, BridgeBio came through with acoramidis, which is very exciting. These two
(24:32):
therapies sort of stabilize the protein and stop it turning into amyloid. And then the most recent approval was actually for a silencer, and that was from Alnylam Pharmaceuticals and a drug called vutrisiran, and that silences the production of the protein in the liver. And so
(24:56):
these therapies are hugely, hugely effective, some of the most effective therapies we have in cardiology. So it's an extremely exciting time to be in amyloidosis, because we can actually do something for these patients.
Speaker 1 (25:10):
Well, you know, it also sounds like there's almost a symbiotic relationship, where we have the treatment now, so now we can go out and aggressively start looking for it more. And in order to more aggressively look for it, you would want to have a better diagnostic tool to be able to find it. Is that kind
(25:32):
of the right way to think about it? It's almost like a positive feedback loop at this point.
Speaker 2 (25:35):
Exactly, exactly. And you know, we've said how
hard HFpEF is to find on noninvasive imaging, but amyloidosis is probably even harder, particularly because clinicians aren't necessarily even thinking a patient could have cardiac amyloidosis. And I think, if I'm remembering off the top of my head, on average it takes a patient five echocardiograms for amyloidosis to even be picked up. So yeah,
think about that: they have this extremely malignant disease with a very bad prognosis, and it's taking five echoes on average to be picked up. And that's where, obviously, AI models can have such a big impact. And this
(26:18):
is why we've built EchoGo Amyloidosis, because we can find these patients, and particularly now that there are therapies, they can do something to help stabilize or reverse the course of the disease.
Speaker 1 (26:30):
Yeah, absolutely. And I'm going to sound like a broken record again, as we talked about with heart failure, but what is the clinical data showing a potential benefit of using EchoGo Amyloidosis versus, as you were saying, the five-echocardiograms type of traditional approach?
Speaker 2 (26:49):
Yeah, absolutely. So we actually have just had one of our biggest publications to date, in the European Heart Journal, showcasing our pivotal trial with EchoGo Amyloidosis. This was a huge study, a large multicenter international study with twenty-seven hundred and nineteen patients across eighteen different hospitals, and the
(27:12):
performance was, we were able to have eighty-five percent sensitivity and ninety-three percent specificity, so very, very accurate again. And crucially, against kind of the clinical risk scores, what we found was the EchoGo algorithm would identify thirty-six point four percent more patients than the current clinical standard.
(27:34):
So these were patients that would have been missed and could have been put on treatment, and this is the gap that we're trying to fill here.
Speaker 1 (27:42):
Yeah, absolutely. And just refresh me for a moment. My understanding is the sensitivity rate is how accurately you're able to find a positive result, and specificity is how successfully you're able to determine a negative result. Is that the right way to think about it?
Speaker 2 (28:01):
Yeah. So sensitivity and specificity, I think, always cause kind of confusion. If you're not in the field that I'm in, it's always difficult to wrap your head around which one does what. But sensitivity, in the simplest terms, can be thought of as the true positive rate: of all the people who have the disease, how many did the test correctly identify? And
(28:23):
so really, if you have a high sensitivity, you're reducing the number of false negatives. And then specificity is the opposite of that, which is the true negative rate: of all the people who don't have the disease, how many did the test correctly identify? And so with high specificity, you're reducing the number of false positives you find. So
(28:45):
depending on the device, it's important to tailor it to either have a high sensitivity or a high specificity, or balance them so that they're equal.
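Ross's definitions translate directly into arithmetic. A minimal sketch, using made-up confusion-matrix counts that happen to echo the pivotal trial's eighty-nine and eighty-six percent figures:

```python
# Sensitivity and specificity from confusion-matrix counts,
# exactly as defined above. The counts are made up for illustration.

def sensitivity(true_pos, false_neg):
    """True positive rate: of everyone WITH the disease, fraction caught."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """True negative rate: of everyone WITHOUT the disease, fraction cleared."""
    return true_neg / (true_neg + false_pos)

# Example: of 100 diseased patients, 89 were caught (11 false negatives);
# of 100 healthy patients, 86 were cleared (14 false positives).
print(sensitivity(89, 11))  # 0.89 -- few false negatives
print(specificity(86, 14))  # 0.86 -- few false positives
```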
Speaker 1 (28:56):
Yeah, because you don't want to have false positives, or else that's going to start funneling patients into more invasive diagnostic tests, or even, you know, pharmaceutical types of treatments that maybe they don't need. So it's important to have that specificity as well.
Speaker 2 (29:11):
Well, exactly, exactly. I think anyone could make an algorithm with a high sensitivity. All you have to do is make it say yes to everything. But the downside of that is you're going to have a bunch of false positives and overcall disease.
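Ross's point about a say-yes-to-everything algorithm is easy to demonstrate with toy numbers: such a classifier achieves perfect sensitivity and zero specificity.

```python
# A degenerate "always positive" classifier: perfect sensitivity,
# zero specificity. The labels are toy data for illustration only.

def always_positive(_patient):
    return True

truth = [True] * 10 + [False] * 90          # 10 diseased, 90 healthy
preds = [always_positive(p) for p in truth]

true_pos  = sum(p and t for p, t in zip(preds, truth))
false_neg = sum((not p) and t for p, t in zip(preds, truth))
true_neg  = sum((not p) and (not t) for p, t in zip(preds, truth))
false_pos = sum(p and (not t) for p, t in zip(preds, truth))

print(true_pos / (true_pos + false_neg))    # 1.0 -- catches every case
print(true_neg / (true_neg + false_pos))    # 0.0 -- flags every healthy patient
```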
Speaker 1 (29:24):
That is also FDA approved. What's the commercial strategy for getting that out there? I'm assuming it's in more of, you know, the early innings of a launch, and then what's the path for reimbursement for that test as well?
Speaker 2 (29:44):
Yeah, so we were quite purposeful in going from EchoGo Heart Failure to EchoGo Amyloidosis, and they fit very neatly together as part of our platform, because, you know, when you diagnose someone with HFpEF, then one of the next questions you've got to ask yourself is, are
(30:06):
there signs of amyloidosis? So the two really work well together in a clinical workflow. And so the idea is that every time we run Heart Failure and find a positive patient, that will trigger Amyloidosis to rule in or rule out amyloidosis for that patient.
So the commercial strategy with Amyloidosis is very much tied
(30:28):
to EchoGo Heart Failure, and the two kind of go hand in hand together. Reimbursement-wise, we have a code for heart failure with preserved ejection fraction. The majority of amyloidosis patients fit under that umbrella, and so I think we're quite comfortable with the EchoGo
(30:49):
Heart Failure code being able to cover the majority of use cases of amyloidosis. There is a question, you know, for the ones that don't, of whether we will pursue a separate code, and we're looking into that. But I think the current strategy is that the EchoGo Heart Failure reimbursement code would act as an umbrella that EchoGo Amyloidosis could leverage.
Speaker 1 (31:10):
Yeah. And so, kind of consolidating it all together, you have a lot of developments on both sides. You also just announced a Series C fundraising, which is probably good timing given, you know, all of these projects that you're working on. What are the proceeds of this financing going to be used for?
Speaker 2 (31:31):
Yes. So a Series C is typically a very kind of commercial scale-up round, and that's exactly what we're using the Series C for. So we're very grateful to have just closed our Series C round. The round is fifty-five million dollars in size, and it was co-led by Legal & General, Allegis Capital, and
(31:56):
Lightrock, and then we've got continued support from our current investors, such as Oxford Science Enterprises, Google Ventures, Blue Cross Blue Shield Ventures, and Oxford University. And we also got a couple of major US health systems that invested,
which is part of the story that I really like
because it shows that, you know, the technology must be
(32:18):
valuable if the health systems are investing as well, and they are University of Chicago and UPMC. So yeah,
we were very thrilled to raise it. The proceeds are going towards both scaling up, particularly here in the US, and the product roadmap, building
(32:41):
out the future modules for EchoGo.
Speaker 1 (32:44):
Yeah. And, you know, let me say it this way: you're building out for that future. When we're thinking about heart failure five to ten years from now, and we're thinking about Ultromics five to ten years from now, how are those going to develop together?
Speaker 2 (33:02):
Yeah, I think it's a really interesting thought exercise to think about what heart failure, and specifically how AI plays a role in heart failure, looks like in five to ten years' time. You know, for us, I think we would hope that, at the level of an echo,
(33:22):
when a patient comes in for a routine outpatient echo, we'd be able to, one, help with diagnosis and reduce the misdiagnoses at that point, but, number two, really accurately phenotype them, so the clinician would understand, after the echo is done, what treatments or what next
diagnostic tests the patient needs, and they would understand it there and then, without having to go through and try to figure out the pieces of the puzzle. So I would hope that in five to ten years' time, what you're seeing is an extremely accurately diagnosed and phenotyped patient at the level of the echo, which is one of the most common noninvasive tests being done, and that these
(34:07):
patients can be guided to treatment straight away, without having to wait six months, a year, or even being missed entirely. That would be my hope.
Speaker 1 (34:20):
And, I'm going to talk about a positive feedback loop again, but is this going to be a case where, as you have more data, your AI model is going to be more accurate; with more accuracy, you're going to be able to expand use of the model, which then gets more data? Is that going to help that process of getting to the specific
(34:42):
phenotypes, to then almost have a personalized treatment strategy for those patients?
Speaker 2 (34:49):
Yeah, exactly that, exactly that. And heart failure is a very active space, with new therapies coming through, so we're going to have to continually respond to the new therapies coming through. There are new medical devices potentially coming into heart failure,
and so the platform is going to need to be
able to figure out which patient has what phenotype and
(35:11):
which patient responds best to which therapy, so the clinician can have all of that information there and then, at the echo. So it's kind of this precision phenotyping, finding the right patient for the right therapy at the level of the echo, and that will require constant innovation on our part to continually update our AI models to
(35:33):
be able to do that. So it's a really exciting
space to be in at the moment, particularly for AI.
Speaker 1 (35:38):
It does sound like an exciting time. And I'll just close out this episode. I tend to ask my guests if there was a certain book, or books, either that you read during your college days or that you read recently, that has left an impression on you as you go through the next few years
(35:59):
of this expansion and development of, you know, EchoGo. I'm just curious if there was anything specific that I can add to my reading list. Of course, my wife is going to yell at me for getting another book for my library, but yeah, I'm always curious.
Speaker 2 (36:14):
Yeah, absolutely. I mean, I've got two books that kind of help with general first-principles thinking, if you like, and help me more generally on a day-to-day basis, but it does feed into innovation and how to think through the product roadmap. And then another one more specific to startups. So the two books that
(36:34):
are my kind of absolute favorite, everyone-should-read, general-purpose books are Poor Charlie's Almanack, which is the wit and wisdom of Charlie Munger, and Fooled by Randomness by Nassim Taleb, two books that are fundamental, I think, to many different careers. And then the startup
(36:57):
specific book that's really helped me build the startup over the last eight years and go from zero to Series C, which is where we are today, is The Hard Thing About Hard Things. It's a classic kind of startup book, with so many lessons in there that I've been able to use in my journey.
Speaker 1 (37:14):
That's very nice. And Ross, thank you so much for joining us today. I appreciate you coming in.
Speaker 2 (37:20):
Thanks Matt, it's been a pleasure.
Speaker 1 (37:22):
And thank you to our listeners for tuning in today
and we hope you join us for future episodes. If
you'd like to stay up to date, you can click
the subscribe button on Spotify or your favorite streaming platform.
Take care.