Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Hey everyone,
fascinating discussion today as we dig into AI-powered healthcare, with a real innovator in the field of digital diagnostics. Mark, how are you?
Speaker 2 (00:13):
Hey, doing great, Evan.
Speaker 1 (00:16):
Thanks for being here. Really intrigued by the amazing work you're doing, and we're going to dive right in with Irma from Avira Health, maybe with the first question and topic.
Speaker 3 (00:29):
Yeah, so obviously today we're happy to have Mark on the program. We are going to be diving into the AI-powered revolution transforming healthcare, and the topic specifically is digital diagnostics, which happens to be in the name of your company, Digital Diagnostics. You have an FDA-cleared system to autonomously detect diabetic
(00:51):
retinopathy. So can you step back and walk us through the founding vision of Digital Diagnostics and how Luminetics Core brought that vision to life?
Speaker 2 (01:02):
Yeah, absolutely. Our founder, Dr. Abramoff, has actually been working in the field for a really long time. He had the idea for doing what we now call Luminetics Core way back in the '80s when he was in grad school, and he's been working on it ever since, waiting for the technology to advance as the field has come forward. If you look at his early research papers, he's done so
(01:24):
much in the field, from fundamental computer vision and some foundational computer vision libraries all the way through to what we have now with Luminetics Core. So his vision the whole time has been that the computer has the ability to see and sort information much quicker, more reliably, and more consistently than a person does. It doesn't get tired. So how can we use that to help deliver this eye care to people
(01:45):
who desperately need it? Because the problem we see, and that we're going after, is a huge lack of access to care, because there just aren't enough ophthalmologists. Or even if you're in a place that has enough ophthalmologists, are you able to get to them, are you able to see them? So using technology to help accelerate access to care, that was the vision for the company, and to really help bring that care
(02:07):
equitably to as many people as possible at an affordable price. That's what we've been setting out to do and working on ever since.
Speaker 1 (02:14):
Wow, wonderful mission. So let's dive in. Tell us about your core AI algorithm. How does it work, how is it maybe different from other similar systems, and what makes it unique?
Speaker 2 (02:27):
Yeah, so Luminetics Core is a software-only medical device that's designed to diagnose more than mild diabetic retinopathy. We work in combination with a device called a fundus camera, which is a special kind of camera that can take a picture of the back of your eye, the retina. And the reason you need to use a special camera is that the way the eye is designed to bring light in doesn't make it easy
(02:48):
to actually see what's going on in there. That design is what makes the eye work so well, but it also makes it hard to look at. So we have to use this special camera to take a picture of the back of your eye, and there's so much cool stuff we can do with it. What we do with Luminetics Core is look for biomarkers, which are specific markers in the
(03:10):
image that indicate something happening biologically in the eye. There are all these different features you can see on an image. Just like you can see where a break is on a bone on an x-ray, on a picture of the back of your eye you can see all sorts of different things happening. So what we've done is train our model to look at these biomarkers in the eye, the same way a physician gets trained when they're going
(03:31):
through school. We're looking for specific biomarkers, and then, in aggregate, after our machine learning model looks at all these different biomarkers, we come up with a determination based on those biomarkers and what we know from the ETDRS scoring table. That determination is whether or not you have a diagnosis of more than mild
(03:51):
diabetic retinopathy, in which case we refer you out to a physician to get treated, or it looks like you're good, you don't have DR, and we'll see you back here in a year.
So fundamentally, this machine learning algorithm takes pictures, applies the model to determine the outcome of what we think is happening in those pictures, and then, once we get that result, it goes into the
(04:11):
patient's chart, but it's also delivered to the patient right at the point of care. And that's maybe the most interesting thing to me about this device: when a patient comes in, they get this picture taken of their eyes, and within about a minute they can get a result right there at the point of care. We've found that that makes a huge difference in terms of how people react to this information and what they do about it in
(04:33):
terms of their outcomes. We actually have some cool papers showing that patients who get these results at the point of care, when they get an image, actually will make interventions. They'll go get that follow-up done to protect their vision. Some patients will even go so far as to start addressing their underlying issues with A1C and diabetes. So it's a hugely transformative thing to get access to that
(04:53):
information. What normally would happen is you'd have to go to your primary care provider, they would refer you out to a specialty care doctor, and then you'd have to go to that specialty care doctor, get the images done, and wait another couple of days or even weeks to get the results back. People get lost in that process, and so having the result at the point of care has a huge impact. And that's maybe the last interesting piece: we focus on doing our screenings in
(05:16):
the primary care setting, at the regular provider you already see, rather than going out to all these specialty clinics. Because that's really the problem: if you could go to the specialty clinic, you'd probably already be there, but you can't. There just aren't enough of them, or people aren't able to access them.
So by changing where we do the exam, we're able to reach many more people, and that really helps close
(05:36):
that care gap.
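The screening flow Mark walks through above, individual biomarker detectors whose outputs are aggregated against an ETDRS-derived cutoff into a refer-or-rescreen result, can be sketched roughly as follows. This is an illustrative outline only, not Digital Diagnostics' actual implementation: the detector functions, biomarker names, and threshold are all hypothetical stand-ins.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ScreeningResult:
    referable: bool            # True -> refer out to a physician for treatment
    scores: Dict[str, float]   # per-biomarker confidences behind the decision

# Stand-ins for trained biomarker detectors. In the interview these are ML
# models that each look for one biologically meaningful feature (e.g.
# hemorrhages) in a fundus image; here each stub just reads a confidence
# from a dict so the pipeline shape is runnable.
def detect_microaneurysms(image: dict) -> float:
    return image.get("microaneurysms", 0.0)

def detect_hemorrhages(image: dict) -> float:
    return image.get("hemorrhages", 0.0)

def detect_exudates(image: dict) -> float:
    return image.get("exudates", 0.0)

DETECTORS: Dict[str, Callable[[dict], float]] = {
    "microaneurysms": detect_microaneurysms,
    "hemorrhages": detect_hemorrhages,
    "exudates": detect_exudates,
}

def screen(image: dict, referral_threshold: float = 0.5) -> ScreeningResult:
    """Run every detector, aggregate, and return a refer / rescreen result."""
    scores = {name: fn(image) for name, fn in DETECTORS.items()}
    # The real system maps biomarkers onto the ETDRS severity scale; a plain
    # max over confidences is used here purely as a placeholder aggregation.
    severity = max(scores.values())
    return ScreeningResult(referable=severity >= referral_threshold,
                           scores=scores)
```

Under these placeholder rules, an image with a strong hemorrhage signal comes back referable, while an unremarkable image returns a non-referable result and the patient is rescreened in a year, mirroring the two outcomes described in the interview.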
Speaker 3 (05:40):
I've certainly had that picture of the back of my eye taken, but I had to go to a specialist to do it. I didn't even know that kind of access exists, and I didn't know what was happening behind the scenes with all that data and how it's being processed. So let's talk about what's happening behind the scenes.
(06:01):
We hear a lot about AI being used the right way. So, from your perspective, how do you bake ethics and equity into the development and deployment of your technology?
Speaker 2 (06:12):
Yeah, no, you can see it right in the company tagline: AI, the right way. It's fundamental to everything we do, and it starts at the very beginning, before the product even exists. The whole process has to be built around this ethical framework. There's a paper that our founder, Dr. Abramoff,
(06:34):
helped contribute to with a group called the Center for Collaborative Ophthalmic Innovation, the CCOI. The CCOI is a group of physicians, industry experts, patients, and advocates who all get together to come up with ways to make things better for everyone, and one of the processes they came up with was a total
(06:56):
process lifecycle for how to do things equitably, based on bioethical principles. For us, that starts with the data we use. We know where the data comes from: it comes from patients who have consented to be involved in the process and help us do our model training or model validation. That carries through doing analysis and understanding of the disease state and historical access, all the
(07:17):
way through the data gathering and the design, then even as we go into designing trials and picking trial sites, and all the way into post-market surveillance, where we make sure we're monitoring the system so that it's still able to be used safely and still performs in the field the same way it did back in the lab when we designed it and in our trials when we tested it with the FDA.
(07:37):
So there's a whole process-based piece about the ethics that we think is important all the way through the lifecycle. It's not just one thing you do once; it's not just a checkbox. It's a big part of our culture, from gathering the data to building the product the whole way through.
Speaker 1 (07:53):
Incredible. So, as you know, we're in the midst of a diabetes crisis in the US, and probably globally. In some places, absolutely, a third of children might be pre-diabetic. It's shocking. So what are the biggest hurdles to implementing your solution, autonomous DR screening in primary care?
(08:15):
I think you have some great examples. OhioHealth, I see on your website, has been an amazing rollout, but tell us more.
Speaker 2 (08:25):
Yeah, I mean, that's really who we've been looking to partner with. There are a couple of different types of providers we go after. Academic medical centers are one big group that we work with because, again, a lot of times the problem there is they have enough physicians, they have enough cameras, but they don't have enough time to do it all. In the bigger academic settings they have a lot of other cases
(08:47):
going, and you get this huge backlog of screening exams. We talked to a large academic center in the mid-Atlantic region, and one of their problems is that the ophthalmologists have a couple hundred DR screening exams to read every night while they go through their Epic inbox. It creates delays in patient care, creates huge dissatisfaction for the physician, and just going in there clicking no, no, no, no, no over and over again is tedious and time consuming.
(09:10):
It's really not the best use of all their skills and training. We think the best use of physicians' time is with the patients, not staring at pixels on a screen. So that's where we've found a lot of success: in those larger centers, where they just have a backlog they can't clear, and then there's the access piece with the other, primary care settings. SSM is a great group that we work
(09:34):
with in a bunch of different states in the Midwest. They have care settings all over the place, and that's great in terms of access to more rural locations, or places where you're not in the big city at an AMC. We have success stories with both.
Speaker 3 (09:48):
Yeah, speaking of success stories, you have a global presence, and obviously healthcare systems are different around the world. So tell us what you've learned from international deployments, for example, your partnerships in Saudi Arabia.
Speaker 2 (10:04):
Yeah, it's definitely a very interesting process. Just from the commercial side of things, it's a totally different sales process. Regulatory is more similar than you might think, in that if you have your FDA clearance, then you can usually use that to get into those markets with a lot less effort than starting from zero.
(10:26):
But that's because we've already gotten the FDA clearance. One of the things that's really interesting, too, is that the focus on integration and scale is very different. For a lot of customers, and Saudi is a great example, they have huge access issues in terms of huge patient populations with no physicians. So there, they might be able to staff up 20 or 30 rooms to do
(10:46):
Luminetics Core exams, but there won't necessarily be physicians on the receiving end of a referral to actually treat the patient, so the problems are a little bit different. In the US, if we can catch the disease, we can typically route you to a provider to get you treated. In Saudi, one of the issues we've seen is that just because we've been able to diagnose you here, now we have to take you to another place to figure out the treatment.
(11:08):
So there are different pieces there as well, but I think we're still a little early days in some of those other markets. We're largely focused on the US right now, but it's really exciting to see the appetite and the engagement from folks in different markets like Saudi or the UAE or other places in the Middle East.
Speaker 1 (11:26):
Amazing, and can you share any clinical outcomes or metrics from real-world use cases?
Speaker 2 (11:34):
Yeah, I mean, we have a bunch of papers on our website, so I don't want to misquote them from the top of my head and get any of the numbers wrong. But the biggest thing I always get excited about, my favorite story, is a case study about a patient who changed their behavior after they got the real-world exam. That one I always personally love. The other one that's really important is looking at
(11:56):
real-world outcomes of patients who are treated with the AI versus patients who are not. Again, with a point-of-care result, you get better follow-through in terms of the long-term patient outcome, and that, to me, is probably the most exciting piece: recognizing that if you have your exam interpreted by the machine, we can show with science that you actually will
(12:16):
have a better outcome than waiting for a physician to do it. So I think there's a lot of opportunity there, and we're really early days in terms of gathering evidence there, too. We have some early data, but if we want to look at things like visual acuity or other longer-term diabetes outcomes, we've got research projects we're working on in collaboration with some academic partners to
(12:37):
look at more of this real-world evidence and to keep generating it. Because one of the biggest things we've seen is that as we build this evidence, we get more trust from physicians, and then it creates this virtuous cycle of more adoption and then more data, and we can keep the snowball rolling. Yeah.
Speaker 3 (12:53):
Speaking of virtuous cycles, I know you're exploring other disease areas like glaucoma or age-related macular degeneration, even neurodegenerative diseases. What application is next in your pipeline, and why?
Speaker 2 (13:27):
Yeah, so there are a couple different things we've got going on. The easy, obvious one is that our device is currently cleared on a specific camera, and you can't stick a giant device in a little closet. So one of the things we're working on right now is getting access on more cameras, including handheld cameras, because that access piece is such a key piece. We think that being able to diffuse what we have out
(13:50):
to more people is hugely valuable. And then, in terms of some of the oculomics pieces, we're doing a lot of data gathering and research right now with academic partners to figure out what other diseases we can detect from the eye, which ones we're actually going to be able to make claims for, and then go through with a new device submission. There's a lot of cool research on neurodegenerative diseases, cardiac diseases, kidney disease, all sorts of cool stuff we
(14:12):
can do. It comes down to being able to say, all right, what benefit can we give to the patient? Where is there a workflow problem we can solve? That was part of why we picked diabetic retinopathy as our first disease: we knew there was a problem that this automation could eliminate. We don't want to just pick a new disease like AMD, where maybe, depending on what kind of AMD you have, there isn't a treatment, so an AI that can detect it isn't as useful, compared to something like cardiology, where if
(14:35):
I say, hey, it looks like maybe we need to get you on some statins, that'd be a lot more compelling, a lot more valuable for patients. So we're trying to evaluate based on both the reliability of the system and the data, and also the potential impact for patients and providers. A lot of the AI systems I've seen in other
(14:58):
specialties, and radiology is a great example, are helping the providers do things. They're not necessarily thinking about the patient or that part of the workflow. But what we've seen as a major differentiator in stickiness is helping the patient, not just helping the providers. It's assumed that the providers need help, but if you keep the patient centered, everything else typically flows much better than if you're trying to
(15:18):
just, you know, use technology for technology's sake. Some of these AI models are really nifty, but are you changing and helping, or are you just putting a hat on a hat?
Speaker 1 (15:31):
from the outside.
Yeah, makes sense.
You've also cracked the code onFDA clearance and reimbursement
models, which is a huge deal.
Any advice for your peers orother innovators who are going?
Speaker 2 (15:46):
through that at the
moment, early on, right?
So let's say you wanted to do a screening product for glaucoma. Glaucoma screening is defined in the Social Security Act, and it's defined really explicitly: it has to be a human physician who does the screening if you want to get reimbursement from the US government.
(16:07):
So sometimes you need to get laws changed. You need to understand what insurance companies and providers want to do. We have a whole team that networks with providers, that talks to CMS, that talks to all these different groups. Because the other thing to think about when you're designing your clinical trial and your evidence is that the evidence you need for the FDA is to show that
(16:27):
your device is safe and effective and secure, which is different from what you need to show an insurance provider: that your device is better than an alternative and is cheaper than an alternative treatment. So you have these sort of not exactly conflicting but not purely aligned goals, and all of that needs to be contemplated up front when you're coming up with your device design and idea,
(16:49):
so you have that buy-in from payers, so that they're actually going to be interested in reimbursing your service. Because the worst thing in the world is when you see a cool breakthrough device and then no one can get paid for it, and it just sort of dies on the vine. We did a lot of work up front on that reimbursement piece. And maybe the last piece, on the FDA and the regulatory front, is that it's a very dynamic environment right now. The way we've gotten cleared, the feedback
(17:11):
we've seen in subsequent submissions, and what we hear from peers in the industry all suggest that things are changing a lot right now. Really the best advice I have is to get a really strong regulatory partner who understands the landscape, who understands the relationships and the things that matter, and then follow their advice. Because a lot of times what I've seen from other folks,
(17:31):
especially in the startup space or the health tech innovator space, is that they look at regulatory as a checkbox thing you do at the end of the process, as opposed to an integral part of the whole thing. You really need to think about your destination before you even start your journey, and if you don't, it can become a really hard problem, trying to fit a square peg in a round hole in the last mile. So doing that upfront planning for both the reimbursement and
(17:54):
the regulatory is hugely valuable, and they're intertwined. You've got to think about them together, even if you're going to have to meet them with slightly different goals and objectives in your trial.
Speaker 3 (18:04):
Yeah, I like your thoughtful approach of thinking through all these concerns ahead of time, and I love how you said earlier that you put patients at the center of adding value with your technology, not just providers. That's how we move this whole ecosystem forward.
(18:25):
So maybe a little bit of a controversial question. You touched on this, but let's address it directly. Some people are concerned that autonomous AI could threaten radiologists and ophthalmologists, and critics are concerned about physicians being displaced by computers. Clearly, you've already laid out how this is an adjunctive
(18:48):
process, a helpful tool. So maybe just explain from this specific perspective: augmenting doctors, not replacing them.
Speaker 2 (19:01):
100%.
Our goal is to help doctors spend more time with patients, not less, because the real value for the ophthalmologist is to sit there with the patient, talk to them, look at their eye, and be able to explain what's going to happen to them or how they're able to treat whatever disease they have. Sitting in the back office, looking at a computer screen and typing up a report, doesn't create value for the patient,
(19:26):
really, or for the physician. They're doing it because someone needs to make the record, not because it creates value. So I think the big opportunity is to take things that don't create value for the physicians off their plate. Not that the task itself isn't valuable. Interpreting images is a hugely valuable task, and even with some of the multimodal models I'm seeing in the radiology space, there's a lot of complexity.
(19:46):
I mean, if you look at what AI clearances radiology devices have, there aren't autonomous devices there. I don't know if there will be autonomous devices, at least not in the current regulatory framework. And if you look at expert groups like the ACR, the ACR is not in favor of autonomy. They don't think that's the end goal either, and so I tend to agree that having an autonomous, sci-fi kind of device
(20:08):
isn't the real North Star. I don't think that's where we're going. I think we're going to have devices that take painful, tedious things out of the mix, just like spreadsheets took away all the manual paper process for accounting and let people be speculative and do things. AI is going to be very similar. It will take some of the cognitive burden and workflow burden off of physicians, and ultimately that will allow them
(20:31):
to spend more time with their patients, whether that's doing treatment or explaining things. I think that's really where the value is. I see some of the AI tools that are maybe more patient-focused, and I always worry about that. If I'm a patient, I want to talk to a person. Talking to a machine is not going to reassure me when I'm going through a medical event. In fact, I kind
(20:51):
of want to talk to someone I have trust with, and as much as I trust the machine to hit the brakes on my car, that's not the same as telling me how to feel about a diagnosis or what my outcomes might be. That's where the machine, when it really works well, takes those distractions away from the provider, and it can just be two people having an interaction. But we have a long way to go, with a lot of the technology, to
(21:11):
get there, and I don't know that everyone agrees with that vision. We see a lot of interesting things out in the field.
Speaker 1 (21:19):
Well, interesting things indeed, and amazing work. We're all rooting for you. Many of us have friends and family members with diabetes. It's such an important mission. What's up for the rest of the summer and fall? What are you guys up to? Where can folks meet you or see you out and about?
Speaker 2 (21:35):
Um, we've got a couple different conventions and things coming up. I actually don't know off the top of my head what our marketing schedule is, so I'll have to follow up on that. ARVO is always a big show for us, or, um, sorry, not ARVO, AAO in the fall. So we'll definitely be there, and I think there are a couple other trade shows coming up between now and then too, but
(21:58):
AAO in November, I think, will be the big one.
Speaker 1 (22:01):
Good luck with that.
Speaker 3 (22:02):
Can people find you
online?
Speaker 2 (22:04):
Oh yeah, definitely. We're on all the different social things, LinkedIn, and our website, digitaldiagnostics.com.
Speaker 1 (22:11):
Well, thanks for joining. Really appreciate the insight and the work you're doing, and thanks everyone for listening and watching and sharing this episode. And be sure to check out our new TV show, TechImpact TV, now on Bloomberg and Fox Business. Thanks everyone. Thanks, Mark. Thanks, Irma. Thanks, Mark. Take care.