Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Dr Andrew Greenland (00:02):
So welcome to Voices in Health and Wellness. This is the podcast that shines a light on the innovators transforming patient care and clinical outcomes through thoughtful technology and strategy. I'm your host, Andrew Greenland, and today I'm honoured to speak with Dr Glenn Loomis. Glenn is the founder and CEO of Query Health, an AI-driven platform designed to streamline patient interviews and deliver actionable clinical analytics in real time.
(00:24):
With over two decades of leadership in healthcare, ranging from running multi-speciality practices to serving as a chief medical officer, Glenn brings a unique blend of hands-on clinical insight and executive experience. Glenn, thank you so much for joining us. It's a real delight to have you on the show this afternoon.

It's my pleasure to be here, thank you.

So maybe you could start at the top and talk a little bit
(00:45):
about your background and what inspired your transition from family medicine into healthcare innovation. It'd be really interesting to hear.
Dr Glenn Loomis (00:53):
Sure, well, I'm a family doc by background. I started off in the military, then was in teaching for about 10 years, and the last 20 years of my career were really spent running medical groups and health systems. And then, about five years ago, I decided that I'd sort of burned
(01:14):
out on health systems for a while. Technology has always sort of been my side gig, if you will; I've always been the Epic guy or the Cerner guy, or whatever system we were using. So I decided to try some different things. I joined a startup and worked on an AI scribe for a while, before LLMs, so it wasn't quite ready for primetime yet, and then
(01:37):
worked for a digital health company for a couple of years, running their medical group, and that was very insightful. And then I decided I really wanted to go back to what I'd always wanted to do, which is sort of build a digital doctor. When LLMs came out and it looked like it actually might be possible, I started my own company last fall, and I've been working on that ever since.
Dr Andrew Greenland (02:02):
Amazing. So, 20 years in clinical medicine. I'm just trying to get an understanding of what made that switch for you. I know you had a big interest running all the way through, but I'm very curious to know what drove you. Was there a particular gap you spotted where you thought you had to get into this, or was there some other kind of thing that helped you make the jump?
Dr Glenn Loomis (02:22):
No, that's a great question. I've always been a techie kind of guy, but I was working with the American Medical Association, chairing their Council on Long Range Planning, and I really got into AI. I really started understanding, almost 10 years
(02:43):
ago now, that it was going to really change healthcare, maybe not immediately but certainly in the long run, and I wrote the first primer on AI for the AMA. As I did that, I really got involved with some other folks, so I started working with IBM on a project looking at what we
(03:04):
could do with Watson. Could we actually make it think and act like a doctor? It really wasn't up to the task yet, but it inspired me to keep up with what was going on in the AI world, and as I saw that there were now tools to do what I wanted to do, that inspired me to take the leap and
(03:26):
actually go out on my own. If you'd asked me when I was in medical school or residency whether I thought I'd start a tech company someday, I'd have said you needed your head examined. But here I am.
Dr Andrew Greenland (03:40):
Amazing. So was there a particular gap that you saw in patient care or data that really stood out to you at this point of transition?
Dr Glenn Loomis (03:48):
Yeah, there are really two things that have inspired me. One is that in the US we really have a crisis in terms of our ability to actually get patients in to see providers. Access is a real problem for us, and I know sometimes in Great Britain it's a problem as well, but we don't have enough primary
(04:15):
care providers specifically, but really all providers. I've been running medical groups for a very long time, and so I was looking for a way to make doctors more efficient. When you look at what AI can do, a lot of what it can do are things that could offload certain tasks that physicians do
(04:36):
and maybe make physicians more efficient. So that was one. And the second thing is that, as a chief medical officer and a president of a medical group over the years, I got to see all the worst things that happen in medicine, right? I see all the mistakes that get made, and the people that don't follow the guidelines, and things like that. So, again, one of the things that AI does really well is it follows the
(04:59):
rules. It goes out and finds the guideline and says, this is what you ought to do, right? And so if I could put those two things together and make providers both more efficient and more effective, I think that's a winning strategy. And if I could do that in a way that also made patients feel more heard, that's an even better strategy. And so that's
(05:23):
really what I set out to do: create an AI agent that will actually offload different parts of the physician workflow, and do it in a way that makes them more efficient and also, hopefully, more effective from a quality and patient safety standpoint.
Dr Andrew Greenland (05:43):
Amazing. So how has your clinical background shaped the way you approach tech development? You're probably a rarity, coming from medicine into tech, compared to people who go straight in, and 20 years of clinical experience surely has shaped the way you approach this.
Dr Glenn Loomis (05:59):
It's funny: on LinkedIn yesterday there was a post about clinician builders and how we approach things differently than, perhaps, the typical 20-something MBA. And I think we do approach things differently. I like to say there are a lot of people who have dashed themselves on the rocks of trying to reform
(06:24):
healthcare, or transform healthcare, because healthcare is a monolith and it's highly conservative by nature, and for good reasons, right? I mean, our first maxim is, first do no harm. That's what we learned as doctors from the start, and so we're very conservative. And if you don't understand
(06:44):
how medicine works from the inside, and where the levers are that you can push to actually make change, you get very frustrated very quickly. And so I think that's where I am different. I've run big systems, and I've been a part of organized medicine.
(07:06):
I know where the levers of power are and how we can change things for the better. And I believe that we can change it from the inside and make it better for both patients and physicians and all providers, by doing it within the system rather than trying to
(07:28):
create a revolution. I'm not sure revolutions will ever work in healthcare.
Dr Andrew Greenland (07:34):
Okay, and from your vantage point, what major shifts are you seeing right now in healthcare delivery or health tech? Obviously, we're in a very fast-moving situation with AI, but what are you seeing?

Dr Glenn Loomis (07:43):
Yeah, I think the first one is just the use of scribes, the digital scribes that listen and transcribe what you're saying with the patient. I think that was the first use case for AI, and it's done a lot of good for physicians' mental health, if nothing else.
(08:04):
Unfortunately, if you look at the studies, they would say it doesn't really save time for most providers, and if it does, it saves just a small amount. So we need to go beyond that to other tools that are actually going to take out parts of the visit, parts of the time that patients are spending with physicians, or physicians are
(08:25):
spending on paperwork, et cetera, and give physicians back some of the time they've lost to all of the administrative trivia we have to do nowadays. Likewise, I feel like patients have gotten squeezed, right? Physician visits have gotten shorter and shorter over the
(08:46):
years, and because of that patients oftentimes feel shortchanged, like they don't have time to tell their story. So again, if we could use AI (AI doesn't care how long you talk to it; you can talk for an hour or 10 minutes or two minutes), we can give patients, I think, time to
(09:07):
tell their story in a better way. That gives us better data as providers and, at the same time, allows the provider to save a large amount of time in the visit, because they don't have to do that data extraction. Data extraction from patients takes about 50% of our time; about 50% of a visit is taken up with us trying to pull that out. If we could get that done by AI ahead of time, you could make
(09:29):
us much more efficient, and that's really where I've been focused.
Dr Andrew Greenland (09:35):
Cool. And in terms of the clinicians using your tools at the moment, how are they finding them, and how are they most benefiting from the AI power you have behind them?
Dr Glenn Loomis (09:45):
Sure, I mean, we're still sort of in beta with it, so we've been doing a lot of testing and refining, working a lot on its EQ, if you will, on its bedside manner, so that the AI learns to take turns in a
(10:08):
proper way, talk to patients in a way that's very reassuring, hear when the patient is frustrated, and those kinds of things. In terms of the actual data piece, I think all the providers who have used it have been really amazed at how good it actually is, even though it's our first
(10:30):
generation of it, and it'll certainly change and improve as we go along. Out of the box, I'd call it a good, solid A-.
Dr Andrew Greenland (10:41):
Brilliant. Compliance and data privacy are a huge issue around AI. How are you balancing the usability of the tools you're creating with this whole issue of compliance and data privacy?
Dr Glenn Loomis (10:54):
Yeah, that's a hard one, right? Because in this new world there's a lot of data going in a lot of directions, and so we're trying to make sure that we very much honor the patient's need for privacy. One of the things we've done is actually put all of the patient's data in the patient's hands.
(11:15):
So part of our app is a personal health record: they get all of their data, they have access to it, and they have control over who is able to access it. So if a new physician wants to access their data, we send a two-factor authentication to say, hey, this person is asking to see your data. Do you want them to have access or not?
(11:36):
Things like that. So we're trying very much to comply with things like HIPAA and GDPR, and to make sure that patients feel their data is protected. At the same time, we need that data to make our application better all the time, to train the application. So I think we've done a good job of trying to de-identify
(11:59):
the data so that we can use it in the background to make our application work better. But it's a dance, for sure, trying to make sure that patients feel their data is secure and protected while, at the same time, using the data in a way that is effective. Likewise, when we send the data out to the large language
(12:27):
models, to the AI, we strip all the identifiers out so that it can't be traced either. So yeah, very much a dance, but we're trying very hard to protect patient data, because that's extremely important.
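[Editor's note: the identifier-stripping step described above, removing direct identifiers before anything is sent to a large language model, can be sketched roughly as below. This is an illustrative sketch only; the patterns, placeholder tokens, and function names are hypothetical, not Query Health's actual pipeline, and real HIPAA Safe Harbor de-identification covers many more identifier classes.]

```python
import re

# Illustrative patterns only; a production de-identifier (e.g. HIPAA
# Safe Harbor) must handle 18 identifier classes far more thoroughly.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d+\b"),
}

def deidentify(text: str, patient_name: str) -> str:
    """Replace direct identifiers with placeholder tokens before the
    text leaves the system for a large language model."""
    text = text.replace(patient_name, "[NAME]")
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Called on a string like `"Jane Doe, MRN: 12345, called 555-123-4567 on 3/4/2024"`, this yields `"[NAME], [MRN], called [PHONE] on [DATE]"`, so the de-identified text can still be used for training or analytics without being traceable to the patient.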
Dr Andrew Greenland (12:39):
It's very important; it's a very sensitive area, and I think for most clinicians it's one of the things they fear most about the whole AI thing. What about the tools themselves? I mean, is there a learning curve to these things, or have you and the people behind them tried to make them feel as intuitive as possible for busy providers? You just want to basically pick things up and run with them, without getting bogged down in manuals and training and all this sort
(13:01):
of thing.
Dr Glenn Loomis (13:03):
Yeah, we're trying to make it as low-lift as possible, so that you can basically pick it up as a patient and just use it. We will have a couple of screens that say, push here to do this, push here to do that, but it's pretty intuitive. On the provider side, what we are attempting to do is make it so that
(13:25):
the patient talks to it, you come into the office, you talk to the patient, and at the end the whole note is available for you. We'll actually have the scribe piece of it done, so that you can
(13:45):
just keep going with that same visit the patient did with the agent. And then, as a provider, the only time you need to touch the EHR will be to actually enter orders and push send. That really is our goal: to get physicians off the keyboard as much as possible.
Dr Andrew Greenland (14:06):
Cool. And you mentioned the patient experience. What is the patient experience? Do patients have a reluctance to talk to bots and robots and AI tools? How do they feel about all of these things?
Dr Glenn Loomis (14:19):
It's funny, you know, the data would say that patients actually prefer talking to bots and agents over humans, unless you tell them it's a bot or an agent, in which case they say they prefer it less. So, the human psyche is an interesting thing.
(14:41):
We often are able to fool ourselves into thinking one thing or another, depending on what the setup is. But in general there's a lot of data that says, for example, that patients will tell a bot much more than they'll tell a psychiatrist about their mental health, because
(15:04):
they feel less judgment, et cetera. And so we're trying to make this as close to the encounter they would have with a physician as possible, while at the same time preserving that lack of judgment and that ease of interaction for the patient, so
(15:28):
that they feel like they can say anything, because that's the important thing, right? I mean, as providers, we need all of the information if we're going to make great decisions. And so if things are hidden or forgotten, or people just don't want to say them because they feel bad, that really impedes
(15:50):
our ability to make great decisions for the patient and with the patient.
Dr Andrew Greenland (15:56):
And on the other end of the spectrum, the clinician experience. Has the clinician experience so far been positive, or are people skeptical? Are they worried about their jobs being replaced at some point? Where do clinicians sit with this?
Dr Glenn Loomis (16:08):
Yeah, I think the clinicians are skeptical at first, until they sit and see how it is. It's interesting: in our application they can actually see the summary that's generated, and they can see all of the back and forth between the patient and the agent. So they can see the whole transcript, if they want to read
(16:29):
it, and they can go back and forth between the two, so they can see, oh, this is where this came from, and yes, that patient did say that. We're also in the process of instituting that the patient will sign off on the summary before it goes to the provider, and I think that's also going to help providers really feel a sense of calm that, yes,
(16:55):
this is what the patient was trying to convey to me, because they signed off on it at the end. But there is a skepticism there. So far, most physicians don't feel like it's going to replace them, but I think as they learn more and more about what it will be able to do, what AI is going to be able to do,
(17:15):
there's going to be some of that, because it can do a lot of the functions that we have normally done. Personally, I think that, at least in the US, and actually around the world, and especially in developing countries, there's such a paucity of physician access that
(17:38):
I'm not too worried about anybody losing their job right now, at least. I think for the foreseeable future there's plenty of need to go around.
Dr Andrew Greenland (17:51):
Cool. I think I might know the answer to this, but I'll ask anyway: do you think AI is being overhyped at the moment, or underused, particularly in healthcare?
Dr Glenn Loomis (18:01):
I think the answer to that is yes. I think there is a bit of overhype going on, in that everybody is saying, well, my thing has AI; everybody is trying to jump on the AI hype train. When you really get under the hood, there are very few
(18:25):
truly native AI applications, and so, yes, there's overhype. In medicine, though, or in healthcare, I think it's the opposite. As we always are, we have been slow to get on the train, other than a few small things, and so I think in
(18:45):
healthcare we actually are behind where a number of other industries are, and I think you're going to see that explode over the next two years. So, yes to overhype, but yes to underhype, I guess.

Okay.
Dr Andrew Greenland (19:01):
Following on from that one, what do you think is the biggest misconception that you hear about AI in patient care?
Dr Glenn Loomis (19:08):
Oh, that's a great question. I think that probably the biggest misconception is that it makes a lot of errors, that it makes stuff up. Yes, LLMs do make things up, but you can control for that,
(19:32):
right? So, for example, if we're looking at facts, we have an agent that checks the agent, and says, does that guideline actually exist, and did they actually look at it correctly? So you can really reduce the hallucinations that the LLMs
(19:53):
have, and even so, it's still at a very small rate anyway. And so, if you look at the amount of errors that humans make in healthcare, the question is going to come down to one of, and I don't know the answer to this, I'm just going to state it, the question is going to be: which is worse, a
(20:17):
human that didn't use AI and therefore didn't use the right guideline, so you didn't get the right care, or an AI that gets it right most of the time but on this particular occasion made an error, so you got the wrong care? Which is worse, right? Well, I'd argue they're both bad, but I can show you many
(20:38):
papers that would say that, on average, the AI gets it right more often than the human. So is it most important that we get it right most often, or is it most important that we never allow AI to make an error? I don't know the answer to that. I'm sure the courts are going to weigh in on it.
(20:58):
I'm sure the regulators are going to weigh in on it too. But for me, I would tell you that I can see already that we are going to be able to use AI to give physicians the tools so they don't forget things, so they don't make an error because they just forgot about that guideline, or that diagnosis that wasn't at the top of their
(21:20):
list, it was down two or three, and they just forgot to even consider it, or forgot to work it up, or forgot to add that one lab test, right? That is where I think AI is going to make a huge difference from a quality perspective. But that fundamental question does exist: is it most important to be right most often, or most important to never be wrong?
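[Editor's note: the "agent that checks the agent" pattern described above, verifying that a cited guideline actually exists before a draft reaches a clinician, can be sketched as below. Everything here is hypothetical for illustration: the guideline registry, the canned draft, and the field names are not Query Health's implementation, and in a real system the draft would come from an LLM call rather than a stub.]

```python
# Hypothetical sketch of a verifier agent that fact-checks a drafting
# agent's guideline citation before a clinician ever sees the draft.

KNOWN_GUIDELINES = {
    "JNC-8": "Hypertension management in adults",
    "GOLD-2024": "COPD diagnosis and management",
}

def draft_agent(question: str) -> dict:
    """Stand-in for an LLM call: returns a draft answer plus the
    guideline it claims to be based on."""
    return {"answer": "Start a thiazide diuretic.", "cited": "JNC-8"}

def checker_agent(draft: dict) -> dict:
    """Second agent: confirm the cited guideline actually exists;
    flag the draft for human review if it does not."""
    if draft["cited"] in KNOWN_GUIDELINES:
        return {**draft, "verified": True}
    return {**draft, "verified": False,
            "note": "cited guideline not found; flag for human review"}
```

A hallucinated citation (say, "FAKE-99") comes back with `verified` set to `False` and is routed to a human, which is one concrete way a second agent can push the effective error rate below the raw LLM's.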
Dr Andrew Greenland (21:45):
Yeah, it's a good way of putting it, and I suppose the question then becomes: who takes responsibility for an error made by AI?
Dr Glenn Loomis (21:52):
Yes, and I think at the moment it's still our license as the provider, right? It's still our license on the line. So you always have to be diligent and make sure you're watching over it. We still have humans in the loop, we still have
(22:12):
physicians in the loop, for that reason, right? We need that check and balance, and I think if we use that pairing correctly, where you've got AI surfacing good differentials and good treatment plans, but maybe with the occasional error, and then you've got physicians
(22:33):
checking that, and you can do it very quickly, it doesn't take you very long to read that stuff and say, oh well, I'll pick that one, because this doesn't make any sense today, then I think we get the best of both worlds, right, where we get it right most of the time and we don't have those errors. But I think eventually you'll see AI going direct to patients,
(22:55):
at the very least in a sort of basic primary care way. But I think that's a few years off; regulation always takes a long time to catch up to where the tools actually are. So I don't like to predict. I don't remember who said it, but somebody once said we
(23:17):
always overestimate what technology is going to do in the short run and underestimate what it's going to do in the long run, and I think that's where we are on AI, for sure.
Dr Andrew Greenland (23:26):
Cool. So what are you seeing in terms of clinical and operational outcomes so far with the tools you're implementing? Obviously, you must be looking at outcomes on the other side. We know these things can do wonderful things. What are you seeing in terms of outcomes?
Dr Glenn Loomis (23:39):
Yeah, in terms of outcomes, what we're seeing is that it can decrease the time providers spend with patients without making patients feel bad about it; actually, the patients feel as good or better. Our data is still preliminary, and we haven't published anything yet, but that's really
(24:00):
what we believe at the moment: it will reduce the time spent significantly and, at the same time, allow patients to feel even more heard than they feel now. We haven't gathered a lot of quality outcome data yet, because that's in our next release, the clinical
(24:22):
decision support piece of this. For that part, I can only tell you what's in the literature; I can't tell you what we're seeing yet.
Dr Andrew Greenland (24:29):
Okay, and what metrics do your clients use to track the success of this?
Dr Glenn Loomis (24:36):
Yeah, so obviously we have to look at financial metrics, because otherwise I'll be out of business. We're looking at patient satisfaction metrics, seeing whether patients like this or not. We're looking at physician satisfaction metrics as well, and then we're looking at time-spent metrics, and, in the US, productivity metrics like the number
(25:01):
of visits or the number of relative value units we use here. Those are the big ones. We also look at how much time it takes for the AI to do a visit with a patient, and
(25:22):
how many back-and-forths it takes, on average, for that conversation to happen. Again, there's some data out of Stanford on that from before, and we're trying to validate that it actually holds true. Yeah, those are the big things we're looking at. We have a list of metrics as long as my arm, but those are the big ones that I care about
(25:43):
the most.
Dr Andrew Greenland (25:44):
I mean, are you seeing anything around patient retention or staff efficiency at all?
Dr Glenn Loomis (25:49):
I have not looked at that yet, because we're not quite far enough down the road, but that will definitely be something we look at. Yeah, good points.
Dr Andrew Greenland (25:59):
And what's next for Query Health? Have you got any upcoming releases or strategic goals for your company?
Dr Glenn Loomis (26:05):
We do. Our next release, for the fall, will add the scribe portion for the in-person part of the visit. We're also adding to the clinical decision support: we actually have the differential diagnosis now, and we're adding the treatment planning portion, allowing
(26:28):
the physician to say yes to a diagnosis and have it spit out, here's a proposed treatment plan. And then in our winter release we'll be looking to begin to allow patients to talk directly to it, to ask it questions like, what does this term mean? Or, my doctor says I have congestive heart failure.
(26:51):
What's congestive heart failure? And have it explain. So again, we're trying to take the best of WebMD and highly tailor it to a patient and their actual medical record and their actual experience, rather than staying at a very high level. So yeah, that's kind
(27:13):
of where we're headed over the next six to 12 months.
Dr Andrew Greenland (27:17):
Nice. And are you looking to partner with anybody, or to serve more deeply over the next year in your development?
Dr Glenn Loomis (27:25):
We are. We've had a couple of insurance companies reach out to us, and we've actually had a couple of countries reach out to us about potentially taking this internationally. It's interesting: it may well be that our tool can leapfrog, in certain
(27:49):
countries, where we've all been in the developed world, if you will. Because if you take some of the countries in Latin America or Africa, they don't really have good medical records or medical record keeping, but almost everybody has a phone. So if I can give them an agent that can take their history, put
(28:10):
it all in the phone, and they can always have that with them, then suddenly they can go from provider to provider, or into an emergency room, and actually have their record with them in a way that they just don't now. And we can do that very cheaply; it actually looks like we can do
(28:31):
that for less than $5 a year per patient, and so that starts to look pretty appealing in certain countries where everybody has a phone but nobody has a medical record. So we're looking at possible options there, and seeing whether that's going to be something that happens. But I'm excited about the possibilities of serving others
(28:57):
in a larger way, outside of just serving the physician community that I live to serve here in the US. I have a heart for serving patients in places where they don't have access, and it may be that that's the first place where we'll actually be able to use the direct-to-patient primary care, because they just don't have
(29:20):
access to anything. So perhaps access to an AI agent is better than access to nothing, and we'll see. I'm excited about the possibilities, though. I like to dream big.
Dr Andrew Greenland (29:33):
Amazing. What about wearables? Do you see wearables being integrated into the work that you do?
Dr Glenn Loomis (29:39):
Absolutely. Our goal is to integrate Apple Health and Android into our fall release, and certainly by the winter release we're going to integrate as many wearables as possible, because, again, there's a lot of data there that really could inform
(30:01):
the provider's decision making. Our patients are using these things. I mean, I have an Oura ring that I wear every day, and it captures everything, right? But my providers don't know anything about that:
(30:22):
that my sleep was bad this week, or that my heart rate is always fast, or whatever. And so I think there's a way to capture that, summarize it, and give it to the providers in a way that is actually useful, because right now, as a provider, I don't know what to do with
(30:43):
that data; they give me a data point here, a data point there. But if I had all of it, and I knew it was summarized in an appropriate way, I think I could start to use it for the good of my patients.

Cool.
Dr Andrew Greenland (30:59):
Biggest challenge, biggest bottleneck in doing what you do?
Dr Glenn Loomis (31:03):
The biggest challenge by far has been this: LLMs are an amazing tool, and they really are great at doing differential diagnosis and treatment planning, and the scribing part actually isn't that hard, because you've got both people talking. But building an agent that actually thinks like a
(31:24):
physician, takes the history in the appropriate way, does it in a standardized way, and does it for all the different visit types we have, right, that has been an amazing challenge. But we have done it, we've made it work, and it's working well now. And so I think that's a big moat for us, because it
(31:49):
turned out to be way more difficult than we thought it was going to be. That's number one. And number two, one of the things we learned is that engineers and coders are important in this process, but actually having a big group of clinicians to help us create the prompts, test the prompts, make sure this thing actually works the way we think it's going to work, and then
(32:12):
continually recycle that, that has been extremely important, because coders and engineers are never going to be able to write the prompts you need. For any agent, actually, I think subject matter experts are the key to creating
(32:33):
great agents.
Dr Andrew Greenland (32:33):
Final question: what advice would you give someone launching a healthcare startup today, based on your experience? Have a really rich grandparent or something?
Dr Glenn Loomis (32:43):
I think that would be my biggest advice. Raising money is the hardest part. But I would just say, if you have something that you're passionate about and that you love, take a chance, do it, because it's important that we get technology into the hands of patients, into the hands of
(33:07):
providers, technology that's actually going to make a difference and not slow us down the way some technology has in the past. We need technology that's actually going to make our lives better and the lives of our patients better.
Dr Andrew Greenland (33:23):
Glenn, thank you so much for your time this afternoon. It's been a really illuminating conversation. AI is so huge and topical at the moment, and it's really interesting to hear your insights on it from both a medical perspective and a tech perspective. So thank you so much for spending your time and having a conversation this afternoon. Really appreciate it.
Dr Glenn Loomis (33:40):
Andrew, the pleasure has been mine. You're a great host and easy to talk to. Thank you.