Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Gaurav Parchani (00:00):
A normal person would think okay, so you're fine, and then you slowly deteriorate, and then you further deteriorate, and then you go to ICU. Right, it doesn't happen that way. You'll start breathing faster and heavier. Heart rate, respiration rate, blood pressure and pulse oxygen saturation.
Sanjay Swamy (00:13):
Today we are 2 million nurses short in India, right? What's possible now that wasn't possible earlier?
Gaurav Parchani (00:18):
We got the data to the nurse instead of the other way around, and healthcare is definitely due for a big tech revolution, I would say so. We can identify close to 91 to 92% of patients eight hours in advance. So Dozee is the world's first contactless remote patient monitoring solution, along with an AI-powered early warning system. Because every patient's body is different, everybody's
(00:39):
baseline is different. A sensor under my mattress capturing all the vibration can actually capture that dimension as well. We are present in UAE, we are present in Africa and, more importantly, we are present in the United States as well. Right, but non-contact, we are the first one in the world to do it.
Sanjay Swamy (00:55):
All right, I have my good friend Gaurav Parchani here, one of the co-founders of Dozee and notably, according to the other founder, the smarter of the two. So the audience will vote after this. But, Gaurav, it's been a great journey, you know, working with you guys and, over the past four years, really interacting
(01:16):
closely. So welcome to our podcast. Dozee is sort of at the forefront of artificial intelligence and its use in the healthcare field, particularly around patient monitoring, early detection, and hopefully saving a lot of lives. So, welcome to our show.
(01:38):
We'd love for you to share a little bit about your background to start with, and then, you know, how you came across this idea for Dozee, and then we can dive into more details.
Gaurav Parchani (01:48):
Sure.
Hi everyone.
As you already introduced me, my name is Gaurav. I come from Indore. My father works for the Department of Atomic Energy, and in fact I grew up around construction sites, and these are not your everyday construction sites. The facility there, which is Asia's largest particle accelerator,
(02:10):
was a few kilometers from where I stayed, so I've seen it being built, and that's how I think I got curious about engineering, mathematics, problem solving, and that's how I started looking into a lot of things; I was always interested in mathematics. I got selected into IIT Indore, again close to my home.
(02:32):
So everything in Indore so far. First batch of IIT Indore, mechanical engineering, and there I got a lot more into automotive engineering, got a lot more interested in that, particularly a lot of simulations around the automotive industry. So your computational fluid dynamics: how does the fluid or airflow around a vehicle affect its drag? Or crash dynamics: how,
(02:54):
when a vehicle crashes, what happens exactly? Right, and how do you make safe vehicles?
And right after college I joined this company, an American company based out of Bangalore. They have a big development center in Bangalore, where I met Mudit. Again, our team was simulation engineering plus a lot of sensors, developing prototypes so that we can make cars go
(03:17):
faster and be safer as well, and also so the product lifecycle is shorter, so you can develop cars faster and cheaper as well. We worked for a few years over there, but then after some time we always felt that whatever we were doing, and this I'm
(03:39):
speaking more for myself now, whatever I was doing was amazing, as in these were amazing problems to solve. An engineer always wants a good problem to solve: the more complex the puzzle, the more fun it is. But at the other end of the spectrum, it should also create a real-life difference as well, right? So yes, in this case there was a difference. We were interacting with a lot of clients: Ferrari, Mercedes,
(04:01):
Porsche, Toyota, all of these were clients of the company, and we were interacting with them. We got to see what was going to the market and how our software and our simulation solutions were affecting it. But it's not that large of a difference in terms of impact to society.
Mudit came back from a trip, I think from Germany, and this was
(04:27):
the time when an unfortunate event occurred in his family, right. And we started thinking: if you look at the last two decades, every other industry has completely been revolutionized by data and by tech. Right, I know the LLMs are the craze nowadays, but even before that, we started almost nine years ago, and at that point of time we analyzed: two decades ago, imagine, how did you
(04:49):
get your advertisements? Maybe radio, TV, newspaper, roadside hoardings? That's it. Right? Now each one of us gets personalized ads served to us based on what we click. The industry has completely changed. Healthcare, however, still follows the same protocols that were set up decades ago. Whether it is for care, whether it's for
(05:13):
patient safety, whether it's for monitoring patients, keeping vigilance on them, right. And both of us felt this is high time now, that somebody should work in this direction, and healthcare is definitely due for a big tech revolution, I would say so.
Sanjay Swamy (05:31):
One of the reasons, and at this point we are talking, you guys are like 23, 25, around that age?
Gaurav Parchani (05:37):
Yes, I think I was 23, he was 24.
Sanjay Swamy (05:39):
I was just trying to reflect on what I was thinking of when I was 23, and it was not about why healthcare has lagged, for sure. So that's pretty cool. And this is where we started analyzing: why is that so?
Gaurav Parchani (05:51):
Right, like, why is healthcare lagging? It requires much more patience. It requires time. These are not only software solutions that you would fit in; right, like, you have to be ready for whatever is required. Even if we didn't want to build the hardware in the first go itself, right, we figured out there's no other hardware that can give us data on patients,
(06:14):
right. The idea was: yes, e-commerce companies nowadays, and even at that point of time, can figure out when you're going to order your next toothbrush, right. But why can't we figure out when a person is going to crash: next hour, next day, next week, next month? Right, that sort of risk modeling on a continuous basis did not exist, and for us the major reason for this was
(06:36):
lack of data. I think we have amazing engineers throughout the world that can forecast almost everything. You just need to have the data, and a good amount of data for it.
Right, and here the fundamental problem was there's no data continuously available for patients, whether they are at home or at hospitals or wherever they are. And healthcare particularly is slow, primarily because it's a very highly regulated industry, in my opinion.
(06:58):
I would say it is the second highest regulated industry after space tech, and rightly so: you're dealing with patient lives, patient safety. But it can be difficult for somebody without a lot of backing, funding, experience, to start something and survive in this particular industry. I think, with a little bit of luck,
(07:19):
with a lot of hard work and amazing partners such as yourself, I think we've been able to beat the odds there.
Sanjay Swamy (07:27):
But, yeah, it's been a very rewarding journey, and we've just gotten to the start line, right? I mean, the journey is ahead of us.
Gaurav Parchani (07:32):
In that sense, yeah. I think it's nine years now, but it still feels like we've just begun, right. So I think there's still a long, long way to go, but yes.
Sanjay Swamy (07:43):
So awesome. I think that's a great, you know, introduction to your background and, you know, the why. So tell us a little bit about what Dozee is, and maybe, you know, just a couple of, like, Twitter-style, the original Twitter-style, responses to get our audience a little oriented. What's the problem?
(08:03):
What is Dozee? And, you know, where have you all reached today in this journey?
Gaurav Parchani (08:10):
So Dozee is the world's first contactless remote patient monitoring solution, along with an AI-powered early warning system. What's the problem that we're essentially tackling?
Sanjay Swamy (08:22):
Can you just break that down? That's like a mouthful in itself.
Gaurav Parchani (08:25):
Okay, right, for people to be able to understand each of those, yes. So Dozee is the world's first contactless remote patient monitoring solution and an AI-powered early warning system. Monitoring traditionally happens with a lot of these contact-based, probe-based systems. If, unfortunately, somebody's been in an ICU, they would know
(08:46):
about it. You have ECG for getting your heart rate or your rhythm. You have a cuff-based blood pressure monitor for getting your blood pressure. You have a nasal cannula put into your nose to capture your respiratory cycles, right. You have an oxygen saturation probe that's put on your finger, and so on. There are so many wires and everything put on. So for the current standard of care, if you have to be monitored in
(09:09):
a hospital, you have to be put on a basic minimum of these four to five probes on your body. We can go more invasive if the patient is riskier, in ICU, but outside the ICU the patients are not that risky. So you have to put all of these on.
So there's a screen there wherethe data remains next to the
bedside.
So the nurse or the healthcareprofessional or the doctor has
to come to the patient, to thebedside to actually look at the
(09:32):
data, right.
There are major reasons wherethis doesn't work.
First, outside the ICU,patients are ambulatory.
What I mean by that is patientsget up, move around, right.
They may go to the washroom,come back, or they may be
scheduled for an x-ray.
They'll go for an x-ray andcome back right, and if we ask
the nurse again and again to putthese probes back on the
patient, it's going to be anoperational nightmare.
(09:57):
Today we are 2 million nurses short in India, right? Imagine now I'm asking you for more nurses, for operationalizing monitoring particularly. Second problem, directly related to nurses: the data is on a screen, where the nurse has to go to the data, right. And this is exactly where, outside the ICU, because of what the WHO prescribes, we need to have a one-to-four nurse-to-patient
(10:19):
ratio. But in the best of hospitals you will see one to six, one to eight, and as you keep going down to tier-two towns, as you keep going to public hospitals, you will see one to 10, one to 15. Personally, I've seen one to 30 as well in the country. Now imagine being that nurse and looking at 15 screens at one time. Yes, you could do it in an ICU, because ICU is one to one, so you are associated with one patient for six hours, so you can
(10:41):
take care of that patient. But imagine being associated with 20 patients, right, and looking at 20 different screens. Not possible.
So that's why, at Dozee, we decided: can we rethink monitoring? And in that, can we get rid of all the wires? And wherever we cannot, we'll make it wireless. But let's just get rid of all the wires, and in fact, let's get rid of contact itself.
(11:01):
Right, can we make monitoring as easy as being part of the furniture, something that is very passive in nature? Usually when people are making products, they think of active engagement and so on and so forth. Our thought was completely the reverse. We want to automatically collect data. We want to be as passive as possible. The patient should not even notice, right; their experience should be that good.
(11:21):
And this is how we solved it, by making the sensors contactless. And I'll explain about the sensors. We get data continuously. All the patient has to do is lie down on the bed, no wires to be connected. If they go to the washroom and come back 10 minutes later, the recording automatically starts again.
Second, we got the data to the nurse instead of the other way around.
(11:42):
Right, so we connected all of this to the cloud and gave nurses access at their nursing stations and on their smartphones. Even doctors, while they are outside at their home, or maybe in their procedures or in their OPDs, could actually get to see: okay, I have 10 patients admitted, these two are at very high risk, there's an alert on one of them. Right, so we got data out from the bedside, from all
(12:03):
of the patients, to the healthcare professionals. These two are the major paradigm shifts, and this is where the value we can generate comes from. Imagine again being that nurse.
Sanjay Swamy (12:15):
So you said two big things. One is, instead of sensors being fitted on the body of the patient, the ideal situation is there is no sensor fitted to the patient, and that's what makes it contactless. Yes, in a few situations you may still need to have contact, but it still doesn't need wires. It'll be wireless, but it might still be a patch or something
(12:36):
like that attached to the patient.
Gaurav Parchani (12:40):
So we get three vitals contactless: your respiration rate, heart rate and non-contact blood pressure. And for oxygen saturation, temperature and ECG rhythm, we have three other separate modules that are completely wireless, without compromising patient experience or compromising patient safety as well.
Sanjay Swamy (13:00):
And all the data comes to the nurse station. So if I'm a nurse, I've got this, you know, cockpit, so to speak, or, you know, control center, so to speak, and I'm monitoring everyone. But I may not even be physically in the hospital, right? So a doctor could be monitoring.
Gaurav Parchani (13:14):
Absolutely. This has given rise to, and this is something that I didn't even think of when we started Dozee nine years ago, command centers, now completely based on this, remotely managing multiple hospitals in one shot, right. So there is a multi-layer escalation system where the nurse gets some data she has to take some action on. If they miss it, then a hospital-level RRT, rapid
(13:36):
response team, gets it, and if they miss it, then there's a command-center-level RRT, rapid response team, that essentially gets to it. So you can rest assured that your relatives, or whoever is in the hospital, are well monitored, are under constant surveillance and are getting the best care that they deserve. So for us cricket fans, it's like having a third umpire sitting, yes, somewhere else, not at the stadium, and making
(13:58):
decisions. Yes.
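The multi-layer escalation described here (ward nurse, then hospital-level RRT, then command-center RRT) can be sketched as a simple chain walk. This is a minimal illustration; the tier names and the acknowledgement logic are assumptions for the sketch, not Dozee's actual implementation.

```python
# Illustrative escalation chain: an unacknowledged alert moves up one tier.
ESCALATION_CHAIN = ["ward nurse", "hospital RRT", "command-center RRT"]

def escalate(alert, acknowledged_by):
    """Notify tiers in order until one acknowledges the alert.

    acknowledged_by: index of the first tier that responds, or None
    if the alert goes unacknowledged all the way up the chain.
    Returns the list of tiers that were notified.
    """
    notified = []
    for i, tier in enumerate(ESCALATION_CHAIN):
        notified.append(tier)
        if acknowledged_by is not None and i >= acknowledged_by:
            break  # alert handled at this tier; stop escalating
    return notified

print(escalate("SpO2 low", acknowledged_by=0))     # handled by the ward nurse
print(escalate("SpO2 low", acknowledged_by=None))  # escalates to the top tier
```

The point of the design is exactly what's described in the conversation: no single missed alert is final, because each tier is a backstop for the one below it.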
So now imagine again being that nurse. Yesterday you had four vital readings per patient, right? Why? Because of all the reasons that I spoke about, right: monitoring is not possible outside the ICU. So today, the current standard of care, and just to give you some numbers: in every country, including India, close to 90 to 95% of hospital beds are non-ICU
(14:18):
beds. So we're talking about 2 million hospital beds in India, and 1.875 million of them are non-monitored, outside the ICU. Now, on all of them, such monitoring, or continuous monitoring, is not possible. So what we end up doing, and this is what I meant by the last two or three decades, is spot checks. So the nurse has a round schedule, every four hours to every six
(14:42):
hours, depending on the hospital protocol, depending on the patient condition. They'll go next to the patient, take all the readings, four or five readings, and put them on a chart paper. When the doctor comes around, they'll get to see four dots connected with lines. Is that nearly enough to get the trends, or the picture, of the patient? Like, why get four images of the patient when you can get a full high-definition video, right?
(15:02):
And this is what leads to the need for monitoring. But imagine again being that nurse. Till yesterday, you had four values per patient per day. Today you have, for hundreds of patients, a value a minute, right? So much data; it can be overwhelming. And this is where AI comes in. Because, in order to do
(15:25):
risk stratification, in order to triage patients, who is at higher risk, who needs attention first, right, where you may need urgent care, that sort of risk stratification is something that AI does, and that is what we call an early warning system. And there are multiple types of early warning systems around the world, but they are all statistical in nature. They all have to be hand-calculated, because they've been made
(15:45):
easy for somebody to collect these vitals quickly, calculate the score on the back of an envelope, and be able to take decisions on that basis.
But with AI, we're starting there. We're exactly mimicking what early warning scores and systems have been doing across the world, the NHS in the UK being one of the leaders: they've made full protocols around the system, and it's a standard process that they
(16:07):
follow.
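The hand-calculated scores being described here add up a small per-vital band score. A minimal sketch, with simplified bands loosely modeled on the published NEWS2 chart (temperature and consciousness omitted); the bands are illustrative only, not Dozee's scoring:

```python
def band_score(value, bands):
    """Return the score of the first (low, high, score) band containing value."""
    for low, high, score in bands:
        if low <= value <= high:
            return score
    return 3  # outside all defined bands: highest concern

# Simplified, NEWS2-inspired scoring bands (illustrative only).
RESP_BANDS  = [(12, 20, 0), (9, 11, 1), (21, 24, 2)]       # breaths/min
SPO2_BANDS  = [(96, 100, 0), (94, 95, 1), (92, 93, 2)]     # %
PULSE_BANDS = [(51, 90, 0), (41, 50, 1), (91, 110, 1), (111, 130, 2)]
SBP_BANDS   = [(111, 219, 0), (101, 110, 1), (91, 100, 2)]  # mmHg systolic

def early_warning_score(resp_rate, spo2, pulse, systolic_bp):
    """Aggregate score; higher means higher risk, like a hand-tallied EWS."""
    return (band_score(resp_rate, RESP_BANDS)
            + band_score(spo2, SPO2_BANDS)
            + band_score(pulse, PULSE_BANDS)
            + band_score(systolic_bp, SBP_BANDS))

print(early_warning_score(16, 98, 72, 120))   # stable patient → 0
print(early_warning_score(26, 91, 134, 88))   # decompensating → 12
```

This is exactly the kind of back-of-envelope arithmetic a nurse can do at the bedside, which is why these scores became protocol; the trade-off discussed next is what that simplicity costs.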
But this is where the future lies. We can go way beyond that. Why be limited by human capacity to consume five numbers or four numbers, right, when we have a stream of vibration data coming from the sensor, which is at least 250 to 500 samples a second? A second, not a minute.
(16:28):
So we are seeing much richer data and many more dimensions than a human is getting to see, and this is where we have the opportunity to not just replace what humans have been doing, but actually go one step beyond as well. And that's where the future of early warning systems lies, and that's what I'm particularly very, very excited about.
Sanjay Swamy (16:44):
Wonderful.
So what you're saying is: initially, you know, there was a lot of manual effort in getting the data. Now you've automated the ability to get the data, and by making it contactless, it's easier to actually capture the data, et
(17:06):
cetera. But now you've created a new problem, because there's too much data, and the nurse that was struggling to get the minimum amount of data is now suddenly being overwhelmed with a lot of data. And that's where the analysis that you're doing, and extracting the alerts, plays a big role here. So all this is to try to do what they were already doing, but making it possible to do it for a larger group of people and more consistently.
(17:26):
And now you're saying there's something beyond what they were currently doing. So tell us more about that. What's possible now that wasn't possible earlier? I guess that's the question.
Gaurav Parchani (17:41):
Yeah, absolutely. It's essentially when a person decompensates, right, when a person is basically going through a cycle of health deterioration, whichever comorbidity they're coming from: let's say it's a liver patient, or a kidney patient, or whatever neurological patient it is. Whenever a code blue is announced in a hospital, and I'll
(18:02):
define a code blue: code blues essentially mean emergency events where a patient requires emergency care, and they're either shifted to an ICU in an emergency or, unfortunately, the patient passes away then and there itself, and this is where you register a code blue. Every hospital has a procedure to do that, because it's required by regulations and compliances, where they essentially record
(18:26):
these events, and they need to have processes: when this happens, who will come and administer care? How will you triage? How will you diagnose? How will you give the patient better care? Now, what we've seen in research, right, is there are signs which deteriorate at least four to eight hours in advance.
(18:49):
At least; it can be even more as well for some people, right, where you would essentially see there is a clear deterioration in cardiopulmonary systems, an insufficiency of the cardiopulmonary systems. What that means, if I translate it to normal English, is essentially: no matter which comorbidity you're coming
(19:11):
from, whether you're coming from liver, whether you're coming from neurological disorders or cardiovascular health or whatever, finally, at the end of the day, the code blue happens when a person crashes. And a person only crashes when either the pulmonary system is crashing, so your lungs are either filled with fluids and you cannot breathe properly or you're going breathless, or it's essentially that something happens to your heart and your
(19:32):
heart stops at some point of time. Right, so it's either the heart or the lungs: cardiopulmonary insufficiency, insufficiency meaning they're not able to deliver the output they were supposed to give, in terms of blood or in terms of the oxygen that you need to get. You're not getting that.
Now, there are four vital signs: heart rate, respiration rate,
(19:54):
blood pressure and pulse oxygen saturation. They have a clear relation to this particular event that occurs. This has been going on for so many decades, but it's so unfortunate that we don't have the majority of data around these processes, on how the cycles of decompensation happen. Right, so a normal person would think, okay, so you're fine,
(20:15):
and then you slowly deteriorate, and then you further deteriorate, and then you go to ICU. Right, it doesn't happen that way.
There are cycles of these compensatory mechanisms that the body induces itself. So imagine: in order for you to function, you require a lot of oxygen. Right, that oxygen is going to every cell in your body. Who is taking that oxygen to every cell in your body? The hemoglobin in the blood.
(20:35):
Where is the blood getting it? Because you're inhaling it, and your lungs are actually transferring the oxygen there. Now, if you require more oxygen, right, and it's not reaching your peripheries, or it's not reaching your other body parts or other organs, for example, right, the first compensatory mechanism of the body is to increase your respiratory rate. It's the easiest thing the body will do, and so you'll start
(20:56):
breathing faster and heavier. That's the first mechanism, and then you'll start feeling fine, a little bit, right. But this is not the cure; it's not a cure, right, it's going to go out of hand. And then what essentially happens when it goes out of hand? The other mechanism that the body has is the blood. So if I'm not getting enough air in the blood, I'll send more blood, so that it reaches the body faster,
(21:20):
so your heart rate would go up, or your blood pressure would go up as well, right. And then finally, when nothing works out, your oxygen saturation will drop, right. And these three or four vitals keep going in combination, up and down and up and down, and finally, after this cyclical nature, there comes a point where you say that, okay, this person is not able to compensate at all, and then they crash.
(21:40):
That is where you announce a code blue. So these abnormalities in vital signs can actually be picked up, with proactive alerts, when you set up continuous monitoring around these vitals. Now, this is the statistical way to do that, right, and this is quite effective, by the way. So in fact, we ourselves, using these techniques, have proven
(22:00):
in clinical studies that we can identify close to 91 to 92% of patients eight hours in advance by these abnormalities in these vital signs.
But these vital signs are, now, they're effective, 92%, as I said. The dark part behind that is that they're very, very sensitive, which is amazing, in that they are capturing all patients, but they are not very, very specific.
(22:22):
What that essentially means is that, yes, I'll get 10 alerts for a patient who has to go to ICU, right, but I might also get one or two or three alerts for a patient who is actually doing okay, or is doing moderate and then recovering faster and then going home. So it's actually increasing a lot of alarm fatigue. So that's one problem with these vital-sign early warning systems that are completely statistical.
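"Sensitive but not specific" has the usual confusion-matrix meaning here. A minimal sketch, with made-up patient records, of how the two rates are computed:

```python
def alert_performance(records):
    """records: (alert_fired, actually_deteriorated) pairs, one per patient.
    Returns (sensitivity, specificity) of the alerting rule."""
    tp = sum(1 for fired, sick in records if fired and sick)
    fn = sum(1 for fired, sick in records if not fired and sick)
    fp = sum(1 for fired, sick in records if fired and not sick)
    tn = sum(1 for fired, sick in records if not fired and not sick)
    sensitivity = tp / (tp + fn)  # share of true deteriorations caught
    specificity = tn / (tn + fp)  # share of stable patients left alone
    return sensitivity, specificity

# 10 illustrative patients: the rule catches both true deteriorations
# (perfect sensitivity) but also fires on 2 of 8 stable patients, and
# those false alarms are what drive alarm fatigue on a ward.
records = [(True, True)] * 2 + [(True, False)] * 2 + [(False, False)] * 6
print(alert_performance(records))  # (1.0, 0.75)
```

Scaled up to hundreds of beds, even a modest false-alarm rate per patient produces a constant stream of spurious alerts, which is the alarm-fatigue problem being described.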
So a lot of false alarms also, or at least in some cases.
So you will definitely catch a patient who requires care, but you will also get alerts on patients who do not require care. And that is also primarily because every patient's body is different, everybody's baseline is different, and associating one early warning score with everybody, in terms of just
(23:04):
vitals, right, is going to give you this yield only. So you are saying that the alerts should also be sort of personalized as much as possible? So that's the next step from this. But what is the one step beyond that as well? Right.
Heart rate, you would get one every minute. Respiration, you would get one every minute.
(23:24):
Blood pressure, you would get one every few minutes. SpO2, you would get one every minute. Right. Now, why only these four values, one every minute or every few minutes? Right, only four values. Because what you're essentially doing is averaging all of this out, and you're representing: this is how the body is performing every minute. Right, and you're losing a lot of information in that. And this is where the next generation of AI comes in, where
(23:45):
I don't want to lose all of that information. For example, my respiratory rate being 35 does not tell me at all whether it's shallow or deep, whether I'm breathless or not breathless, whether I have increased my effort of breathing or not. Right, but a sensor under my mattress capturing all the vibration can actually capture that dimension as well. So I'm getting the entire signal, I'm counting the number
(24:06):
of respiratory cycles, reporting that to a doctor, and throwing the other information out, because the doctor, or the healthcare professional, can understand, but cannot consume, all of that information. Right? Imagine, for thousands of patients, and hundreds of thousands of patients, it's very difficult to consume that information. But a machine has no problem in consuming that information.
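As a toy illustration of why the raw vibration stream carries more than one number a minute: the sketch below pulls both a breaths-per-minute count and a crude depth-of-breathing proxy out of a synthetic trace. The zero-crossing method and the sine-wave input are assumptions for illustration; a real contactless-monitoring pipeline is far more involved.

```python
import math

def respiration_features(signal, fs):
    """Estimate breaths/min and a depth proxy from a band-limited
    chest-vibration trace sampled at fs Hz (toy sketch only)."""
    mean = sum(signal) / len(signal)
    centered = [s - mean for s in signal]
    # Count one upward zero-crossing per respiratory cycle.
    breaths = sum(1 for a, b in zip(centered, centered[1:]) if a < 0 <= b)
    minutes = len(signal) / fs / 60
    rate = breaths / minutes
    # Peak-to-peak amplitude is a crude stand-in for breathing effort,
    # the "shallow or deep" dimension a single rate number throws away.
    depth = max(centered) - min(centered)
    return rate, depth

# Synthetic 60-second recording at 250 Hz: an 18 breaths/min sinusoid.
fs = 250
f = 18 / 60  # breathing frequency in Hz
signal = [math.sin(2 * math.pi * f * t / fs - 0.1) for t in range(fs * 60)]
rate, depth = respiration_features(signal, fs)
print(round(rate), round(depth, 2))  # 18 2.0
```

The reported rate collapses the whole trace to one value; the amplitude (and, in real signals, shape, timing variability, and so on) is the extra dimensional information a model can consume that a chart of per-minute numbers cannot carry.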
So now, with all of these clinical studies, all of the
(24:26):
feedback that we get from hospitals, we have an amazing database of patient journeys: which patient came in at what position, what was their comorbidity, and at what point of time they crashed, or, the opposite, at what point of time they recovered well and went home as well. That's equally important as well. And now we can train machines to learn patterns in these hidden dimensions, at a sample rate of 500 samples a second,
(24:48):
1,000 samples a second, right, where you have enough information there in the data to actually differentiate that this patient is different from that one, and this patient requires urgent care right now. And it can actually stratify there.
And that is where I feel no human can actually do that part, because it's too much information and too much mathematics to do, in very little time. And this is
(25:09):
where AI helps. But I don't feel this is going to replace humans at all. Right, it's not going to replace nurses at all. It's going to generate an alert which has to be verified, which has to be understood, and which has to be acted upon by a healthcare professional. And this is where I feel they can come together, work together, where we can take the mundane part of calculations from the human.
Sanjay Swamy (25:27):
That, okay, the machine is good at calculations and all of that, so let the machine do all of that. But physically checking the patient, right, what's wrong with them, whether they are responsive enough or not; in any case, there's such a huge shortage, yeah, and the best this can do is sort of approach the desired ratios, so to speak. Right, I think, you know, exceeding them is a long way off.
Gaurav Parchani (25:50):
Yeah, and the next generation is definitely personalization, as you said, but the generation beyond that is what I spoke about. And, again, because it's our passion, we've already started working on it, and we've seen some phenomenal results already. I feel we're less than a year away, I would say at least six months away, from actually piloting it in
(26:13):
production settings, in hospitals.
We're giving this in addition to vital alerts. See, the vital alerts are very sensitive, and they're already there. Imagine, in partnership, some other alerts which are very, very specific: if that alert has come, definitely something is going to happen. Imagine them working together. I'm covering sensitivity with one set of alerts, but I'm also covering specificity and precision with the
(26:35):
others as well. Right, and this is where I think, within six months or so, we'll be able to pilot it for sure.
Sanjay Swamy (26:43):
This is like breakthrough stuff, right? This is not like something that's already being done in other parts of the world.
You guys now recently got FDA approval and are piloting Dozee in the US as well. But coming back a little bit, just as a matter of the industry's readiness to accept some of this stuff, right. Because it's also, ultimately,
(27:04):
you know, you're dealing with people's lives in a more literal way than, say, in financial services and fintech industries, where, okay, somebody didn't get a loan; you know, it may have a financial impact on them and their income generation might be curtailed. But here you're actually talking about their lives itself, right, literally. So, you know, and plus, this is not an industry that has
(27:26):
been great at adopting tech in its core, right. It has adopted tech in its operations, but not really in the product, in the end service itself. So, and not to mention this entire, you know, fear of AI and things like that, which is, you know, core to what you do at Dozee, right?
So how has the industry been? You know, open, or willing to
(27:52):
try some of this out, or to help, you know, co-create it in some ways? And are they seeing this with, you know, slanted eyes, with suspicious eyes, or are they saying, wow, this could actually really work?
Gaurav Parchani (28:03):
So uh, I can, I
can, I can say.
I can say that with a littlebit of personal experience as
well, because I have a doctor athome, my wife's a doctor.
Uh, doctors in general are moreskeptical than normal, like
than other professions.
Uh uh, they also have a veryless amount of time, uh, to
actually engage in any sort ofuh uh conversation.
(28:27):
that may require a little bit of depth, especially when it's not from their own field. They have a lot of medical education going on; I've seen doctors with 20 years of experience attending CMEs and learning something new, so that learning component is always there. But something that is alien to them, technology, for example, or AI,
(28:47):
they approach with a lot of skepticism. It's been hard working in such an environment, where you get such a small amount of time and you have to convince somebody to at least give it a shot. They're not going to adopt any new thing in one shot, and this is where what you have to figure out is that you have to give them experience, demos, enough time, and show them data on their
(29:08):
own patients, or in their own practice, and that is when they start looking at it: okay, this seems interesting, let me take more interest in it.
However, I think the holy grail of adoption in healthcare, tech adoption or any sort of adoption, is real-world evidence. Peer-to-peer learning is amazing in this particular industry.
As I said, doctors with 20-30 years of experience are still
(29:32):
sitting in continuous medical education, CME events as they call them, and they're learning new stuff, whether it be new implants or new diagnostics or whatever. Even with Dozee, we do a lot of CMEs for them now. Continuous medical education is essentially their acronym for ongoing classes or ongoing
(29:54):
education events where they get to learn new stuff, and which is usually taken by a fellow healthcare professional: usually a doctor who has experienced a particular solution or tech, has some experience with it, has enough confidence, understands it and can speak about it.
There are many CMEs and workshops that happen
(30:15):
Sometimes it's within the hospital, where they do it every week or every month, and sometimes it's inter-hospital, and there are a lot of events and workshops that essentially happen.
So this is a very good forum and, like every other industry, you have early adopters here. These early adopters are the ones who are very interested in technology. They also approach technology with skepticism, and
(30:36):
with AI, my experience has been that they're super interested, with all the buzz around, whether from a point of fear or from a point of expecting that it will do everything. I've seen the entire spectrum.
One example, and obviously I won't take names: I was working on a research project and I got told by a very senior doctor
(30:59):
why do you need to do feature engineering? Just give it to the model, it will figure it out on its own. In that case I wanted to say: then there's no need for me here. You have the data, you have the patient, there's the model; just give it the data, it'll work on its own, right?
But that's the level, and they're amazing senior doctors, right? I would happily trust my life with them, and
(31:20):
again, in the same sentence, I'm essentially talking about their understanding of technology being that shallow. However, when we checked with them and a lot of doctors, would you be interested in doing a little bit of a deep dive, not from the perspective that you start coding from the next day onwards, but from the perspective that, one, you start understanding it, and second, you start
(31:42):
understanding how to evaluate it.
There are so many journal papers, so many papers coming out on AI in healthcare, and not everybody is following the best practices. As in: you keep a separate testing set, then a separate validation set; your testing set should never see your model, for example, and the data scientist should also never see it.
(32:03):
Basics related to it, basics related to bias: is the data set well-rounded enough or not? What sort of precautions have been taken for that? Is this tested thoroughly or not? It's very easy to get to 99% accuracy when you're not following the best practices, and publish a paper. And this is where doctors are very interested to understand: okay, what is good, and what is actually not good?
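The evaluation hygiene described here, a held-out test set that the model never sees during development, can be sketched in a few lines. This is an illustrative sketch, not Dozee's pipeline; the function name and the toy records are hypothetical, and for real patient data you would split by patient ID so the same patient never appears in two splits.

```python
import random

def split_dataset(records, seed=42, val_frac=0.15, test_frac=0.15):
    """Split records into train/validation/test.

    The test split is carved out once, up front, and must never be
    used for model fitting or hyperparameter tuning -- only for a
    final, one-time evaluation.
    """
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = shuffled[:n_test]               # locked away until the end
    val = shuffled[n_test:n_test + n_val]  # for tuning decisions
    train = shuffled[n_test + n_val:]      # for fitting the model
    return train, val, test

records = list(range(1000))  # stand-in for de-identified records
train, val, test = split_dataset(records)

# No record may appear in more than one split (no leakage).
assert not (set(train) & set(val) or set(train) & set(test) or set(val) & set(test))
print(len(train), len(val), len(test))  # 700 150 150
```

The point being made in the conversation is exactly the disjointness assertion above: a paper that reports 99% accuracy without this separation has measured memorization, not generalization.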
(32:25):
So when we checked with them, they were super excited about it. Yes, we would love to do that.
So we actually developed a course for them, an extended CME, almost like a six-hour course. The course essentially covers the end-to-end development cycle of AI in healthcare, and we've taken an example from their own field which has nothing to do with Dozee. We've taken a single-lead ECG, an electrocardiogram, and then we've
(32:49):
shown how we can detect AFib, what Apple did with their Apple Watch, and we showed how we can get to 96% accuracy in just six hours. Obviously, we've rehearsed everything; we have the model ready. But I think it took our engineers not more than 20 hours to actually develop the entire course, the material, the graphs, everything.
(33:12):
So we cover that end-to-end life cycle with that example, and at each point in the life cycle we show them: this is how you clean data, this is how you remove biases, this is how you select models, this is how you evaluate. This is where the model is doing well, this is where the model is doing badly, where it could go wrong, and what to do when it goes wrong.
So you are running this course now as a program for doctors, and
(33:35):
yes. So us at Dozee, along with IIT Indore, which is my alma mater, proudly speaking, have come together, with a couple of professors from there and our Indian engineers.
Sanjay Swamy (33:49):
We've collated the course together, and we're going to do the first course. The doctors who did not pass JEE and ended up becoming doctors are going to get an IIT education from it anyway.
Gaurav Parchani (34:00):
Last week I met a very, very senior oncologist who pitched his idea to me, a new idea. It was so amazing to have a reverse pitch: I have this idea, what would it take to build it? What kind of technology would it take? It was amazing to engage on that. And he mentioned: I wanted to be an
(34:25):
engineer, accidentally became a doctor, like 30 years ago or something like that. And he was super excited about the course. Imagine, at the age of 60, being excited about something new that you want to learn from scratch, being such a well-respected surgeon. So what you
Sanjay Swamy (34:41):
are trying to do also is sort of demystify it to them, right? I mean, say: look, this is actually science, this is not magic; it's not artificial, it's real.
Gaurav Parchani (34:48):
So our hidden agenda there, which we wrote in one line, and it's not really hidden, we actually talk to them about it, is that skepticism is good; keep it. But you want to turn skepticism into curiosity. When you are skeptical about something, you essentially reject it at face value, but when you're curious about it, you ask the right questions.
Right. Obviously you should never accept something, especially in
(35:09):
healthcare, without being sure about it. But that's the difference between skepticism and curiosity.
And with this course, for at least 80 to 90 percent of the people taking it, we would like to turn that skepticism into curiosity, so that a young startup in healthcare approaching that doctor the next time will actually get more bandwidth and more interest from them when they are actually
(35:31):
pitching their product. That's very cool.
Sanjay Swamy (35:33):
Very, very cool. I think it's important for success, but it's also very important for the industry, right? Because this is the future, and there's just more and more going to be coming at them. And if you don't understand the basics of it, then you will
Gaurav Parchani (35:48):
just approach everything with suspicion. If you look at what's required to build a good AI model, yes, there is the engineering component of it, modeling, hyperparameter tuning and all of those things, but a lot of it depends on the kind of data that you're capturing, the kind of variability in that data, the
(36:08):
kind of diversity in that data.
We as a country are at an amazing place, where we have 1.4 billion people and 2 million hospital beds. I don't know how many million OPD patients visit different hospitals every day. I can name a few hospitals that have 12,000 footfalls per day. My wife works at a hospital where
(36:29):
fellows from Belgium, from Italy, from Lithuania come and visit, and in three weeks they see the number of surgeries that they would see in one year back home.
So we collect, and we do healthcare at crazy scale. If only we had a way to actually format the data, or streamline data collection in a way that we
(36:50):
digitize it and tag it properly, it could power so much of the next generation of health AI models actually coming from India. Not just monitoring, not just early warning systems, but imagine imaging.
There's no reason why Indian startups or Indian companies per se cannot build it, because we have the manpower, we have the engineers,
(37:13):
we have the doctors, who are very, very amazing. Indian doctors are very famous across the world, by the way. We also have a large, diverse patient population as well.
What we don't have is the framework and the structure to actually bring this data together, and we also shouldn't have the skepticism around it: you can always de-identify patient data and actually contribute it for the
(37:36):
generation of a lot of IP, and that IP will come back and help us itself. And this is where we lack.
So, if you compare us to something like a Mayo Clinic or Cleveland Clinic or Emory University, they have large databases of millions and millions of patients who have been with them, from birth to death.
(37:59):
Every test that you've done, everything that has been done to you, every surgery, every report, whether an imaging report or a blood test report, is actually a part of that, completely de-identified; personal information has been removed from it, and it's now available for people to build on. So imagine: somebody partnering with a Mayo Clinic starts at a much higher advantage. We have all the ingredients, but we don't have the advantage.
(38:24):
And this is where my vision is that, at some point of time, somebody should, and with the health stack and everything coming along, I really hope that large-scale data models are made available for people from India particularly, and preference is given to Indian companies to actually give it one shot at building solutions for India. And when we actually build for India, we've shown that we build
(38:45):
for the world. We've launched beyond India: we are present in the UAE, we are present in Africa and, more importantly, we are present in the United States as well. So it's not only for India that we've developed this; it's for patients around the world. Whoever needs care, we are there for it. Wonderful.
Sanjay Swamy (39:03):
No, I think the key point you're making is that the combination of the willingness to adopt new technology, the scale at which we need to solve these problems and the cost structure at which we need to solve them are all coming together. And that can serve high-value, low-volume; that can serve low-value, high-volume in terms of monetization
(39:27):
capabilities. But solutions coming from a very low-volume, high-cost sort of framework are very hard to adapt, whereas the other way around is possible. And we're seeing this in other areas as well, right from Aadhaar and UPI and all of these. Absolutely. Great.
Look, we can go on and on. One quick thing I wanted to touch upon, at least for viewers, and I'd like you to maybe give a 30-second view of it, is: you
(39:57):
have to do fundamental development of an idea into a technology prototype, show it to people in the healthcare industry who are not really used to anything other than certified products, find partners to do clinical trials here, and then actually publish papers on it. You get the product certified, then get into large-scale deployment, which might, in our case, include some manufacturing, and then continuous improvement,
(40:19):
right? So this whole cycle: any time you have a new feature, one of which, for example, is this non-contact blood pressure. Maybe we should also talk a little bit about what that is, because that's also a breakthrough here. How long does that entire cycle take, and how have you navigated this thing of getting some early adopters to help you with trialing all of these
(40:40):
things?
Gaurav Parchani (40:41):
Yeah. In life sciences, there is this full timeline of TRL1 to TRL9. TRL1 is when you start with the idea; then you have a prototype, then you build around it, then you test it. TRL stands for technology readiness level, and these have explicit definitions of what you do at each stage.
(41:03):
We're not bound by regulations to follow any sort of naming convention there, but largely, as you said, these are the phases that somebody goes through. On average, in my experience, it generally takes three years from an idea to actually getting to final large-scale deployment. Including incremental new ideas? Yes. Well, incremental new ideas could take... it depends on the idea.
(41:24):
So we'll talk about non-contact blood pressure. But it depends: if the idea requires hardware, it's a significantly longer time. If it's just software, it's slightly easier. If it's AI, then it can also be longer, because regulations are still developing and catching up for AI. In fact, if I'm not wrong, between 2011 and 2023, the FDA
(41:45):
did not give more than 40 approvals for AI. Forty, in all those years. So, all in all, it's a hard and long cycle that somebody needs to be very patient about, both the founder as well as the partners and the investors participating in the company.
(42:06):
They need to be well aware that it's going to take time. So, yeah, end to end, roughly, on average, to large-scale deployment. You could always do alphas, betas, paid pilots, paid commercial pilots and all of those things before that, depending on your readiness, but roughly I've seen close to three years is a good time. If I knew all that I know today, I'm sure we could have shaved
(42:26):
off at least a couple of years from Dozee's timeline as well. I'm sure with Mudit as well; both of us, with what we know today. And that's why you would see the people who start that sentence,
Sanjay Swamy (42:35):
If I knew then what I know now, ended with saying: I would never have ventured into this.
Gaurav Parchani (42:42):
I'm glad you're saying this. I think at any point I wouldn't change anything at all in terms of doing it.
Sanjay Swamy (42:48):
I think we will just be able to do it faster. Super. So let's spend a little bit of time before we close on this incredible invention of yours, which is non-contact blood pressure, which doesn't exist, has never existed, and is such a breakthrough.
Gaurav Parchani (43:05):
Tell us a little bit about it. A little bit of history of blood pressure, if you have a couple of minutes. Blood pressure is not that new of a concept in the larger history of time. In the 1700s or so, for the first time, there was an English clergyman who was very curious about why the blood is
english clergyman who was verycurious about why is the blood
(43:27):
flowing like?
What is the pressure difference?
Is there some mechanicsassociated to it?
Right, and out of his curiosity, there was a horse dying.
Uh, he wanted to like, he hadsome ideas around it.
So in the artery next to the,the jugular artery, he put a
nine foot, nine foot tube, aglass tube.
Right, and the blood rose in itagainst the atmospheric
pressure.
Now, with every heartbeat it goes up and down.
(43:49):
These are your diastolic and systolic cycles; it keeps going up and down. If your blood pressure is 120/80, that essentially means that during the systolic cycle, which is when the blood is rushing out of the heart, the pressure is 120 mm of mercury. And when it's not passing, for example, when it's filling in
(44:10):
the heart, then it's the diastolic phase, where it's 80 mm. So for a normal person, 120 by 80 is the reading you get. That's what the clergyman saw in the tube.
Almost a hundred years after that, for the first time, somebody actually tried it on humans. How? A French physician, if I'm not wrong, put a catheter in an arterial line.
(44:33):
So usually when you go to the hospital and get admitted, hopefully listeners of your podcast haven't gone through that experience, but if they have, they would know that there's an IV line that is inserted, basically for any sort of medication to be given to you: an intravenous line. An artery is slightly deeper than the veins, so that's where usually people don't prefer arterial lines, because they may lead to infection.
It's risky; it goes in only in the riskiest of the riskiest patients. So in the 1800s or so, for the first time, a French physician put in an arterial line and then put a cartograph on top of it, where a pencil on a graph basically plots the pressure that's coming, so you get to see a pressure wave.
(45:16):
Now you have the exact same technology digitized, and on a patient monitor you would see the waveform: the systolic blood pressure goes up, the diastolic comes down, then it goes up and down again. And then, 100 years after that, somewhere around the early 1900s, a German physician finally came out with a non-invasive, cuff-based technique. Today also, you
(45:37):
see the sphygmomanometer in hospitals, the manual one, where the nurse or the doctor is pumping it and the mercury is actually going up and down. And then finally, I think almost 50 to 100 years after that, we have the digital one, now with every one of us at home. Now, the basic problems with these: these are amazing inventions to be able to measure blood pressure.
In fact, you'd be surprised to know that when, for the first time in the
(45:59):
1800s, arterial blood pressure was measured, coincidentally all the patients it was measured in were nephro patients; they had problems with their kidneys. So for the first few decades, blood pressure was thought of as an indicator for kidney diseases, which is true. But blood pressure has so many other
(46:21):
comorbidities as well; contraindication would be the wrong word. And so for the first few decades, it was considered that if your blood pressure fluctuated, your kidney was going bad. Whether it was or not is a different story altogether.
Now, the obvious problem with invasive blood pressure
(46:44):
is essentially around the fact that it is invasive: you have to put in an arterial line, which is not advised. So it's done in only 5% of ICU patients today. In a hospital there would be 200 patients, 10 of them would be in the ICU, and only two of them would have an arterial line. That's the sense of it. For everybody else, you have the cuff.
Now, the cuff is good equipment if handled by a professional. It should not be loose, it should be put in a particular position, and that is when you get an accurate reading. If you take the automatic one, it will take a reading every 30 minutes, and the cuff sometimes becomes loose. More importantly, even if it were super accurate, it's super uncomfortable to live with.
(47:25):
I've personally tried sleeping with it a few times to collect data, obviously for Dozee, and it's very difficult to sleep when every half an hour something is inflating on your arm, then deflating, and pressing your arm again and again. And this is where, if you ask any doctor, I'm willing to bet that, out of the five vitals that are there, if we tell
(47:48):
them, we can only give you one, you have to let go of the other four, they will always choose blood pressure.
And this is where we started researching into this. We read tons and tons of research papers, and this is where I must give you credit as well, because I remember you sending one research paper which contributed. I couldn't understand it. Yes. So, after reading a lot, we experimented a little bit
(48:11):
with data, and we understood that with heart rate, respiration rate and everything else, we are only dealing in the time domain. We are only dealing with what repeats. If a heartbeat is repeating again and again, I just need to count it so many times in a minute, and I can say I counted 70 beats; that means your heart rate is 70 beats per minute. If it's 75, then it's 75.
(48:32):
We are not worried about what the beat itself is. If it's a heartbeat and it's repeating, we are counting it.
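The time-domain counting described here, where a heart rate is just how many times the beat repeats per minute, can be illustrated with a toy counter. This is a hypothetical sketch on a synthetic sine-wave "heartbeat", not Dozee's algorithm; real ballistocardiography signals need far more robust peak detection.

```python
import math

def estimate_heart_rate(signal, fs, threshold=0.5):
    """Count upward threshold crossings (one per beat) and
    convert to beats per minute. fs is the sample rate in Hz."""
    beats = 0
    for prev, curr in zip(signal, signal[1:]):
        if prev < threshold <= curr:  # signal rises through the threshold
            beats += 1
    duration_min = len(signal) / fs / 60.0
    return beats / duration_min

fs = 100          # 100 samples per second
duration_s = 60   # one minute of signal
f_beat = 75 / 60.0  # a synthetic 75 bpm "heartbeat"
signal = [math.sin(2 * math.pi * f_beat * t / fs) for t in range(fs * duration_s)]
print(round(estimate_heart_rate(signal, fs)))  # 75
```

The contrast Gaurav draws next is exactly this: for heart rate, only the repetition matters, so counting suffices; for blood pressure, the shape and intensity of each beat must be analyzed.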
But with blood pressure, we now need to start analyzing that beat: with what intensity the blood rushed out of your heart into the aorta. Because our Dozee sensor is right under the mattress, around your upper thoracic region, it's actually carrying that impact information as well.
(48:54):
But because the vibration is passing through your body tissues, through the mattress and everything, it's very difficult to get an absolute value out of it. Still, you have so many metrics in the vibration data that can essentially show you how the blood pressure is changing, and that change is captured in the vibrations generated by your heart.
And that is where we built AI models, with tons and tons of
(49:17):
data collected from ICU patients with arterial blood pressure, because that's the most accurate for continuous measurement. We trained machines to identify changes in blood pressure, even subtle changes. And then we said: you need one calibration value to begin with, because we cannot give you an absolute value. So when you calibrate it with respect to another machine, once
(49:38):
for a patient, then onwards, every 10 minutes, you will start getting the change in blood pressure, and automatically you will get the absolute value, because you have the starting point.
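The calibration scheme described here, one absolute cuff reading plus a stream of model-predicted changes, reduces to simple arithmetic. The function name and the delta values below are hypothetical, a sketch of the idea rather than Dozee's implementation.

```python
def absolute_bp(calibration_mmhg, predicted_deltas):
    """Turn model-predicted *changes* in blood pressure into
    absolute readings, anchored to one cuff calibration value.

    calibration_mmhg: the single absolute reading taken once per
    patient from a reference device (e.g. a cuff).
    predicted_deltas: change since the previous estimate, as a
    model might emit every few minutes.
    """
    readings = []
    current = calibration_mmhg
    for delta in predicted_deltas:
        current += delta          # accumulate the relative change
        readings.append(current)  # absolute value at this step
    return readings

# One cuff reading of 120 mmHg systolic, then hypothetical
# model outputs of the change since the previous estimate.
print(absolute_bp(120.0, [+2.0, -1.0, -4.0, +0.5]))
# [122.0, 121.0, 117.0, 117.5]
```

One design consequence worth noting: because errors in the deltas accumulate, such a system benefits from periodic re-calibration against the reference device.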
Now, this is something which does not exist in the world. As you very rightly said, there are many coming up which are cuffless, where you have to put something on a finger, or it works through a patch or something like that. But non-contact,
(49:59):
we are the first ones in the world to do it.
The regulatory submissions are going on. The clinical studies have shown phenomenal results; in fact, we've partially published the results, and we're publishing more as we can get them out. But this is a game-changing feature for us. It took almost a year and a half to build, to perfect, and then, after deployment, it required at
(50:21):
least... I think we are at version 4.0 now. We've had three major changes, and a few minor changes in the middle as well, where we've improved the accuracy, the specificity and the sensitivity of the model.
Sanjay Swamy (50:35):
Perfect. And just to clarify: it's live in India, but the regulatory stuff is really more from an FDA perspective.
Gaurav Parchani (50:42):
Yes, absolutely. Awesome.
Sanjay Swamy (50:44):
Great. So look, this is a crazily fascinating topic, and you guys have really done a lot to be on the cutting edge, I guess bleeding edge is the wrong word to use in the healthcare space. But kudos to you for staying the course and, of course, the best is yet to come. So all the best, and we look forward to the journey ahead.
Gaurav Parchani (51:06):
Thank you. Thank you so much for having me. Our pleasure.
Prime Venture Partners (51:11):
Dear listeners, thank you for listening to this episode of the podcast. Subscribe now on your favorite podcast app for free and you'll be the first to know when new episodes are available. Just search for Prime Venture Partners Podcast in Apple Podcasts, Spotify,
(51:33):
Castbox or however you get your podcasts, then hit subscribe. And if you have enjoyed the show, we would be really grateful if you leave us a review on Apple Podcasts.
To read the full transcript, find the link in the show notes.