Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_00 (00:00):
Hi there.
Welcome to the Lattice Podcast, episode number 96.
In this episode, I was fortunate to have Gilly Gitterin, founder and CEO of Vent Creativity, sit down in my studio for a deeper dive into his company.
Vent Creativity builds AI-powered surgical planning technology.
(00:20):
We'll discuss Vent Creativity's platforms, the data required to build and maintain them, growing a workforce, and raising the bar for regulatory clearance.
(00:43):
Please listen to the disclaimer at the end of this podcast.
Hello.
Hi.
Welcome to the quote unquotestudio.
SPEAKER_07 (00:52):
That's amazing
looking.
SPEAKER_00 (00:53):
Thanks for coming,
Gilly.
SPEAKER_07 (00:55):
Thanks for having
me.
SPEAKER_00 (00:56):
And I want to introduce really quickly: Gilly is the CEO and co-founder of Vent Creativity.
Is that right?
SPEAKER_07 (01:01):
That's right.
Founder.
SPEAKER_00 (01:02):
And how did you come up with this company? Why don't you tell us a little bit about what it does and, you know, your inspiration, et cetera?
SPEAKER_08 (01:09):
Yeah, I've been in, uh, orthopedics and medical devices for about 20 years now. And really, the issue was: I started in robotics at the industrial level and then moved that over to biomechanics and everything else. That's how I got into orthopedics. Then I worked on industrial robots for, uh, robotic and surgical applications.
(01:29):
Then, uh, worked on additive manufacturing, as you know, uh, 3D printing implants, and, uh, worked on biologics and various different things. Really, at the end of the day, even though we were building all these technologies, they were incremental upgrades on existing technologies.
SPEAKER_07 (01:45):
So there was really no change in healthcare and outcomes for patients. And our goal was to create systems and surgical plans that were patient-specific. And that really wasn't available until we had AI and tools that could really speed things up.
SPEAKER_04 (01:59):
Yep.
SPEAKER_08 (02:00):
Uh, so the timing just felt right, right in the middle of COVID. We had nothing better to do. I found some engineers and said, all right, we're gonna automate all these things, all these pain points surgeons face. How do I get the DICOM file to create some meaningful thing without having to have six million people working for me? And then how do I create reports?
(02:20):
And then how do I get better outcomes? Uh, we decided we could fill all those gaps and understood how to solve them. And that's how the company really started. Initially, we started with really understanding the surgical plan, and then we found gaps in segmentation. It was just costly and not very automatic. So we created our own. And, you know, little by little, we found different tools that we
(02:43):
really needed, and then added those on as tools that matter to us.
SPEAKER_07 (02:47):
Having done that, the company became really more consolidated, because now it's a standalone product. It doesn't really require outside help to function, uh, which means it helps customers. They don't have to shop around for six different products; they can work with one product, and it's all cloud-based, so it's easy to access. And the reporting is automatic, so they don't have to fiddle around figuring out how to write reports out of it.
SPEAKER_08 (03:09):
So, you know, at the end of it, it's like a really boring product that solves problems, driven by very, um, sophisticated AI.
SPEAKER_07 (03:17):
But, you know, surgeons don't like to see technology thrown in their faces. So our goal is to really reduce the amount of complexity for them and then give them real insights, you know, five or six insights they can really take action on that are gonna really make a difference.
SPEAKER_00 (03:30):
Yeah.
You know, I have a theory.
Sure.
A theory is you have to be inthe orthopedic industry or
device industry for 20 years tomake things work.
SPEAKER_02 (03:39):
Right.
SPEAKER_00 (03:40):
Because, um, deep industry knowledge and relationships really matter in this space. Um, and the other thing that surprises me is that you founded this company in 2020, right? I assume in the middle of the pandemic. And back then, I actually remember there were quite a few companies claiming they had automated segmentation tools for either 3D printing or surgical planning or
(04:04):
yeah, virtual surgical planning.
SPEAKER_02 (04:06):
Yeah.
SPEAKER_00 (04:06):
And I'm just surprised to hear that yet another company needed to be founded to actually do that.
SPEAKER_08 (04:12):
Yeah, they still claim they have it, and, uh, I don't think most of them do. They do, to a certain extent. So it really comes down to data and, uh, what you do with it. Right. So a lot of the companies that have auto-segmentation are usually segmenting really pristine bones, pristine objects.
SPEAKER_03 (04:29):
Right, right.
SPEAKER_08 (04:30):
And, uh, that's fine; it depends on the application. For sports medicine, it makes sense. But when it comes to arthritis, the bone is not what you expect.
SPEAKER_00 (04:38):
Right, imperfect
bones.
SPEAKER_08 (04:39):
So imperfect bones.
And uh on top of that, we're notreally just creating shells of
the bones.
It's not like uh here's anoutline and do something with
it.
SPEAKER_03 (04:48):
Right.
SPEAKER_08 (04:49):
So even in additive manufacturing, as you know, sealing the whole object is difficult. When it's a bone, it's even more difficult, because there are holes and things in the bone where capillaries are going. So it makes a mess when you're trying to print that thing.
SPEAKER_04 (05:04):
Right.
SPEAKER_08 (05:04):
So our system doesn't really use meshes; meshes are a byproduct of the point clouds that we're using.
SPEAKER_04 (05:10):
Yeah.
SPEAKER_08 (05:10):
So these point clouds are essentially the entire data from the scans: CT, MRI, fluoro, X-ray, doesn't matter. And, uh, that gives us the full picture. Then we can isolate different regions for different purposes, whether that's the outline of the bone or density regions for various, um, decision making. So, you know, if we go back to 3D printing, we could print
(05:31):
multiple different phases of the bone. So you could have a very realistic object that has different stiffnesses. So if you want to cut it, it would feel realistic to the surgeon, because, you know, there will be cortical bone, cancellous bone, and they can really interact with it. Uh, we actually had some patents on that, to do remote surgery. So, you know, a surgeon can cut the bone in a kitchen and then the robot can do it, you know, 5,000 miles away in the real
(05:53):
patient. So there are a lot of cool applications that we haven't really touched yet.
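To make the density-region idea concrete, here is a minimal sketch of the kind of thresholding he describes, assuming a point cloud that carries a CT-style intensity per point. The array layout and the intensity cutoffs are illustrative placeholders, not Vent Creativity's actual values or pipeline.

```python
import numpy as np

# Illustrative only: a point cloud as an (N, 4) array of x, y, z, intensity
# (a CT-like value per point). Thresholds are placeholders, not clinically
# validated numbers.
rng = np.random.default_rng(0)
points = rng.uniform([0, 0, 0, -100], [50, 50, 50, 1500], size=(100_000, 4))

CANCELLOUS_HU = 300   # hypothetical lower bound for spongy (cancellous) bone
CORTICAL_HU = 700     # hypothetical lower bound for dense cortical bone

density = points[:, 3]
cortical = points[density >= CORTICAL_HU]                                   # stiff shell
cancellous = points[(density >= CANCELLOUS_HU) & (density < CORTICAL_HU)]   # softer interior

# Each region could then be sliced separately and assigned its own print
# material, giving the multi-stiffness model described above.
print(f"cortical: {len(cortical)} points, cancellous: {len(cancellous)} points")
```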
SPEAKER_00 (05:57):
Yeah.
Um, you have three core platforms. Is that correct? I mean, I just got this information off the website. Do you want to unpack that a little bit, especially the one that, congratulations on getting the FDA clearance, the Hermes knee platform? And then you also have something called Minerva and Invent. Would you like to just unpack a little bit what they do, briefly?
SPEAKER_08 (06:19):
Uh yeah, we like to
label things, it's fun.
SPEAKER_00 (06:22):
I know.
I'm like, how did you come upwith Hermes?
Um is your wife involved inthis?
SPEAKER_08 (06:27):
Yeah.
Uh she's definitely in themarketing.
And um typically what I do isI'm trying to understand.
Well, I'm from Turkey, sothere's a lot of Turkish Greek
um uh gods and things that gointo it because it's fun.
SPEAKER_04 (06:40):
Yeah.
SPEAKER_08 (06:40):
Uh, Minerva is sort of, uh, a random offshoot, I guess that's Roman. But, uh, Minerva is the platform technology. Uh, it's like the system that doesn't really know anything but does everything. So, coming back from my robotics background: PLCs, programmable logic controllers, don't know what they're doing, but you can program them to do certain sequences of things. Right. So Minerva doesn't really know what it's doing, but it has a
(07:02):
rules-based approach. It's a pure AI that says, I'm gonna try to keep this human upright or achieve some goals, whatever the goals are: minimum energy, et cetera. And then Minerva can be applied to different things, one of them being Hermes. And Hermes is specifically for the knee. And then there'll be a hip application, an ankle application, um, a shoulder application, et cetera.
(07:23):
But essentially, the idea in Hermes is that it takes, um, rules from Minerva, such as segmentation, landmarking, and surgical planning, okay, and applies them to a very specific application: the knee.
SPEAKER_04 (07:34):
Okay, got it.
SPEAKER_08 (07:35):
And, uh, Invent is basically our data center. Um, an online UI that you can access. Either you look at the plans, everything looks great, it's all pre-planned for you, so you accept it. Or you can make changes, or you can customize things. So we have what we call digital twins: digital objects that you can introduce into the point clouds, so then you can see how they interact with the bone.
(07:58):
So, for example, a cylinder could represent a drill hole or a screw, or, uh, a block could represent a cut, or a sphere could represent, um, a reaming operation.
SPEAKER_00 (08:08):
Yeah.
SPEAKER_08 (08:09):
So you can do those
virtually and then see what the
bone looks like around that.
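A rough illustration of how a primitive "digital twin" object, like the cylinder standing in for a drill hole, could be tested against a bone point cloud. This is a generic geometric sketch under assumed units (mm), not the Invent implementation.

```python
import numpy as np

def points_in_cylinder(points, base, axis, radius, length):
    """Return a mask of points falling inside a finite cylinder.

    points : (N, 3) bone point cloud
    base   : (3,) cylinder start point (e.g., drill entry)
    axis   : (3,) direction of the drill
    """
    axis = axis / np.linalg.norm(axis)
    rel = points - base
    along = rel @ axis                                    # distance along the drill axis
    radial = np.linalg.norm(rel - np.outer(along, axis), axis=1)
    return (along >= 0) & (along <= length) & (radial <= radius)

rng = np.random.default_rng(1)
bone = rng.uniform(-30, 30, (200_000, 3))                 # stand-in for a bone cloud (mm)
drill = points_in_cylinder(bone, base=np.array([0.0, 0.0, -30.0]),
                           axis=np.array([0.0, 0.0, 1.0]), radius=2.0, length=40.0)

removed = bone[drill]        # points a 4 mm drill hole would remove
remaining = bone[~drill]     # what would be left around the hole
print(len(removed), "points inside the virtual drill hole")
```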
SPEAKER_00 (08:13):
That's the Invent, right? I just want to clarify. Yeah. I think I saw one of the YouTube videos on the Vent Creativity YouTube channel. And I will share that link with everybody. Now, you mentioned a really interesting concept, which is digital twins. You know, suddenly, five years ago, everybody in the 3D printing industry started talking about digital twins, and there's this giant
(08:33):
marketing campaign out there. That's right. While it is pretty fascinating, and also we can conceptually understand what that means, can you tell us what are some of the real challenges you have faced to actually create a product that you can actually call a digital twin?
SPEAKER_08 (08:52):
Yeah, I think the digital twin is, uh, a really difficult area, and there aren't a lot of players out there. So it really started with, um, land surveying, uh, where, um, you can have geo-measurements of the surface and then do certain things like plotting buildings, etc. Yeah, and more recently construction, where you know where pipes are, etc., by looking through augmented
(09:14):
reality to know where to cut,where not to cut, to make sure
you don't um you know go througha pipe, et cetera.
SPEAKER_03 (09:19):
Right.
SPEAKER_08 (09:20):
So we really are actually using the same technology in terms of point clouds. It's all coming from surveying and self-driving cars. So the advantage we have is, with point clouds, it's easier to interact with objects than with a mesh object, which is really difficult.
SPEAKER_01 (09:34):
Right.
SPEAKER_08 (09:34):
Because, um, you know, engineers out there would know: whenever you're trying to do a finite element model, it's difficult. And after hours of computation, it might just crash, and you start over and you don't know why it crashed. Uh, with point clouds, uh, because it's individual points, you can interact with them and see point-to-point interactions easily.
(09:55):
And, uh, let's say a bone is one object, and ligaments and muscles are other objects. So then we can create objects and start assigning material properties to them, and then have them interact with each other with collision detection. The physics models borrow from our, uh, partners at NVIDIA. We're part of the NVIDIA Inception program, so we have those resources available to us.
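One simple way to picture point-to-point interaction between two point-cloud objects is a nearest-neighbor proximity check. The sketch below uses SciPy's KD-tree as a stand-in; the tolerance and the clouds themselves are made up for illustration, and this is not the NVIDIA-based physics model he refers to.

```python
import numpy as np
from scipy.spatial import cKDTree

def contact_points(obj_a, obj_b, tolerance=0.5):
    """Flag points of obj_a that come within `tolerance` (same units as the
    cloud, e.g. mm) of any point of obj_b: a crude point-to-point
    'collision' check between two point-cloud objects."""
    tree = cKDTree(obj_b)
    dist, _ = tree.query(obj_a, k=1)
    return dist <= tolerance

rng = np.random.default_rng(2)
femur = rng.uniform(0, 40, (50_000, 3))      # stand-in clouds, not real anatomy
ligament = rng.uniform(35, 60, (5_000, 3))

touching = contact_points(femur, ligament, tolerance=0.5)
print(f"{touching.sum()} femur points in contact with the ligament cloud")
```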
(10:15):
So then we can move.
Um, essentially, what happens isright now the gold standard in
the world is 2D imaging, and youtake a CT or X-ray, you look at
it in 2D slices and make adecision.
SPEAKER_04 (10:27):
Yeah, that's what I
do every day.
SPEAKER_08 (10:28):
Right.
Which is, um, very funny, because humans are used to 3D worlds, but in radiology and surgery, everybody's looking at 2D images to make a decision. Even in, um, top, uh, robotic systems, you create a three-dimensional model and then slice it to 2D, so that people can look at where the implant's gonna fit in 2D. Yeah. Um, but in our case, we take 2D slices, create 3D point clouds,
(10:51):
and then we animate them in 4D.
So now we know how a human kneeis gonna move, for example,
because we know how theligaments are gonna interact
with the bone.
SPEAKER_04 (11:00):
Yeah.
SPEAKER_08 (11:00):
So, without us, Minerva figures out how this bone is gonna move around, without us telling it here's how it should move, because our goal is to remove all of the bias from the system. So we don't want to bias it in any way that's going to change the way it's gonna decide. We say: here are the ligaments available to you, minimize the energy, and tell us how that thing's gonna move.
(11:22):
Then introduce an implant andsay, all right, now try to fix
that motion so it's more stable.
And that's sort of Minervabasically planning a surgery
without knowing what it's doing,and that's the most stable
solution that's possible.
SPEAKER_00 (11:34):
Yeah.
So it totally makes sense.
That said, um, it's not easy.
SPEAKER_06 (11:40):
No.
SPEAKER_00 (11:40):
Um, one question, just because I'm not from an engineering background and you're certainly a lot more knowledgeable than me: the concept of pixelated 3D printing, is that similar?
SPEAKER_08 (11:52):
So yeah, I think, um, it's sort of our cousin, in the sense that whenever anybody has an image, right, the first thing we do is create a 2D image, and that 2D image is actually, uh, either an AI- or manually manipulated image. So we're taking the signal and softening it so it's not blocky.
(12:12):
And then, in three dimensions, those are called voxels. So the voxels are essentially sort of a cube representation of the point. Right. So now, essentially, uh, not to get too technical into it, but essentially, if you have a cube instead of the actual shape, obviously you're gonna be off by a certain amount. Right. Yeah. So we're removing that bias by saying, here are the points that
(12:33):
are available to us, and we'reat the resolution of the image.
It doesn't matter, whatever thatis.
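As a sketch of the distinction he is drawing, here is one way a voxel grid could be re-expressed as a point cloud at the image's native resolution, with each point keeping its own intensity rather than being approximated by a cube or a mesh. Array shapes and spacings are illustrative assumptions, not the company's format.

```python
import numpy as np

def voxels_to_points(volume, spacing):
    """Convert a voxel grid (e.g., a stacked CT series) into an (N, 4) point
    cloud of x, y, z, intensity at the image's native resolution.

    volume  : (nz, ny, nx) array of intensities
    spacing : (dz, dy, dx) voxel size in mm
    """
    zi, yi, xi = np.indices(volume.shape)
    coords = np.stack([xi * spacing[2], yi * spacing[1], zi * spacing[0]], axis=-1)
    return np.concatenate([coords.reshape(-1, 3), volume.reshape(-1, 1)], axis=1)

# Toy volume standing in for a DICOM series loaded elsewhere.
volume = np.random.default_rng(3).integers(-1000, 2000, (50, 256, 256)).astype(np.float32)
cloud = voxels_to_points(volume, spacing=(2.0, 0.7, 0.7))
print(cloud.shape)   # roughly 3.3 million points, each carrying its own intensity
```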
SPEAKER_00 (12:37):
Yeah.
I would have guessed that's ahuge amount of data you have to
process.
SPEAKER_08 (12:40):
Right.
So it's 20, 30 million points per bone. So we're talking maybe 100 million points per scan. Uh, but the amount of information you get from that, especially when you colorize it, is amazing, because there are things people don't think are possible: so, cartilage, meniscus, and ligaments from CT, or, um, you know, bone density or bone models from MRI, etc.
(13:03):
So it's very, sort of, cross-functional. Once we create a point cloud, we can do a lot of things with it in our own system. And that really explains how a human body evolves. So we're calling it evolutionary alignment: how do you evolve to, sort of, stand up straight versus the primates? And we can figure out where the implant should go based on that evolutionary alignment.
(13:25):
So that removes all the biasesof here's a point area, so
that's a landmark.
SPEAKER_04 (13:30):
Yeah.
SPEAKER_08 (13:30):
Because that point area, if you look at, you know, the data, certain observers are off by almost two centimeters in any direction. So that's not a very reliable landmark to begin with. We remove those landmarks, and we just say: where do ligaments and muscles attach, and how do they attach? And landmarks don't really matter to us, because how the ligaments move matters to us.
SPEAKER_00 (13:51):
Right.
You added a lot more dynamic information into what used to be usually static. And honestly, when I was looking, you know, at your company, I was thinking, why can't regular diagnostic radiology just do the same thing? Because I read lumbar spine MRIs every day. But, you know, when we get an MRI, we lie in a scanner. It's a supine position.
(14:11):
So your nerve roots and your spinal cord and your discs are all as they are when you're lying down, supine. People are hurting when they're standing up and walking around. You know, same thing with knee arthritis, for example, right? How the abrasion of the damaged cartilage, you know, rubbing against each other, how that's causing ligamentous
(14:34):
imbalance and stuff like that. We could have done that potentially, couldn't we?
SPEAKER_08 (14:38):
Yeah.
The interesting thing is, um, you know, there's this thing called Wolff's law: wherever the bone is needed, it grows. So what happens in arthritis, or even healthy knees, is that you can see where the density is, because that's where the patient spends most of their time. It's sort of a snapshot of their life. Right.
SPEAKER_00 (14:56):
It's a static one
point in life.
Yeah.
SPEAKER_08 (14:58):
Even if they're
laying down, we already know how
those two bones come togethernormally when they stand most of
the time.
SPEAKER_04 (15:04):
Yeah.
SPEAKER_08 (15:04):
So then we can figure out what their natural alignment is, instead of, uh, forcing some sort of a position for an implant or reattaching an ACL. So we know the exact tightness we need to achieve that position. And if the position is wrong, then we know, sort of, the global average, as well as phenotyping, where that, um, density patch
(15:24):
should be. And we've done some studies in the past with implants where we show that the bone grows back to where it's needed if the implant is positioned properly. So the implant is forcing the bone to, sort of, grow back to its normal locations. So we can do both things, um, pre-operative or post-operative analysis of where the bone density is, to see if we achieved what we set out to do.
SPEAKER_00 (15:45):
And then do you have
any kind of validation to
validate what you're simulatingand predicting so far?
SPEAKER_08 (15:52):
Yeah, so, uh, we just started cadaver studies, uh, through a partnership with Orlando Health. Uh, we have our surgeons there. And what we've been doing, almost every two weeks, is cadaver studies where we're removing the cadaver bones, um, with their soft tissue intact. And the surgeon moves them passively, as well as loading them in different directions, to figure out the soft tissue envelope.
(16:12):
And then we're checking our, uh, digital twins against that, to see how far off we are. And the goal is not to, sort of, be able to guess it up front. Our, uh, physics simulator is gonna be very fast, and the way it's very fast is we're training it with data from those labs. So it's gonna learn, over time, the properties that it's really expecting, and then it gets better and better over time.
(16:35):
And then the next step will beclinical studies where obviously
the muscles and ligaments areactive in a patient.
SPEAKER_01 (16:41):
Yeah.
SPEAKER_08 (16:41):
And even in surgery, they're not active. So then, you know, we have a partnership with a company called Orthopedic Driven Imaging, and, uh, they have a fluoro system. So we're able to see how patients move actively in the real world and then feed that back into our system, to see how that differs from a cadaver or a patient who's under anesthesia. So it's, um, you know, people always say, oh, you know, how do
(17:04):
you know?
Um, we don't, you know, we don'tpretend to know everything.
We're sort of buildingincrementally.
SPEAKER_03 (17:09):
Right.
SPEAKER_08 (17:09):
But the gold
standard right now is
essentially 2D images.
Right.
SPEAKER_00 (17:13):
It is, and it's amazing how powerful it can be, actually.
SPEAKER_08 (17:16):
I think we're creating improvements, but we're not, you know, pretending that we've finished everything; we're sort of building towards more value added. But anecdotally, our FDA-cleared product has been in quite a few surgeries now, and the outcomes have been very good, positive outcomes from, uh, the surgeons' perspective. And, uh, we'll start formal protocols for clinical studies
(17:37):
to see how we're doing onoutcome scores that are
accepted.
SPEAKER_04 (17:40):
Yeah.
SPEAKER_08 (17:40):
Uh, obviously those are very subjective. So we're looking again to, uh, fluoroscopic imaging, to see how the kinematics change in terms of patient mobility, flexion angles, etc., yeah, that are more predictive of, sort of, outcomes than pain scores and others that are very subjective.
unknown (17:56):
Yeah.
SPEAKER_08 (17:56):
So that'll be the
next steps for the company.
SPEAKER_00 (17:58):
I mean, AI in medicine, for this type of medicine, um, is expensive. Um, people think you just have to say AI, and then the magic genie will just give you what you want.
SPEAKER_06 (18:10):
Right.
SPEAKER_00 (18:11):
One, you have a data challenge. How are you gonna acquire the kind of data that's actually useful for your simulation? Because everybody gets a different scan, a slightly different technique, dosage of the CT, MRI. I don't know if you use MRI data yet. Um, how do you manage to acquire this kind and this amount of
(18:34):
data? Um, what's your data strategy, in other words? And maybe you can tie in, like, how, unless it's a secret.
SPEAKER_08 (18:40):
Then everyone helps.
SPEAKER_00 (18:42):
If it's a secret, don't tell us. Um, and how do these various, um, AI partnerships, for example the NVIDIA Inception program that you just mentioned, help you in that regard?
SPEAKER_08 (18:53):
Yeah.
Yeah, so, um, I think, coming from a background in orthopedics, as you said, 20 years. Yeah. Uh, I forgot to, sort of, uh, chime in on that. Uh, you know, I would say I cheat, because, you know, it's 20 years of things that didn't work, so it's easy to know what needs to be done to fix it. And having worked with all the surgeons and radiologists over
(19:15):
the 20 years, yeah, they allknow uh my thought process and
they all understand what I'mdoing.
So they all believe in theproduct.
So they've been very upfront andvery open to partnerships.
So we're working with a lot ofradiologists and surgeons who
sign partnerships to share data.
SPEAKER_04 (19:33):
Yeah.
SPEAKER_08 (19:34):
And, uh, we're using that data, uh, you know, basically allowing them to use it for research as compensation. And we're doing a lot of research-based publications, as well as data sharing, to understand how it can improve the outcomes of their surgical systems. So the goal here is, um, you know, upfront, uh, this actually,
(19:56):
we'll talk about the 510(k) process later, but our goal has always been completely de-identified data, because, as I said, uh, we're obsessed with, uh, unbiased systems.
SPEAKER_04 (20:05):
Yeah.
SPEAKER_08 (20:06):
So I didn't want to know anything about the patients, from their gender to their health to their age to whatever. And the FDA wanted us to show that we, you know, represent the entire demographic of the US. We're like, we don't know, because we scrubbed all that data. So we actually had to go back to our surgeons and get information directly from them, to see, uh, you know, which scans
(20:26):
we actually ended up getting. Uh, but ironically, our goal has, um, been to be unbiased, which means that all the data we're getting is getting broken down with, um, statistical methods.
So, going back to my phenotype, um, answer: um, there's a phenotyping system called CPAK. Um, we're not believers in that. We don't think it works properly.
(20:47):
It's completely biased. But we use three-dimensional phenotyping, again using point clouds. So we have, you know, hundreds of thousands of points that we can phenotype a patient with. It's very similar to, sort of, shape morphing, but in the other direction. So it's already shaped; we're trying to figure out the groups, and we're using, um, you know, Gaussian models to figure out the phenotypes.
(21:10):
And our system is not a supervised or an unsupervised AI; it's both. So we always create an initial condition that says, we think the number is 18 phenotypes, then let's nudge it. We nudge it plus or minus five types and see how many people are jumping groups.
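A minimal sketch of what that nudge-and-compare step could look like with Gaussian mixture models. The per-patient feature matrix here is random stand-in data, and the agreement metric (adjusted Rand index) is one reasonable choice, not necessarily the one Vent Creativity uses.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

# Illustrative stand-in: one feature vector per patient, derived upstream from
# the point cloud (e.g., shape/density descriptors). The starting guess of 18
# phenotypes comes from the interview; the data here is synthetic.
rng = np.random.default_rng(4)
features = rng.normal(size=(1200, 12))

base_k = 18
base_labels = GaussianMixture(n_components=base_k, random_state=0).fit_predict(features)

# Nudge the phenotype count by +/- 5 and see how much the grouping shifts.
for k in range(base_k - 5, base_k + 6):
    labels = GaussianMixture(n_components=k, random_state=0).fit_predict(features)
    stability = adjusted_rand_score(base_labels, labels)   # 1.0 = identical grouping
    print(f"k={k:2d}  agreement with k={base_k}: {stability:.2f}")
```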
So in anything we do, we don'tsort of settle for one way or
(21:32):
another. We're always sort of challenging the system. So now it's popularized as agentic systems, but we sort of created our system to be agentic, where we have internal systems that are fighting each other over, um, you know, who's right. And oftentimes, when we have one way of doing something, we usually have two or three other things that are also trying to
(21:54):
do the same thing, to see if they all agree on the same solution from very different perspectives, whether that's bone density or ligament lengths or stiffness or, uh, multiple other things, that have to decide on where the plane of the cut should be, compared to a biased mechanical axis, which is sort of a plumb line. So everything we do builds into the AI structures that are the
(22:17):
core of the company, because if you don't have a clearly defined AI structure, then you're sort of, you know, grasping at straws to figure out how to solve the next problem. But we have a clearly defined, sort of, culture of the company, an AI culture of the company, where we know how to solve problems based on how we go about, um, you know, step-by-step solutions.
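To picture that internal cross-checking, here is a toy consensus check: several independently estimated cut-plane normals are compared, and the plan is only accepted if they agree within a tolerance. The estimators, numbers, and threshold are hypothetical, not the company's agent architecture.

```python
import numpy as np

def plane_agreement(normals, tolerance_deg=3.0):
    """Check whether several independently estimated cut-plane normals agree.

    normals : list of (3,) vectors, one per estimator (e.g., one derived from
              bone density, one from ligament lengths, one from stiffness).
    Returns the largest pairwise angle and whether it is within tolerance.
    """
    normals = [n / np.linalg.norm(n) for n in normals]
    worst = 0.0
    for i in range(len(normals)):
        for j in range(i + 1, len(normals)):
            cos = np.clip(abs(normals[i] @ normals[j]), -1.0, 1.0)
            worst = max(worst, np.degrees(np.arccos(cos)))
    return worst, worst <= tolerance_deg

# Hypothetical outputs of three independent planners for the same knee.
density_plane   = np.array([0.02, 0.05, 0.998])
ligament_plane  = np.array([0.03, 0.04, 0.997])
stiffness_plane = np.array([0.01, 0.06, 0.998])

worst_angle, consensus = plane_agreement([density_plane, ligament_plane, stiffness_plane])
print(f"largest disagreement: {worst_angle:.2f} deg, consensus reached: {consensus}")
```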
SPEAKER_00 (22:38):
Yeah, that's
impressive.
I mean, thank you.
I read an article recentlytalking about robotics, since
you're from also the roboticsindustry, that one of the major
challenges in robotics is thesoftware side of things.
It's expensive to actually buildthings from ground up.
Yeah.
But yet I think you're actuallyquite frugal with uh your
capital allocation right nowbecause I know you guys are
(23:00):
still early stage, right, interms of funding.
Um so you you founded thecompany in 2020, which is also
not an easy time to actually geta team together.
You want to just tell us alittle bit of that journey?
Because I feel like I missed outon that.
SPEAKER_08 (23:14):
Yeah, um, I would
say it was the easiest part of
the company, uh, because youknow, most engineers were
sitting at home with no basics.
Nothing to do.
Exactly.
So uh if you leave an engineerfor a few days, they're gonna
get bored and they're gonna wantto do something or solve
something.
SPEAKER_03 (23:28):
Right.
SPEAKER_08 (23:29):
So um I'm from
Columbia University, biomedical
engineering, and now uh MBA aswell.
Uh so I went back to Colombiaand advertised, and I said, you
know, I'm looking for engineersand uh what Columbia is known
for is medical imaging and AI.
Uh so radiology and AI and BME.
SPEAKER_04 (23:47):
Yeah.
SPEAKER_08 (23:47):
So it's perfect.
So I was like, you know, I'lltake two.
So my first employees uh wereactually from there, and uh they
were just out of master'sprogram, so they just started
their careers with me.
So it was sort of you knowgetting them up to speed on a
company startup culture while atthe same time trying to build a
minimum viable product to show.
(24:08):
And um we really had a veryinteresting model.
So we spent a lot of timecreating a very fast prototype
model because our goal was notto create a product, our goal
was to create a service first.
So it really bootstrap.
And the goal here was workingwith large companies and say,
you probably have a problem withX, Y, Z.
(24:29):
We can help you with that.
Uh so they want to find out,going back to the example, is
the implant gonna grow the boneback to where it needs to be.
SPEAKER_04 (24:36):
Right.
So their RD pipeline, basically.
SPEAKER_08 (24:40):
So they would pay
um, you know, a small amount,
very large in our eyes, verysmall in their eyes.
Yeah.
So we solve their problems.
And the irony is, I mean,obviously that gets us through a
few months, uh, no problem.
But the irony is large companiesnever want to touch any kind of
product development,co-development agreements.
They just want to pay forresearch and then that's it.
(25:02):
And then as they grow and grow,they're like, wait a minute,
maybe we should have um had aproduct development agreement.
So yeah, it's sort of that's howyou know you grow companies
growing when all of a suddenthose research agreements are
not that easy to come by becauseall of a sudden they have a kind
of internal algorithm of whenthey're gonna their eyes are
gonna lit up.
So yeah.
That that you can feel the tiltin the company.
(25:23):
So now we're definitely a growthstage company because everybody
wants to work with us, but maybenot for just sort of paper play,
but something more.
But yeah, when we first started,um it was just a few engineers,
and the goal was really solvingproblems fast so that we can
build our core products at theexpense of doing research for
(25:43):
others.
And that's just sort of um modelthat I thought made sense
because uh being a deep deeptech company, nobody was going
to really understand what we'redoing until we could show them
something.
And uh my joke is you know, likeBane from Batman, nobody cared
about what I was doing until Iput on these ligament models.
As soon as we had some ligamentmodels, everyone was like, oh my
(26:04):
god, this is amazing.
But before that, nobody couldunderstand what we're working
on.
So that took five years to sortof get to that level.
But until then, it was just sortof all hand wavy, like here's
how it's gonna work, here'swhat's gonna work.
But it was very difficult tosell people on it.
SPEAKER_00 (26:20):
So you started with, uh, getting some bootstrap money, uh, by consulting for larger companies. And, uh, these platforms, are they, uh, I'm assuming they're generating some kind of revenue, or are they pre-revenue still?
SPEAKER_08 (26:35):
Uh, right now it's sort of a bit of both. So we're doing, sort of, research-grade models, yeah. And then, for our FDA-cleared product, we're still negotiating on insurance. Um, there's a code for it. But, you know, uh, hospitals have to approve that. So that's sort of where we are: make sure that we have a case in point with a couple of our
(26:56):
hospitals.
Once that's proven out, thenevery other hospital is
comfortable with it.
That's always difficult becausethere's not a clear hardware
that can easily be coded.
It's more sort of, you know,gray areas like, okay, so how
does this add value?
SPEAKER_04 (27:10):
Yeah.
SPEAKER_08 (27:10):
And that's at the
word of the surgeon, so it's
sort of chicken and egg.
We have to get over that hump.
SPEAKER_00 (27:16):
Yeah, I have to say,
you know, in the last couple
months, believe it or not, um,just on my dashboard for pitch
3D pitches, yeah, there are moreand more um digital virtual
surgical planning.
Yeah, oftentimes AI, AI driven.
Um what do you think the thefuture of the space is gonna be?
(27:37):
I mean, what was your what wasyour initial vision for Vent?
And then after five years, nowtoday, has that evolved over
time?
SPEAKER_08 (27:47):
Yeah, absolutely.
So I think initially I was fiveyears ahead of everyone on sort
of auto planning for surgery,but that was never gonna be a
differentiator because everybodycould auto plan at some points.
The AI is available to achievethat.
So I think we we basicallynoticed that, and then we had
(28:08):
some differentiators in point clouds and everything else. So it really went from pure, um, surgical planning, which anybody does with a bone mesh model, uh, with some landmarks and whatever, to really, uh, understanding the bone's own structures and, sort of, how to plan it. And then, now, in the past year or so, all the soft tissue and
(28:29):
ligaments, which would be verydifficult for everyone else to
do without our coretechnologies.
SPEAKER_04 (28:33):
Yes.
SPEAKER_08 (28:34):
So we really went from surgical planning to a full digital twin of the anatomy, to see how it works and how it could work better. Which, uh, when you think about it, it's not, where do I cut to fill this gap? It's more, how do I restructure this entire anatomy to stand up? So it's a very different, it's the complete opposite direction
(28:56):
to come at it from: not how do I place an implant, but how do I fix this human?
SPEAKER_00 (29:01):
Right, from adding all the other complex factors, physiological information and bone density, for example.
SPEAKER_08 (29:09):
It's a big moat, I think, because it takes a lot of undoing, starting over, for other companies to go in that direction.
SPEAKER_00 (29:16):
Um how about finite
element simulation and that's
sort of more uh also kind offunctional evaluation of those
structures.
Are you guys incorporating thator is it something in the in the
books?
SPEAKER_08 (29:31):
Well, we have some things in the works, but again, um, we're not looking at it as an engineering or scientific tool to use in surgery. It's more of a, uh, educational tool for surgeons. So our goal is not to have very, uh, strict and perfect, um, solutions that say, here's the exact von Mises stress,
(29:53):
whatever.
Yeah, it's more uh there's a isthere a fracture risk or not if
we place this thing here.
SPEAKER_03 (29:59):
Mm hmm.
SPEAKER_08 (30:00):
So that uh again
going back to our physics
models, there's new solutionsthat can create fast and
actionable items.
Uh the reason for that is anyFEA in a bone structure that is
very complicated will take daysor weeks.
SPEAKER_00 (30:17):
Yes.
SPEAKER_08 (30:17):
So not very useful.
SPEAKER_00 (30:19):
Even with NVIDIA's latest chips.
SPEAKER_08 (30:21):
So I was just in a
conference in Rome.
Yeah.
There were a lot of amazingpapers in biomechanics, but you
know, admittedly, there werelike this took weeks, and n
equals one.
You know, it'd be great if wecould scale this to clinical
grade.
I'm like, it would, but it's notpossible.
SPEAKER_04 (30:36):
Yeah.
SPEAKER_08 (30:37):
So there's some
shortcuts that we figured out
where we think it's going to addquality.
And, you know, the question'salways going to be, you know,
how do you know it's going towork?
Right.
But the answer is, you know,what's the penalty?
So like if if it's an acceptablelocation for implants that's
going to minimize the risk ofthe fracture in our minds, would
you go for the higher fracturerisk because visually it looks
(31:00):
like it might fracture? Or would you go for the other solution just because? It's, it's more of a trust-building issue than, uh, I think, explaining physics to surgeons, because they're not going to read, uh, finite element literature and say, yes, it was published and peer-reviewed, so it's perfect. Uh, we might do that, but we think that our proprietary
(31:21):
systems would benefit better if we didn't open that up to everything. So our goal is to just say, uh, here's a physics model, take it or leave it. Right. You don't have to use the fracture models, you don't have to use the attachment models for bone fixation, etc. But, uh, I think there's a lot of burden of proof now that really stifles innovation.
(31:41):
And we would like to, um, sort of give advice and make it take-it-or-leave-it, kind of advice on the side, especially for finite element analysis at that level.
SPEAKER_00 (31:50):
Yeah.
I mean, maybe, maybe we don't need to make a relatively simple problem more complicated, especially because we have technological limitations at the moment. I mean, we just don't have enough computing power. Going back to that article I was referring to about the robotics world, I mean, same thing. There are a lot of things that are good to have. I would like to have a robot that brews tea for me, but
(32:13):
that would take a trillion dollars to just get that thing right. Um, excellent. Now, I mentioned earlier during this podcast that you achieved FDA clearance for the Hermes knee platform this year.
Yeah.
Um but there are a lot ofstories behind it, and also you
(32:36):
have a history in regulatoryscience.
SPEAKER_07 (32:38):
I do.
SPEAKER_00 (32:39):
And how did you end
up there?
What a random journey.
SPEAKER_08 (32:42):
Yeah, uh about 30%
of the company is regulatory and
quality people.
SPEAKER_00 (32:46):
Wow.
SPEAKER_08 (32:47):
So we say we're a very boring regulatory company that happens to have some AI in it. Um, because, you know, it is a regulated space. Yeah. We're in the Bay Area right now. I think, uh, it's sort of a reaction to the Bay Area: there's a lot of software development here that is about getting to the market fast. And there's a lot of stories of large and small companies
(33:09):
getting into healthcare andgetting out really fast without
an explanation.
Yeah, not to throw too much shade, but, uh, really, uh, my engineering background, I always go back to it, because engineers go to school for four years, or maybe more, and then they get out and make things. And I'm always confused, because, you know, I think the core of it,
(33:29):
what engineering is defined isas defining requirements and
then risks, and then solving therisks to minimize them to meet
the requirements.
But then I think everybody justforgets that on the door and
says, all right, let's justbuild stuff, and then if it
breaks, we'll just fix it.
Put some duct tape on it.
Uh and our goal has always beenum let's build regulatory and
(33:51):
quality structures first, yeah,and then build um tools and then
software based on those tools,and then go to the market with
it with the FDA's approvalbecause I think uh FDA is now
more and more stringent on AI.
SPEAKER_00 (34:08):
Oh, really?
SPEAKER_08 (34:09):
We know this.
SPEAKER_00 (34:09):
Um because this is
the first uh first time I heard
about those, actually.
SPEAKER_08 (34:13):
Yeah, so we're
actually the I think the sixth
company in orthopedics to getFDA clearance for AI products.
SPEAKER_00 (34:19):
Uh-huh.
SPEAKER_08 (34:19):
And overall, I think
maybe in the order of a few
hundred, if not less.
Wow.
And it kind of in the olderdays, I think AI was sort of
nebulous to them.
But in the last two, threeyears, because there was such an
avalanche of AI tools,especially the language model
side of things, yeah, theylocked down and said anything
(34:41):
that uses any kind of neuralnetworks or machine learning is
AI.
It doesn't matter if it's sortof checking your name or if it's
you know segmenting bones.
So uh maybe two, three yearsbefore we could have passed
easily.
Right.
SPEAKER_00 (34:55):
I thought it was
like six months to clearance.
SPEAKER_08 (34:57):
Typically it's three
months for a decision, and then
usually you have questions.
So six months probably aboutright.
SPEAKER_04 (35:04):
Yeah.
SPEAKER_08 (35:05):
But um, it took us
much longer because we had to
update our tests and show moreum evidence.
And that's that's a good thing,I think, because um while we did
a lot of work on oursegmentation and landmarking,
sort of all those things thatare AI-based, right?
Uh they were validated bystudents, and they have to check
(35:27):
each time they use it.
So it's not sort of here it is,cut here.
They have to approve each stepand say segmentation looks
correct, landmark looks correct,cut planes looks correct.
So there's a lot of check stuff.
Uh but I think FDA is going tobe overwhelmed with all these AI
tools coming in, not just inlanguage model, but also in
(35:49):
segmentation and bone modeling, etc. So I think it makes sense to look into these and say, you know, are these really meeting the customer needs? And not just, sort of, doing something that is maybe not necessarily dangerous, but also not meaningful. Oh, yeah.
SPEAKER_00 (36:06):
I mean, there are a
lot of meaningless uh software,
AI software in radiology, forexample.
SPEAKER_08 (36:12):
I keep hearing we're
gonna get value-added, uh
value-based care in the US.
Yeah, yes.
You know, I'm holding my breathand I'm about to suffocate.
But uh I would love to have thatbecause I'd love to prove that
our system is adding value tothe system.
And um, you know, some of theolder technologies, they came,
but I don't think they added toomuch value.
(36:33):
So, um, but, you know, having said that, I think they paved the way for people to understand that technology matters in the OR and in healthcare. So it was great they were able to do that, but the burden of proof is very low. I'd like it to be higher, so that we can say we meet that burden.
SPEAKER_04 (36:52):
Yeah.
SPEAKER_08 (36:53):
Because if you're
not increasing patient outcomes
by at least 5-10%, so then whatare you doing?
You're just adding anothertechnology for the surgeons to
have to deal with.
SPEAKER_00 (37:02):
Right.
I mean, that's actually one ofmy questions.
SPEAKER_08 (37:04):
Right.
SPEAKER_00 (37:05):
Because, like I said, we have so many companies coming through to pitch similar platforms in terms of automated pre-surgical planning of various body parts. I mean, if I'm a general surgeon who, you know, at a rural hospital has to take care of basically everything, right? And I have all these tools, how am I gonna know which one
(37:27):
actually is worth my time tolearn and implement and pay for?
SPEAKER_08 (37:31):
Right.
So there's um, yeah, there's aconflict between first and
second mover advantage inhealthcare.
SPEAKER_00 (37:37):
Yeah.
SPEAKER_08 (37:38):
I can never tell
which one is better.
SPEAKER_00 (37:40):
But I think you're a
first or second.
SPEAKER_08 (37:43):
I don't know.
It it switches from day to day.
Uh first mover, in terms of uh,I think our field is very human,
even though it's software andAI.
SPEAKER_04 (37:53):
Yeah.
SPEAKER_08 (37:53):
Because the surgeon
really has to trust me.
SPEAKER_04 (37:56):
Yes.
SPEAKER_08 (37:57):
That I, as the person who created this, know what I'm doing. So then, by default, what I created should be correct as well. Because, at the end of the day, I think there's gonna be a lot of distrust in AI, because it's a black box. Um, how do we know this thing is actually giving us the right answers? Half the time, I have to check to see why it chose that solution, because that's an interesting solution. Uh, I've had comments like, oh, that was a different implant
(38:20):
size and a different location than I would have normally used. I tried it, and it was very stable. So, you know, sometimes it doesn't make a lot of sense in terms of conventional solutions.
SPEAKER_04 (38:29):
Yeah.
SPEAKER_08 (38:30):
So it's a very human solution, where they have to trust me and say, okay, you know, they did it right all this time.
SPEAKER_04 (38:35):
Yeah.
SPEAKER_08 (38:35):
They're gonna be right again. There's a lot of burden in that too, right? Yeah, because I have to be right every time. Otherwise, your first-mover advantage goes away.
SPEAKER_04 (38:43):
Yeah.
SPEAKER_08 (38:44):
Second-mover advantage, I think, I'm less worried about in my field, because, uh, in 3D printing, there's always a better, faster, or different technology that comes. Uh, and somebody can say, I'm gonna spend another few thousand dollars to get another printer, because this one is better for XYZ reasons.
SPEAKER_00 (39:01):
Yep, every day.
There's something new.
SPEAKER_08 (39:04):
Um, I think going
back to your comments on
surgical planners, I thinkthat's second move advantage.
SPEAKER_03 (39:09):
Right.
SPEAKER_08 (39:10):
Because both
surgeons and radiologists hate
software for one reason oranother.
If you know how to solve theirpain points, then second mover
just looks at the problems andthen solves them.
Great.
Uh, but that's I think for sortof incremental technologies.
SPEAKER_04 (39:25):
Yeah.
SPEAKER_08 (39:26):
If we're going ground-up, it's very difficult to follow. It means you have to start from scratch and believe that we're doing something right. And I think if there are second movers in my field that are working directly with me, that's validating that I'm doing the right thing. So I'm happy to welcome them. So please, uh, by all means, enter digital twins.
SPEAKER_00 (39:47):
Yeah, well, a couple of comments. First of all, I do agree, I think relationship building with the clinicians is a barrier to entry for scaling up your product. And, uh, I have personal experiences with new technology, PACS systems, EHR systems, and the customer
(40:07):
service from the software provider makes a huge difference in whether or not we're gonna continue to use something.
SPEAKER_02 (40:13):
Yeah.
SPEAKER_00 (40:14):
And you know, when
certain when service degrades or
going away, that's when weactually stop purchasing.
So yeah, building thoserelationships are huge.
And the other thing I want tomention when you said that
there's a huge um, not huge, buta resistance to new software and
stuff like that.
(40:34):
Recently, I just realized my mother is using ChatGPT. She's in her 70s and she enjoys it. She's creating all sorts of stuff every day. And I'm just like, you know what? She's not supposed to know how to use this. Why is she using this? And my conclusion is, because they made the UI/UX so simple
(40:56):
that a 70-year-old who didn't know how to use computers and doesn't understand engineering can totally just play with it.
SPEAKER_02 (41:02):
Yeah.
SPEAKER_00 (41:02):
So 3D, 3D printing, for example: 3D printers. I got a couple of 3D printers and nothing ever worked. And I'm fairly skilled. And I think the bottom line is the technology just hasn't, the UI/UX, or the technology, hasn't evolved to the point where it can scale; sometimes that's the problem.
(41:24):
So, what do you think ofdeveloping a platform that's
friendly to people who are gonnause it?
SPEAKER_08 (41:30):
Yeah, a couple
comments on that.
I think the barrier to entry isvery difficult.
But um, you know, in Chat GPTand other models, if there's no
barrier to entry, it's free.
Then you're gonna try it.
SPEAKER_01 (41:41):
Yeah.
SPEAKER_08 (41:42):
And you're gonna
have an experience.
Uh I personally love and hateit.
SPEAKER_01 (41:46):
Me too.
SPEAKER_08 (41:47):
It never works on
code, it just drives me crazy.
And it works great in you know,text that I need to sort of
adjust for whatever reason.
So, you know, I use it with agrain of salt every time.
And um, the interesting thing isum the access to entry, I think,
is always, you know, I'm anengineer, so I'm guilty as
(42:09):
charged.
I'm creating engineeringproducts, hoping that surgeons
will understand it.
So up until this point, maybefour years into the company, we
built things that we knew weregonna be good because I know
what needs to be fixed.
And um I worked at a largecompany for seven years, working
directly with consultantsurgeons.
(42:30):
So I knew exactly what they wereasking, and I knew exactly what
we would give them, and notnecessarily what they were
asking, because oftentimesthey're looking for the faster
horse.
SPEAKER_01 (42:40):
Right.
SPEAKER_08 (42:40):
And I know I can do much better than that. So, good point. Vent Creativity's approach has always been: yeah, yeah, yeah, we know what you want, you know, a 3D planner for your mechanical axis, but that's not gonna solve anything. That's gonna be the same technology, automated, getting the same outcomes: useless. So we sort of listen, but at the same time, we do what we know is
(43:03):
gonna help.
But at the tail end, now, in the past year, the priority has been user experience. So, understanding how to take, uh, a PACS or a radiology system and get rid of six million buttons, to say, here are the five buttons, they're all very obvious. And if you click this, this is gonna happen. And if you click here, this is what happens.
(43:25):
So uh the goal is almost to nothave user instructions.
We have to have it because FDAmandates it, but you should not
need it.
It should be obvious to you.
And you know, when you clicksomething, what's gonna happen?
Very much like Apple.
SPEAKER_00 (43:38):
That would be
amazing because I'll never read.
SPEAKER_08 (43:40):
Right.
I'm an engineer, I don't read instructions; ask my wife. But, um, the goal here is, um, to translate to 2D images now, and then also 3D in augmented reality. Because my biggest pet peeve is that augmented reality is heavily used for showing really large screens, which is ridiculous to
(44:00):
me.
And I think I'm worried that ARis gonna go away again.
It comes and goes every 10years.
I I know it's about to collapseagain because a lot of the
companies are falling down andnow it's turning to consumer
glasses, which is not veryuseful.
The issue is I think lack ofimagination on what to do with
AR.
(44:20):
It's always the same.
Uh we can show instructions, wecan show directions or whatever.
So in surgery, what we're doingis we're not showing giant
screens of radiology images foryou to act on, but we're
displaying three-dimensionalbone models and ligaments and
cut planes and where to place aninstrument virtually so you can
(44:40):
sort of move your retractorthere, etc.
So it's real meaningful toolsthat you can use and improve
your surgery.
But we're a decidedly softwarecompany.
So if the hardware goes away,we'll have to go to navigation
and robotics, and there will beno AR.
So the worry is always I wake upin the morning and Google loves
(45:02):
stalking me.
So it tells me what's going onin the world of AR or AI every
morning.
So I don't really have to do toomuch research.
And I open YouTube and it givesme the latest and greatest on
the things that I need to know.
But yeah.
SPEAKER_00 (45:16):
That's, that's a very interesting perspective, because usually, you know, most of the people I interview are from the hardware side. They don't worry about the hardware going away, because they represent that. Right. And you're the software provider. Now you're worried that the conduit for your software may experience a downturn.
SPEAKER_02 (45:35):
Right.
SPEAKER_00 (45:35):
I think Meta is
still gonna be there.
I think Apple is gonna be therethough.
SPEAKER_08 (45:40):
They are, but um,
obviously they're very
consumer-focused companies.
True.
So what they're creating is forconsumers, and a lot of the
glasses I need have um, youknow, trackers and inputs that
are not available in consumerlevel products.
I see.
I can always you know pivot andfigure out how to use those, but
uh I think they're notnecessarily gonna have the
(46:01):
burden of proof that FDArequires for medical grade.
So, you know, for better orworse, we'll see how it shakes
out probably in a year or two.
Hopefully, there are new glassesthat are industrial grade coming
out.
SPEAKER_04 (46:14):
Yeah.
SPEAKER_08 (46:14):
Uh, Magic Leap is sort of what we're working with right now, and they're partnering with Google. So the hope is that they're gonna have more advanced glasses out. Uh, it looks like Google's creating their own consumer-level glasses.
SPEAKER_00 (46:26):
Yes, with Warby
Parker, I think.
Yeah.
SPEAKER_08 (46:29):
Assuming they're using Magic Leap's, uh, technologies to augment it. But hopefully there's, uh, industrial-level progress, because, uh, I think there's a lot of applications for it.
SPEAKER_00 (46:41):
There are ways, Gilly. There are ways. You're in Silicon Valley; you just have to go down to the coffee shop down the street and hang out. That's what you need to do. Now, have you actually scrubbed into an operation with your software, ever?
SPEAKER_08 (46:56):
Uh with my software,
uh actually no.
Ironically, no.
I've been very uh remote.
Uh I've scrubbed into hundredsof surgeries and cadaver labs.
SPEAKER_04 (47:06):
Okay.
SPEAKER_08 (47:07):
But um, it's been um
so a lot of the places that are
currently using my technologyare away from me.
Uh this is my uh wife's job.
I'm actually going back andforth.
Our office is in New York.
SPEAKER_00 (47:19):
So, um, so where are your clinics? Where are the people using your software currently located?
SPEAKER_08 (47:23):
So right now we're
active in Orlando Health, okay
in Orlando.
SPEAKER_00 (47:27):
Right.
SPEAKER_08 (47:27):
And then HSS in New
York.
SPEAKER_04 (47:29):
Okay, yeah, I know
about that.
SPEAKER_08 (47:30):
Right.
Uh that's a good hospital.
Yeah.
And, uh, Anderson Clinic has used it several times. And now, hopefully, we're talking to some Stanford surgeons, so we'll get that started.
SPEAKER_04 (47:40):
Yeah.
SPEAKER_08 (47:40):
And uh we're talking
to uh some other major clinics
uh in the United States.
Hopefully, we'll announce themsoon.
But the goal is uh it's what'scalled a limited market release
and healthcare.
SPEAKER_04 (47:50):
Yeah.
SPEAKER_08 (47:51):
Uh, five or six sites, just to get the growing pains out of the way in terms of data structures and dealing with PACS, uh, if anybody has had to deal with it. So, how you transfer data is always a difficult thing.
SPEAKER_00 (48:03):
Right.
The the tiniest step that wenever anticipated would have a
problem.
Right.
Now, have you are you planningto go to OR at some point?
Because I know other medicaldevice people would.
SPEAKER_08 (48:14):
Yeah, yeah. So, um, I think, um, I'd like to go to every OR, and the reason is we handpicked these ORs because they vary in size and, uh, volume, in terms of how many surgeries they do, and also the implant types.
SPEAKER_03 (48:32):
I see.
SPEAKER_08 (48:32):
So the goal of Vent Creativity is to be agnostic to everything, as well as unbiased. So it should work on any medical imaging at any resolution, so we're shooting for the lowest common denominator in resolution, yeah. As well as any implant and any surgeon's hands in any hospital. That way, the goal is, sort of, rising tides.
(48:53):
So I'll have a very busy schedule once we're up and running, just going from case to case. I know, I can't imagine.
SPEAKER_00 (48:58):
Yeah, but it's impressive. Your goals are very ambitious, um, but I hope they become a reality. So I can't wait to hear your update next time. Now, in terms of the clinicians you've gotten in touch with or are currently working with: not to be ageist, but, uh, is there a difference talking to different generations of surgeons?
SPEAKER_08 (49:20):
Yeah, I mean, uh I
think I mean I I usually pick
surgeons who are open totechnology.
I have to, because it's gonna bea very difficult conversation.
Uh, but yeah, I think I shouldmention we're probably one of
the only technology companiesthat addresses sort of 100% of
the market for orthopedics.
Because um, when you're talkingabout navigation and robotics
(49:40):
and augmented reality, that's really representing anywhere between 20 and 30 percent of the market, uh, because those surgeons have to be open to it and have the upfront capital expense to build and buy those. Right. Versus 70% of the market is still looking at an X-ray and sizing an implant based on literal transparencies from, you
(50:00):
know, the '80s, uh, high school, and saying, looks about right, and then they have manual instruments to place in and then cut.
SPEAKER_04 (50:08):
Yeah.
SPEAKER_08 (50:08):
So our system is open to X-rays and other imaging. So we can say, instead of your usual three degrees that you always do, maybe do two, or whatever. So, you know, with augmented reality, we can refine a manual surgery in any small practice. So the goal here is to expand to 100%, and, um, that only happens
(50:33):
when we can go back, you know,going back to trust issues.
SPEAKER_01 (50:36):
Yeah.
SPEAKER_08 (50:37):
Uh, older and younger surgeons all have to trust the system. And it's obviously more difficult for older surgeons, not because they're older, but because they've done so many surgeries. They're like, I've done this a thousand times, I don't need your help.
SPEAKER_01 (50:50):
Exactly.
Yeah.
SPEAKER_08 (50:51):
So, you know, my job is more about, uh, negotiation and trust building than, uh, selling a technology. And I think, uh, that's where I excel as an engineer. I'm not, sort of, a full engineer; what did I used to call myself? A communicator, essentially: a bridge between an engineer and a
(51:13):
surgeon.
So I understand what both sidesare saying, and somebody has to
translate in the middle.
So I know what they're gonnarespond to.
They're gonna respond to youknow better whatever flexion
axis.
They're gonna respond to, youknow, less time in the or
they're gonna respond to lesscomplications, better outcomes,
you know, uh more surgeriesbecause you know most of them
(51:35):
are getting paid less and lessper day.
So, you know, you really have toknow your audience.
And having talked to a lot ofthese advisors in my previous
roles, I know more or less whatthey respond to.
And you know, this also goeswith VCs and every other group.
Uh, you never go into anycommunication in my field and
(51:56):
say, here's my pitch, or here's my sales talk, take it or leave it. You have to be very fast on your feet and watch their eyes and watch their emotions and say, it's not sticking, pivot.
SPEAKER_04 (52:07):
Yeah, yeah.
SPEAKER_08 (52:08):
And it has to be so
fast that it doesn't feel uh
defensive.
So it's a whole thing you haveto build as a as a founder, and
it's not something you're bornwith.
So you just have to try a fewhundred times.
SPEAKER_00 (52:20):
Personalized,
personalized pitch for
everybody.
Exactly.
SPEAKER_08 (52:23):
Personalized surgery
and personalized pitches.
SPEAKER_00 (52:26):
That's a good
salesperson.
SPEAKER_08 (52:27):
Exactly.
SPEAKER_00 (52:27):
You know, there are
actually papers published
recently about 3D printed pre-surgical models, or something similar along that line.
And they got feedback from different levels of surgeons, you know, surgeons who are very new in training versus those who are highly experienced.
And usually for the highly experienced people, it just
(52:50):
doesn't improve their outcome, doesn't improve their economics.
Well, actually, on the economics I would walk that back a little bit.
I think it will improve their economics, except a lot of times they haven't thought about it.
SPEAKER_08 (53:04):
I have a love and
hate relationship with
personalized um cutting guides.
Yeah.
Because you're sort of stuck with what was planned.
SPEAKER_01 (53:11):
Right.
SPEAKER_08 (53:11):
There's no way
around it at that point.
So either you 100% ditch it or 100% go for it.
So I think back in the early 2000s, they were working on um plastic guides that were flexible, so they could sort of change alignment on the fly if needed.
SPEAKER_00 (53:27):
Yes, yes, I heard
actually they still do.
They still do, they can actually adjust the cutting guide now interactively.
SPEAKER_08 (53:33):
It needs to happen
because then you're stuck.
You're sort of stuck, and then the surgeon is frustrated because they have to throw it away and then go to plan B.
SPEAKER_04 (53:40):
Yeah.
SPEAKER_08 (53:40):
But uh I think
really uh what it comes down to
with um 3D printing and guides is, um, yeah, again, trust building with the surgeon.
They have to absolutely know what they planned, and they have to know how it's gonna fit, and they have to be happy with the plans, otherwise it just turns into frustration.
SPEAKER_00 (53:59):
Yeah.
Well, it's funny that you talk about surgical guides, because that's not a topic I was planning to talk about, but I have a lot of background in that.
Yes, and also the field is definitely getting disrupted by, I mean, just the surgical guide alone is getting disrupted by the robotic side of things; all the orthopedic implant companies now have their own various robots to work
(54:21):
with their implants.
And you guys, yeah, the virtual surgical planning.
Now, are you available intraoperatively?
I'm assuming, or is this purely pre-operative?
SPEAKER_08 (54:32):
Right now, cleared for pre-op, and we are about to design freeze our intra-op.
SPEAKER_00 (54:36):
Okay.
SPEAKER_08 (54:37):
So in intra-op, we're going to sort of probably have two um applications at the same time.
One will be your typical infrared tracking with typical cameras.
And then the second will be the augmented reality tracking, instead of the infrared.
SPEAKER_04 (54:51):
Okay.
SPEAKER_08 (54:52):
And we're also
working on direct bone tracking,
so there will be no need for trackers at all.
So that's sort of the gold standard: you put on your glasses, it shows you where the cut planes are without any trackers, and you start cutting.
So there's zero uh friction going into surgery.
SPEAKER_00 (55:08):
And this is
superimposed with all your uh
wave point or cloud-based um system.
SPEAKER_08 (55:13):
Yeah, you can turn
things on and off with your
voice so your hands are free.
And you can say, internal ligaments.
SPEAKER_00 (55:19):
The density map, the
dynamics of I mean the dynamic
information on the ligaments as well, intraoperatively.
SPEAKER_08 (55:25):
Yeah, I think that's our value add: you can see all the soft tissue and ligaments.
So if you're putting tension and you're trying to see if that's gonna be a viable location for an implant, or if you put in what's called a, um, trial.
Yeah.
So they're not quite implants, they're just plastic pieces.
You can move the knee around, and then you can see virtually if the ligaments are getting strained or not.
(55:47):
Sometimes one ligament gets strained and there's just no way around it.
But the surgeon sees through their glasses exactly which fiber to release.
That could be like a tiny nick, and that's gonna make a whole difference, so the patient doesn't have pain.
So we can give instant feedback, and they can also check intra-op if the pre-op plan was correct or not.
If it's not, you can ask for a recalculation based on the
(56:08):
ligament tensions.
So all of that is without a physical tensioner, which is typically a hardware-based system that's, you know, going back to value-based care, adding more hardware and more things to work with for the surgeon.
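A minimal sketch of the geometric check described above, assuming hypothetical ligament attachment points and thresholds; the platform's actual soft-tissue model is not described in the episode, so every name and number below is illustrative only. The core idea is comparing each fiber's length under a trial implant pose against its resting length and flagging over-strained fibers.

```python
import numpy as np

# Hypothetical ligament fibers: femoral origin, tibial insertion (mm), and a
# resting length. Purely illustrative values, not clinical data.
FIBERS = {
    "MCL_anterior":  {"femur": np.array([5.0, 10.0, 0.0]),  "tibia": np.array([4.0, -55.0, 2.0]),  "rest_len": 62.0},
    "MCL_posterior": {"femur": np.array([-3.0, 8.0, -4.0]), "tibia": np.array([-4.0, -50.0, 0.0]), "rest_len": 58.0},
    "LCL":           {"femur": np.array([2.0, 12.0, 40.0]), "tibia": np.array([0.0, -48.0, 42.0]), "rest_len": 61.0},
}

def fiber_strain(fiber, tibia_pose):
    """Engineering strain of one fiber after a rigid 4x4 transform of the tibial insertion."""
    insertion = tibia_pose @ np.append(fiber["tibia"], 1.0)   # move the insertion with the trial pose
    length = np.linalg.norm(insertion[:3] - fiber["femur"])   # current fiber length
    return (length - fiber["rest_len"]) / fiber["rest_len"]

def overstrained(tibia_pose, threshold=0.05):
    """Fibers whose strain exceeds the threshold for this trial placement."""
    return {name: round(s, 3) for name, f in FIBERS.items()
            if (s := fiber_strain(f, tibia_pose)) > threshold}

# Example trial pose: shift the tibia 3 mm laterally and re-check the fibers.
pose = np.eye(4)
pose[0, 3] = 3.0
print(overstrained(pose))   # any fibers listed are candidates for a small release
```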
SPEAKER_04 (56:23):
Yeah.
SPEAKER_08 (56:24):
So the goal here is
in the intra-op version, um, either augmented reality or just typical robotics showing where to cut.
And uh, going back to your comment on patient-specific guides, we don't have patient-specific cutting guides.
We have an interesting solution, I think, which is we can 3D print uh an angel wing, which is essentially um something you put
(56:47):
on a cut guide.
So your typical off-the-shelf cut guide you would use for manual surgery.
SPEAKER_04 (56:52):
Okay.
SPEAKER_08 (56:53):
There are cuts and slots that the saw would go into.
You can put our little angel wing in there, and it's instrumented, so you can track it in space.
Now your cutting guide is instrumented.
I see.
So now your, um, essentially typical manual instruments have become a million-dollar robot.
And now you know where to place it exactly as you planned, pin
(57:15):
it, and that's a robotic surgery for a few thousand dollars instead of a million dollars.
SPEAKER_00 (57:20):
You know, I think I saw a very similar publication on this years ago by a Chinese researcher.
SPEAKER_08 (57:27):
Yeah.
SPEAKER_00 (57:27):
So yeah, the concept
was definitely out there.
SPEAKER_08 (57:30):
The concept, uh, I think where we added value in 3D printing is my lack of trust in 3D printers.
So you can 3D print a plastic piece and it doesn't have to be accurate.
So uh, obviously coming from regulation and quality, my issue is always, okay, now we have to register this thing to make sure it was printed correctly.
Right.
But I don't care about that.
(57:51):
I put the instrument on it, track the instrument.
Then you can digitize the bottom surface, and even if it's completely off from what you planned as a CAD model, that plane is now accurate relative to the tracker.
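A rough sketch of the registration idea he describes: probe a few points on the printed guide's slot surface with a tracked stylus, fit a plane in the tracker's coordinate frame, and use that measured plane rather than trusting the print. Function names, frames, and numbers are placeholders, not the company's actual pipeline.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through probed points: returns (centroid, unit normal)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The singular vector with the smallest singular value of the centered
    # points is the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

def cut_plane_in_tracker_frame(probed_points_tracker):
    """The print may deviate from its CAD model; report the plane that was actually built."""
    centroid, normal = fit_plane(probed_points_tracker)
    return {"point": centroid, "normal": normal}

# Example: five stylus touches on the slot surface, already expressed in the
# tracker (camera) frame -- illustrative numbers only.
probes = [
    [10.1, 0.2, 5.0],
    [12.0, 0.1, 8.1],
    [ 9.8, 0.3, 9.9],
    [11.5, 0.2, 6.4],
    [10.7, 0.1, 7.5],
]
plane = cut_plane_in_tracker_frame(probes)
print(plane["normal"])   # compare against the planned resection plane normal
```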
SPEAKER_00 (58:04):
Do you have a
picture of those?
So maybe later on I can get it from you.
SPEAKER_08 (58:07):
Yeah, I think we should be able to do that.
SPEAKER_00 (58:08):
So just have a
concept of what it looks like.
I mean, if you're allowed to share.
SPEAKER_08 (58:12):
I mean, yeah, it's an incremental technology as far as I'm concerned, because our goal is to be trackerless anyway.
SPEAKER_04 (58:19):
Yeah.
SPEAKER_08 (58:20):
So it's not
something we're likely gonna
commercialize.
Uh, we may or may not; we'll see.
SPEAKER_04 (58:25):
Yeah.
SPEAKER_08 (58:25):
It kind of depends
on sort of, as I said, how the
technology progresses.
SPEAKER_00 (58:29):
Yeah, I would love
to see the intraoperative phase of your product.
Now, do you feel like the competition from the larger players is gonna threaten your survival?
Because, you know, obviously J&J, Stryker, they're all working on their own thing internally.
SPEAKER_08 (58:47):
Um, no.
unknown (58:49):
Okay.
SPEAKER_08 (58:50):
I think uh, I mean,
having come from large
companies, right?
Um, I look at it as a life cycle.
So engineers like me work there for a while, but on very specific um projects that are adding value.
SPEAKER_04 (59:03):
Yeah.
SPEAKER_08 (59:03):
And large companies
make their money on um implants.
So a robot sells for a million, let's say.
SPEAKER_00 (59:12):
Yes, a million dollars, as you said earlier.
SPEAKER_08 (59:15):
Right.
Let's say you sell a hundred robots over the year, and then another hundred, and then eighty, and, you know, there are only so many robots you can sell.
Right.
And maybe you sell software packages, et cetera.
But at the end of the day, the software packages and everything else is a cost center for large companies.
So they're always looking for new innovation that they can buy
(59:38):
instead of build, because internally those engineers are better used for other things than being a cost center.
So the industry sort of renews itself in terms of innovation.
Uh, there's no value in companies building it themselves... I'm giving so many secrets away here.
But R&D versus M&A...
SPEAKER_00 (01:00:00):
Yeah.
The audio recording is still in progress.
(01:00:26):
Okay.
Where do we start again?
Or are you talking about a largecompany?
Go back to the M&A and R&D.
Good stuff.
SPEAKER_08 (01:00:36):
I'll start on R&D and M&A, and that'll be like a nice um break.
SPEAKER_00 (01:00:40):
Okay.
One, two, three.
Start.
SPEAKER_08 (01:00:43):
Right.
So R&D and M&A are two different buckets.
That's sort of the not so, um, not so subtle difference.
Uh, for R&D, the goal is to build products that sell, at the minimum cost possible.
M&A is buying products.
And my role has been pricing out M&A compared to R&D.
(01:01:04):
So how much can we build it for?
How much can we buy it for?
Buy usually wins.
So the goal here is finding innovative companies, buying them, and rolling them into the system.
Then it's already pre-built and ready for the market.
Oftentimes already in the market, like us.
And um, you know, it costs a lot of money and time to go into
(01:01:25):
smaller centers.
Even in the big centers, it takes money and time to enter with software.
So I don't see large companies as competitors, I see them as partners.
Yeah.
So we're talking to basically all the large players in the market right now, because they're all interested in what we're building.
And they know that they're not going to build it: the typical
(01:01:46):
life cycle for any kind of medical device is about seven years in large companies, but, you know, about a year for us or less.
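A purely illustrative back-of-the-envelope version of the "price out M&A against R&D" exercise he mentions; all figures are invented, and real deals obviously involve discounting, integration cost, and risk. The point is only that a seven-year internal build forfeits revenue that an acquired, already-marketed product earns immediately.

```python
# Toy build-vs-buy comparison over a fixed horizon (all figures hypothetical, in $M).
def build_value(dev_cost, years_to_market, annual_revenue, horizon=7):
    """Net value of building in-house: pay dev cost, earn revenue only after launch."""
    revenue_years = max(horizon - years_to_market, 0)
    return annual_revenue * revenue_years - dev_cost

def buy_value(acquisition_price, annual_revenue, horizon=7):
    """Net value of acquiring: pay up front, earn revenue for the whole horizon."""
    return annual_revenue * horizon - acquisition_price

# Seven-year internal cycle vs. acquiring a product already on the market.
print("build:", build_value(dev_cost=40, years_to_market=7, annual_revenue=15))  # -40
print("buy:  ", buy_value(acquisition_price=60, annual_revenue=15))              # 45
```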
SPEAKER_00 (01:01:54):
I have to say,
Gilly, this is a secret sauce
for success.
It's just knowing the pain points of the larger companies.
Um, it's really hard to know unless you're already an industry insider.
SPEAKER_08 (01:02:08):
Um it creates
innovation.
So I think it's an interesting cycle where people leave because they know they have a good idea, but they're not gonna be able to build it in a big company.
So then it spurs innovation outside, and then oftentimes they're brought back.
So it goes in cycles.
SPEAKER_00 (01:02:27):
Yeah, I've definitely known stories along that line with big, uh, big implant providers.
Right.
Now, I think we have really had a great conversation so far.
Um, I want to just kind of end on a section about future outlook.
Sure.
Um, we mentioned there were some industry cycles, and we could be
(01:02:51):
either up or down with various keywords.
Um, what do you think in three to five years, what do you want to see?
SPEAKER_08 (01:03:02):
In healthcare as a
whole?
SPEAKER_00 (01:03:04):
It's an open-ended question: in healthcare, in particular the orthopedic surgery world.
SPEAKER_08 (01:03:10):
Sure.
SPEAKER_00 (01:03:11):
And also with Vent
Creativity.
SPEAKER_08 (01:03:13):
Okay.
Yeah.
I mean, as Vent, uh, I think our goal is much larger than orthopedics.
Uh, we're a digital twin company in healthcare, yeah.
Not in orthopedics.
So our goal is to expand a digital human uh to the entire field.
SPEAKER_04 (01:03:28):
Yeah.
SPEAKER_08 (01:03:29):
Uh our goal is to be
the Amazon of healthcare where
you can come in and use our marketplace for your specific needs.
So um, going back to how we were founded in 2020, um, because orthopedics was considered um non-priority surgery, it was shut down.
SPEAKER_04 (01:03:46):
Yes.
SPEAKER_08 (01:03:47):
So we actually
started off with lung analysis,
heart analysis, et cetera, to understand, you know, where to find nodules for COVID, et cetera.
SPEAKER_01 (01:03:56):
Yeah.
SPEAKER_08 (01:03:56):
And uh, we're not forgetting that.
So we're right now working on hernia, as well as looking into hearts and oncology in other areas.
The goal is to see where else we can be useful.
Yeah.
Looking into full-body MRI scans for preventive care.
SPEAKER_04 (01:04:14):
Yeah.
SPEAKER_08 (01:04:14):
So I think three to
five years, we would have made a
significant change in orthopedics in terms of outcomes for the patients.
We're talking a 10-15% increase.
The rest of it we probably can't address, from, you know, patient biases in terms of pain, etc.
But I think mechanically we can address that with our system.
(01:04:35):
And then uh, at the same time, growing other verticals where we can use Minerva for other applications, where someone like me is not gonna be able to run that show.
We're gonna need people who are experts in those buckets.
But I think um, I have the vision to sort of start that and bring in people that are uh in my mindset, to say, how can we break this
(01:04:58):
down, because it's not working?
And how can we build it back up?
So I think digital twinning is gonna expand in uh healthcare with increasing compute power.
SPEAKER_04 (01:05:08):
Yeah.
SPEAKER_08 (01:05:09):
But at the same
time, cynically, I'm not um you
know, blind to the fact that compute power is drawing a lot of energy.
So uh, in three to five years, we need solutions in energy.
And I'm always hopeful for fusion and fission.
Um, so I think those are very viable ways of solving a lot of problems.
Today I was looking at um geothermal energy, that was very
(01:05:31):
interesting.
SPEAKER_04 (01:05:32):
Yeah.
SPEAKER_08 (01:05:32):
Uh blasting through
the earth to get to that.
But energy and water are gonna be major issues for everyone.
SPEAKER_04 (01:05:39):
Yeah.
SPEAKER_08 (01:05:40):
Uh, but uh at the
same time, healthcare is not
gonna really progress, or basically nothing's gonna progress, if AI is the priority, because that's sort of a zero-sum game in terms of the pie.
Uh, we can't all compete for energy; somebody has to lose.
So hopefully we can expand the pie with new technologies.
SPEAKER_00 (01:05:59):
Yeah, I mean, there
are things that we know we don't
know, and there are things we don't know we don't know.
What I mean is, you know, uh, what is it? DeepSeek from China.
For example, it's kind of an out-of-the-blue kind of scenario.
Right.
I mean, I don't know the whole story about it, but the scenario of a better foundation model, uh, that could dramatically reduce the
(01:06:22):
kind of energy that we use, that's one scenario that's a possibility.
We just don't know if it's there.
SPEAKER_08 (01:06:28):
Yeah.
Um, someone stands up as a contrarian, and we need a lot of contrarians who say there are better ways to solve that.
Yeah.
Um now, final reflections here.
Um, well, actually, a couple of things.
One is, um, what is your major challenge?
What are the major challenges right now that you're facing?
SPEAKER_08 (01:06:49):
I think major
challenge is explaining the story.
Uh, it's always difficult.
I think we're still at that phase of people not quite understanding uh what that means, what digital twin means.
SPEAKER_04 (01:07:01):
Yeah.
SPEAKER_08 (01:07:02):
Um I'm trying to be
very careful not to say
simulation, because I think there's a very distinct difference between simulation and digital twinning.
SPEAKER_00 (01:07:08):
Okay, what's the
difference?
SPEAKER_08 (01:07:09):
Okay.
Simulation is essentially understanding more or less how a system moves, based on average rules and based on average computing.
I think digital twinning is essentially taking the exact human, or whatever object, and using that exact information to solve it.
There's a large difference in terms of uh specificity, but uh
(01:07:34):
there may or may not be a difference in the solution.
But if there is, then the digital twin should win, because it's very specific to that person in the healthcare aspect.
I think in digital twin, we have to really concentrate on that aspect.
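One way to make that distinction concrete, as a sketch: the same stand-in planning function run once with population-average inputs (a simulation) and once with this patient's own measured parameters (a digital twin). The parameter names and the toy formula are invented for illustration, not the company's actual solver.

```python
from dataclasses import dataclass

@dataclass
class KneeParams:
    femoral_valgus_deg: float   # coronal alignment
    ligament_stiffness: float   # N/mm
    bone_density: float         # g/cm^3

# Generic simulation: population-average inputs, so every patient gets the same answer.
POPULATION_AVERAGE = KneeParams(6.0, 80.0, 1.1)

def plan_resection(p: KneeParams) -> float:
    """Stand-in for a planning solver: returns a target cut angle in degrees."""
    return p.femoral_valgus_deg - 0.01 * (p.ligament_stiffness - 80.0)

# Digital twin: the same solver, driven by this patient's own measurements
# (segmented imaging, measured laxity, etc. -- values here are illustrative).
patient_twin = KneeParams(femoral_valgus_deg=9.5, ligament_stiffness=65.0, bone_density=0.9)

print("average-based plan:   ", plan_resection(POPULATION_AVERAGE))  # 6.0
print("patient-specific plan:", plan_resection(patient_twin))        # 9.65
```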
And uh, again, going back to the whole bias thing, I think going into the solution, people really need to be very mindful of bias in
(01:07:55):
AI.
And I don't see a lot of that.
So that's my biggest pet peeve in the field: I think you need to go in there with major rules, and going back to engineering again, there have to be requirements and rules put in place on how we reduce bias going in, so that it's not disadvantaging any person or any patient or, you know, any situation.
(01:08:18):
Otherwise, uh, it's an echo chamber.
Then it's gonna work for enough people that it's good enough, but maybe that's a marketing tool, not really an engineering solution.
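A minimal sketch of the kind of requirement he's describing: before release, measure whether the model's error differs across patient subgroups and fail the check if the gap exceeds a tolerance written into the requirements. Subgroup labels, error values, and the threshold are placeholders, not anything from the actual product.

```python
import numpy as np

def subgroup_error_gap(errors, groups):
    """Mean absolute error per subgroup, plus the worst gap between any two subgroups."""
    errors, groups = np.asarray(errors, dtype=float), np.asarray(groups)
    per_group = {g: float(np.abs(errors[groups == g]).mean()) for g in np.unique(groups)}
    gap = max(per_group.values()) - min(per_group.values())
    return per_group, gap

# Illustrative validation data: planning error (degrees) and a subgroup label per case.
errors = [0.4, -0.2, 0.5, 1.8, 2.1, -1.6, 0.3, 1.9]
groups = ["A", "A", "A", "B", "B", "B", "A", "B"]

per_group, gap = subgroup_error_gap(errors, groups)
MAX_ALLOWED_GAP = 1.0   # tolerance stated up front in the requirements
print(per_group, "gap:", round(gap, 2), "PASS" if gap <= MAX_ALLOWED_GAP else "FAIL")
```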
SPEAKER_00 (01:08:29):
So maybe your tools can be used for, uh, for China.
SPEAKER_08 (01:08:34):
I mean, our tools can be used around the globe, so we have no boundaries.
We are used in Switzerland and Turkey right now, outside of the United States.
And uh, we're actually just starting in Belgium uh for research.
SPEAKER_00 (01:08:47):
Uh we'll get into
the NZRs eventually, but maybe
one of these days also to Asian countries, because, you know, it's a totally different body size and type and bone density.
SPEAKER_08 (01:08:56):
Japan is a very
large market in terms of
technology and orthopedics.
Going to Japan tomorrow.
Yeah.
Because Japanese uh patients are very different than uh Caucasians.
Yeah.
And the bone structures are not really allowing those implants to fit correctly.
Right.
So how do we fit to their specific alignments, which vary a lot?
This is gonna probably lose half the crowd, but uh, you know,
(01:09:19):
bow-legged, essentially.
And how do you solve for that so that they're not forced into a different alignment?
That, yeah, is essentially our core.
So we can solve it, but with the implants available to us.
And what we found is there's a market for patient-specific implants, only 10%, but that's still 10% that's underserved.
(01:09:40):
So, you know, that goes to patient satisfaction.
Yeah.
Uh, 10% we can solve for mechanical alignment.
10% we need to solve with patient-specific implants and instruments, etc.; those are sort of outliers that just can't be resolved with your off-the-shelf products.
SPEAKER_00 (01:09:56):
Yeah, I think the
10% is actually probably an overestimate.
I think the majority of people could benefit from just virtual planning.
Personalized virtual planning.
SPEAKER_08 (01:10:06):
Yeah, planning,
yeah.
I mean, I meant implants.
SPEAKER_00 (01:10:07):
Yeah, implants are
difficult because there's a lot
of economic issues with them, and we'll experience the same problem that you mentioned about surgical guides.
Yeah.
Okay, you can create this expensive personalized implant, but on the day of the operation, the patient changed.
SPEAKER_08 (01:10:22):
Yep.
SPEAKER_00 (01:10:23):
What are you gonna
do?
SPEAKER_08 (01:10:24):
Yeah, yeah.
So yeah, I think uh, going back to our fluoroscopy partners, uh, that changes.
You can have instantaneous surgery down the road, where they get a fluoro, a plan is created in 10 minutes, then you go into the OR, and then the outcome of the OR can be decided when they're getting discharged, to see how they're doing.
SPEAKER_04 (01:10:43):
Yeah.
SPEAKER_08 (01:10:43):
So there's a world
where the ecosystem is
completely a chain, where you don't need to have any delays in the system.
SPEAKER_04 (01:10:50):
Yeah.
SPEAKER_00 (01:10:51):
Okay, well, final
question, I promise: do you have any suggestions for the next generation of entrepreneurs, engineers, students?
SPEAKER_08 (01:11:00):
Stay in school kids,
man.
SPEAKER_00 (01:11:02):
Don't do drugs.
SPEAKER_08 (01:11:04):
Exactly.
Um, I think it's funny, because when I was in school, uh, what was the coolest thing when I was in school?
It almost changes every... you're always off by 10 years, right?
So I think you probably should not go for what's popular right now.
So, you know, AI and software right now is obviously the biggest thing.
(01:11:24):
But what's next?
When I was in school, I think it was materials, because everybody was getting into custom materials and um fibers and, you know, nanoparticles, et cetera.
SPEAKER_00 (01:11:34):
That is so unsexy
right now.
I just want to say that.
SPEAKER_08 (01:11:36):
No one cares.
And then, three... we were just talking about it earlier.
Ten years ago, I was in 3D printing and it was the hottest field ever.
I don't think I've heard 3D printing in a while.
So, not to throw shade at the whole crowd here, obviously, but uh, you know, it's gonna come back, but it goes in waves.
And then augmented reality and virtual reality, I think it's gonna go down again and then be back up.
(01:11:58):
So it's a matter of, I think, having that internal gut feeling of what's next.
And probably not listening to people, because oftentimes they're gonna say, well, no one's gonna need that.
Um, so I went into this field uh being a hardware and robotics engineer.
I pivoted to software.
(01:12:18):
Not because I had this master plan, but I thought, you know, that's how we solve everything.
And then AI happened right in the middle of my um progress.
SPEAKER_00 (01:12:28):
So that was almost
like you picked up something
earlier than everybody else.
SPEAKER_08 (01:12:33):
Yeah, you know, I had a few sayings; one of them is fortune favors the prepared.
So I think what I would advise to all the students and everyone else is: study game theory.
Uh, I'm biased because I come from an industrial engineering background, but yeah, game theory and operations research are the core of everything you do in the world, really.
(01:12:53):
At a very boring level, it's essentially not assuming something's gonna happen.
Uh, my wife calls me pessimistic, but I'm not really pessimistic.
I'm sort of prepared, because if I know the worst thing that can happen, I have a prepared solution for that.
Yeah.
Versus, you know, hoping it's gonna be the best case scenario.
(01:13:15):
So now I'm prepared for all the different scenarios possible.
And then hopefully I have a smart counterpart uh who's going to sort of behave uh as you would expect, and even if they don't, you sort of have an idea of what's gonna happen.
And I think uh engineering principles and these um game theory principles really go a long way as core parts of your
(01:13:38):
life, as well as social skills, obviously.
But with these, I think you can't really go wrong, because if you believe in a product, then you know how to position it based on, you know, how people are responding to it.
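A tiny illustration of the "prepare for the worst case" framing he's describing, using a maximin rule over a made-up payoff table: pick the move whose worst outcome is best, rather than the move with the best upside. The moves, responses, and payoffs are all invented for the sketch.

```python
# Hypothetical payoffs for each of our moves against possible counterpart responses.
payoffs = {
    "lead_with_outcomes_data": {"receptive": 9,  "skeptical": 4, "hostile": 2},
    "lead_with_cost_savings":  {"receptive": 7,  "skeptical": 6, "hostile": 3},
    "lead_with_new_tech":      {"receptive": 10, "skeptical": 2, "hostile": 0},
}

# Maximin: choose the move whose worst-case payoff is highest.
best_move = max(payoffs, key=lambda move: min(payoffs[move].values()))
print(best_move)   # "lead_with_cost_savings": its worst case (3) beats 2 and 0
```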
SPEAKER_00 (01:13:51):
Absolutely.
You know what?
I'm so inspired today.
I'm gonna read these right after this podcast.
Well, thank you so much for coming over, and I really enjoyed this conversation.
And hopefully we can have another catch-up chat sometime in the future.
SPEAKER_08 (01:14:06):
Thank you, Gary.
SPEAKER_00 (01:14:09):
This podcast is for
educational and informational
purposes only.
The views expressed do not constitute medical or financial advice.
The technologies and procedures discussed may not be commercially available or suitable for every case; always consult with a licensed professional.