
April 15, 2024 89 mins


Learn about the amazing AI-enhanced advances for capture and rendering of 3D models for printed wearables and virtual clothing, avatars, and virtual worlds - all showcased at GDC.

AI has revolutionized 3D scanning, from markerless motion capture to high-quality avatars generated from text and images. You can now use AI to create 3D scans using only your cell phone, and these scans can be used to make beautiful 3D-printed objects and professional-quality wearables. You can even use AI to re-render your old scans into superior new high-quality 3D models. All of these new features combined are making 3D printing more efficient and accessible to everyone.

Ryan also covers these AI 3D Advances:

-NeRFs, 3D Gaussian Splatting, and SMERF advances for 3D world representation, and AI models for photogrammetry.

-Real-time error correction during the printing process that enables faster 3D printing.

-Image to 3D full physics clothing simulation for video games and in-world models.

-Real-time facial capture.

-Real-time physics.

-Avatar generation directly from text and images.

About Ryan Sternlicht:
Ryan Sternlicht is a San Francisco-raised educator, researcher, advisor, and maker, who has advised a number of startups in the fields of AR, VR, AI, neurotech, and 3D manufacturing.

As a maker, he has volunteered or advised a number of makerspaces and hackerspaces, as well as numerous technical non-profit organizations in California and upstate New York. 

In the Bay Area, he volunteers and teaches at Noisebridge hackerspace and staffs a large number of film festivals and conventions, such as Aaron Swartz Day, Offkai Expo, Maker Faire, and Open Sauce. He also works at Alamo Drafthouse New Mission and helps teach game development at CCSF.

FOLLOW Mindplex Podcast:

WEBSITE → https://magazine.mindplex.ai/

Mindplex News → https://magazine.mindplex.ai/news/

Mindplex TWITTER → https://twitter.com/Mindplex_ai/

FACEBOOK → https://www.facebook.com/MindplexAI/

YOUTUBE → https://www.youtube.com/@mindplexpodcast/

Spotify → https://open.spotify.com/show/3RbGNlj9SX8rYOVblnVnGg/


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Lisa Rein (00:03):
Hello, everybody, welcome to the Mindplex Podcast. We're very excited today to have Ryan Sternlicht. Hi, Ryan. He is an expert in many things, and we've worked together on some virtual reality projects, artificial intelligence projects, neurotech, all sorts of exciting

(00:24):
things. Over the years, it's been exciting for me to learn from him. And so when it was time to start covering VR and XR on these shows, I thought, oh, I know the person to call for sure. So he came on before, telling us about the latest in headgear for VR, and we looked at the Apple Vision Pro and

(00:47):
all the latest headgear. You can find that show on our channel. And today, we're going to talk about the AI 3D revolution. When he got back from GDC and was telling me about all the stuff he saw, it was clear that a revolution has happened in scanning and rendering. AI has

(01:10):
made it so you can not only do very high quality scans with your phone now, but you can even re-render your old scans into better models with this technology. So that's all new and exciting, and I thought, that's it, we've got to

(01:31):
do a show on that. And I will let Ryan take it away. Tell us a little bit more about yourself, Ryan, and then jump into this amazing shit that you saw at GDC this year.

Ryan Sternlicht (01:45):
Yep. Thank you so much, Lisa. So I'm Ryan Sternlicht. I was on here a few weeks ago now.

Lisa Rein (01:59):
Yeah, time flies.
Yeah.

Ryan Sternlicht (02:03):
And I do a lot of different research for different people, as well as helping advise people on different upcoming technologies quite a bit. So these days that is mostly focused around AI, VR, different types of neurotech

(02:27):
and brain-computer interface stuff, as well as some 3D manufacturing, and different forms of how this stuff integrates with the consumer side, but also the manufacturing or business side of different stuff. And recently, I went to

(02:53):
GDC. I've gone to GDC for a while now; this one was my ninth year. And so

Lisa Rein (03:04):
I really want to jump right in and talk about these one by one, because there's a lot to talk about. There are basically three or four or five specific technologies, I think, that I really wanted to make sure we covered. So let's start off. Well, you can let us know where to start off. What's first,

(03:25):
the scanning? The rendering? Probably the scanning?

Ryan Sternlicht (03:28):
Okay, let's do the scanning one first. So, 3D scanning. I think a lot of people here have heard of 3D scanning in some form, whether it's "oh, we 3D scanned a building," or we are

(03:50):
turning an object in real life into a 3D model for putting in a video game, or for historical preservation, which is actually one of the most critical uses of 3D scanning. Over the past 20 years or so, it's changed a lot. I first got started 3D

(04:15):
scanning back in 2013, when I first learned about the basics of what is kind of the back end of all this, which is photogrammetry: creating 3D models from photos by using triangulation between photos to find the distance to different

(04:45):
points. And then I talked with a number of companies at GDC, and there have been a lot of different changes in the past year or so. Like, the technology I was using in 2013: it would

(05:10):
take me a week to get a 3D scan of my head with about three hundred photos. I would take a video going around my head with my phone, convert that video to a set of photos, and then I

(05:32):
used a thing called structure from motion (SfM) to convert that into a point cloud, which then needed to be converted into a 3D model mesh. So, about a week.
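To make that video-to-mesh workflow concrete, here is a minimal sketch of the classic pipeline Ryan describes: frames pulled from a walk-around phone video, structure from motion, then point cloud to mesh. The tool choices here (OpenCV, COLMAP, Open3D) are assumptions for illustration, not necessarily what he used back in 2013.

```python
# Sketch of the classic photogrammetry pipeline:
# video -> photo set -> structure from motion -> point cloud -> mesh.
import subprocess
import cv2
import open3d as o3d

def video_to_frames(video_path: str, out_dir: str, every_n: int = 10) -> int:
    """Dump every Nth frame of a walk-around video as a JPEG photo set."""
    cap = cv2.VideoCapture(video_path)
    saved, i = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            cv2.imwrite(f"{out_dir}/frame_{saved:04d}.jpg", frame)
            saved += 1
        i += 1
    cap.release()
    return saved

def run_sfm(image_dir: str, workspace: str) -> None:
    """Hand the photo set to an external SfM tool (COLMAP assumed here)."""
    subprocess.run(["colmap", "automatic_reconstructor",
                    "--image_path", image_dir,
                    "--workspace_path", workspace], check=True)

def point_cloud_to_mesh(ply_path: str) -> o3d.geometry.TriangleMesh:
    """Turn the fused point cloud into a mesh via Poisson surface reconstruction."""
    pcd = o3d.io.read_point_cloud(ply_path)
    pcd.estimate_normals()
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
    return mesh
```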

Lisa Rein (05:54):
So let's wait to compare the three different kinds of scans, like we're going to, because I really want you to jump in and tell us about the new stuff. Okay,

Ryan Sternlicht (06:03):
Yeah. Okay, so you've got KIRI Engine here. That's one of the companies at GDC, and they are one of the bigger players in phone-based 3D scanning tool sets. They have been around for, I think, a few years now.

(06:29):
And their stuff is really, really cool, in that it is very, very high quality for phone-based stuff, because instead of doing any processing on your phone, it uploads it to

(06:50):
their cloud, where it can be processed much more efficiently. And the big thing is, things that for years were difficult to do with photogrammetry are things like PBR, Physically Based

(07:10):
Rendering, and textures. That is things like color, reflection, the normal maps, pretty much all the stuff that makes something look like a real object and not just a blob of a certain shape. They have implemented multiple

(07:38):
different forms of 3D scanning pipelines, including stuff that utilizes the LiDAR scanners on iPhones. You can use video-to-photo-set conversion, you can 3D scan small objects, you can scan

(07:58):
rooms. It is very, very cool. And

Lisa Rein (08:06):
Is that if your phone is set up, it utilizes everything that your phone has, but if your phone doesn't have the processing power, it makes up for it in the cloud? Is that what I'm hearing? Okay,

Ryan Sternlicht (08:18):
To a degree. So most phones cannot process a 3D scan. They can render one once it's processed, but they can't process it directly. So it has to be uploaded to the cloud, which is where a lot of that stuff happens.

(08:42):
No one would ever have done something like this about five years ago, offering free exports of 3D scans, because the amount of processing required was so substantial that you would just be losing

(09:04):
money hand over fist because of how inefficient it was. But now there

Lisa Rein (09:09):
was a company doing it. Remember at the beginning of the pandemic, that company that I was doing all those scans with? Yeah. And they just went away. Do you happen to remember their name? I can't remember their name,

Ryan Sternlicht (09:21):
but I will try to find it. It's really hard.

Lisa Rein (09:25):
Yeah, I couldn't find it. But that was years ago, you know, that was like 2021, and they got bought, and then I haven't seen the technology pop up anywhere else. But the thing that they did was that, depending on how you did your scan, it might not upload. It was a huge

(09:47):
undertaking to do the scan. You know, with your phone, you had to walk around the object over and over again and do all that stuff. And basically, it's not such an arduous process now during the scan, right? That's completely different when you use your phone to do a scan. So explain that difference too, because that's a big difference. Yeah,

Ryan Sternlicht (10:09):
So this actually kind of explains some of the stuff they're doing. There is a paid model; it's not very expensive for the quality of stuff it's doing. But with the free one, you do get LiDAR room scanning, and you're able to use different things

(10:34):
like low-poly conversion, which is very valuable if you want to use the model. With the paid version, you get retopology with quad mesh, which is a very valuable thing, and PBR materials. And then there's this AI-powered featureless object mode.
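The "low-poly conversion" mentioned above is essentially mesh decimation: cutting a dense scanned mesh down to a triangle budget a game engine can handle. A minimal sketch with Open3D (an illustration of the general technique, not KIRI Engine's actual pipeline):

```python
# Reduce a dense scanned mesh to a game-ready triangle count (mesh decimation).
import open3d as o3d

def to_low_poly(mesh_path: str, target_triangles: int = 5000) -> o3d.geometry.TriangleMesh:
    mesh = o3d.io.read_triangle_mesh(mesh_path)
    low = mesh.simplify_quadric_decimation(target_number_of_triangles=target_triangles)
    low.compute_vertex_normals()  # recompute normals so shading still looks right
    return low
```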

(10:59):
The big problem for a long time with 3D scanning stuff was reflective, transparent, and translucent materials, because the way a lot of this stuff was originally processed, it couldn't handle that sort of thing very well. So,

(11:27):
over the years, different places have gotten much better at 3D scanning and understanding how to mitigate those issues. So, for 3D

(11:49):
printing, this is an example of kind of the workflow you go through to do some of these things, where you would take video of a person's head, convert it into a set of photos, then edit it in 3D modeling software, then you could 3D

(12:15):
print it. This used to be a weeks-long process. With some of the tools you have now, you can do this in a matter of hours. Like, here's a brick, and you can put

(12:35):
it in a game. Or these objects you really love but you're worried about breaking: 3D scan them, 3D print them, and then paint them. And you can do that in an afternoon now. That was not really possible before.

Lisa Rein (13:03):
What are they doing with the AI? It's a

Unknown (13:08):
number of different techniques they're using, yeah.

Ryan Sternlicht (13:10):
Can you talk a bit about some of the stuff they're doing? So the main things that are happening right now: in the past year and a half, there have been some new AI-driven photogrammetry models that are using machine-learning models, and a lot of

(13:31):
different new AI tools, because everyone's seen the proliferation of new AIs in the past couple of years, and of course that stuff would be applied to different things such as photogrammetry. So one of the main companies that's been

(13:59):
pushing this is Nvidia, and Microsoft has also been doing stuff. So this, this is not a video, that is a 3D model made from scans of images. This is using a thing called

(14:24):
NeRFs, which are neural radiance fields. It uses neural models to do a much more advanced comparison of the information between two images, to get better 3D depth data, color

(14:48):
data, and other types of stuff. All of these 3D scanning technologies have very, very similar first two or three steps: you take photos, figure out where they are in space, then you process them. But the processing is what's completely changed in the past

(15:13):
year and a half. And because those first two steps are the same as they were 10 or 15 years ago, you can still reprocess old photos and videos for 3D scans, because that part of the process has not changed in 20 years, which is really cool. So

(15:39):
a lot of people like me have been starting to try and find our old 3D scans to reprocess.
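Conceptually, the NeRF Ryan describes is a small neural network that maps a 3D position plus a viewing direction to a color and a density, which a renderer then composites along camera rays. A minimal PyTorch-flavored sketch of just that mapping (illustration only; it omits positional encoding, ray sampling, and the volume-rendering step):

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Toy radiance field: (x, y, z) + view direction -> (rgb, density)."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),       # 3D point
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)   # how "solid" space is at the point
        self.color_head = nn.Sequential(           # view-dependent color
            nn.Linear(hidden + 3, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz: torch.Tensor, view_dir: torch.Tensor):
        feat = self.backbone(xyz)
        density = torch.relu(self.density_head(feat))
        rgb = self.color_head(torch.cat([feat, view_dir], dim=-1))
        return rgb, density
```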

Lisa Rein (15:55):
So they're backwards compatible.

Ryan Sternlicht (15:57):
Yes. Which is kind of insane. That

Lisa Rein (16:00):
is, I'm surprised.
Yeah.

Ryan Sternlicht (16:04):
And the different types of so let me
find a

Lisa Rein (16:11):
question that was asked earlier in the week when I was researching this: is this anything like upscaling, where it's adding information? Is it getting more information? My question is, is it getting more information out of the file, more point information that's there? Or is it doing AI stuff to kind of upscale it, just to make it look better?

Ryan Sternlicht (16:34):
A bit of both? Kind of in between. Like, I wouldn't say it's upscaling. You can actually do that, you could apply a resolution upscaler at the beginning.
Right, right.

Lisa Rein (16:51):
I know you could do that. But that's not what
they're doing. Right? No,

Ryan Sternlicht (16:55):
No, this was generally the issue with NeRFs for a while. NeRFs were first done a few years ago, but they were still pretty slow. They were a lot faster than the old stuff, like the stuff that took me a

(17:18):
week to do, a NeRF would take about eight hours now.

Lisa Rein (17:24):
Yeah, the NeRFs require GPUs, lots of GPUs, right? Or

Ryan Sternlicht (17:30):
photogrammetry always has required GPUs. So

Lisa Rein (17:34):
three of the systems require GPUs? Yeah, all...

Ryan Sternlicht (17:38):
Photogrammetry, yeah. Pretty much all 3D scanning is incredibly GPU intensive, both for processing, but often it's also pretty hard for the rendering. Like, one of the places I help out at in SF, Noisebridge hackerspace, we got a really high-end 3D scan of

(17:59):
our space about seven years ago. And even on a really, really top-of-the-line $10,000 computer, it would crash that computer because the file was so big, and

(18:19):
there was so much information that it would try to render all at once that it would just crash the computer outright, every single time.

Lisa Rein (18:32):
So this makes sense. So it's actually the way that things are processed: it's being done more efficiently, in a way that's not only going to make a better model, but it's going to render better, it's going to be less susceptible to things like crashes and stuff like that. Yeah.

Ryan Sternlicht (18:48):
So this stuff is very cool. People still get kind of confused about NeRFs versus photogrammetry. NeRFs are pretty much an AI-driven, or a more advanced AI-driven, photogrammetry methodology.

Lisa Rein (19:13):
Right, that's a weird title, right? Because it's not really "versus"; the first one's a subset of the other. Yeah. So it's all photogrammetry, and then it's just whether it's literally someone walking around with a camera taking pictures or... Yeah,

Ryan Sternlicht (19:32):
So, this is actually a great image of, in essence, usually the second or third step of photogrammetry, and of NeRFs, and of what we'll talk about in just a minute. This is where you're figuring out the view perspectives of each of the images that you are then going to process. So,

(19:56):
usually photogrammetry is done with an outside-in methodology, where the object you're scanning is inside of your camera perspective. So you walk around the object, but you can flip it the opposite way and do room scans, where you're going inside to out,

(20:19):
which often gets confusing, because they use the same terminology in both VR and motion capture for different things, but they generally mean the same thing regarding what the camera perspective is. Um, so that's,

Lisa Rein (20:44):
That's part of the fun in VR a lot of times, is you're inside a large object. You know, like with the Burning Man exhibits and stuff, we were literally flying around inside the exhibits. So that would be pretty useful, to scan from the inside, you know? Or if you're making a building, you do an outside scan and an inside scan. Yeah.

Ryan Sternlicht (21:05):
Yeah. So this is actually a kind of interesting way of showing what a basic point cloud looks like. This has NeRFs and photogrammetry. Point clouds are what you generally get out of this

(21:28):
information, and point clouds are not that useful in a lot of situations currently, because they don't have any geometry. But the thing is, if your point cloud is dense enough, it will look indistinguishable from

(21:52):
geometry. But generally, it looks kind of like this. It looks very blobby, and it has no color to it. It has no reflection. It's just a matte blob. So you're losing a lot of information that those images had when you just have the

(22:17):
base photogrammetry and point cloud. But AI has really gone a long way in allowing you to get all that information back, and also infer information that you didn't

(22:37):
fully have. So the next technology that came out recently is Gaussian splatting. Or actually, wait, that

(22:59):
one's in just a sec, first this. So this is what came out a little while ago, back in August of last year, that was really, really interesting. This was taking a lot of that technology that NeRFs had, which,

(23:21):
NeRFs were still pretty slow at the time, and figuring out: what can we do instead of NeRFs that's more efficient? And that was 3D Gaussian

(23:42):
splatting. Which I'm trying to find. So, how

Lisa Rein (23:47):
is this implemented? Like, where's somebody going to see this in action?

Ryan Sternlicht (23:51):
Okay, so now, our first 3D... here we go. So they both use different types of AI models. The 3D Gaussian one is, it's pretty hard to explain in simple terms. It uses

(24:17):
a lot more parameters related to the information to process it.
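To give a concrete sense of what those extra parameters are: a 3D Gaussian splatting scene is typically stored as millions of small records like the one sketched below. The field names and sizes follow common open-source formulations; this is an illustration, not the exact format of any tool shown in the episode.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GaussianSplat:
    position: np.ndarray   # (3,)  center of the Gaussian in world space
    scale: np.ndarray      # (3,)  per-axis extent of the ellipsoid
    rotation: np.ndarray   # (4,)  quaternion orienting the ellipsoid
    opacity: float         #       how strongly this splat occludes what is behind it
    sh_coeffs: np.ndarray  # (16, 3) spherical-harmonic color, view dependent

def scene_size_mb(n_splats: int, floats_per_splat: int = 59) -> float:
    """Back-of-envelope memory estimate: 32-bit floats per splat times the count."""
    return n_splats * floats_per_splat * 4 / 1e6
```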

Lisa Rein (24:27):
Yeah, so it's using machine learning to process it. Yeah, that's fine. We don't have to go into the math. The point is, it uses math, machine learning, to process it. Now if you're in a virtual world, and the objects are around you, all the different objects, they could have all been made different ways, right? One might have been a scan, one might be Gaussian splatting,

(24:49):
one might be a NeRF. And the point is, once it's done and rendered, it's just a rendered object in the world at that point, right? Yes,

Ryan Sternlicht (24:59):
Yes, but okay. The other thing with a lot of these is, there is a bit of an issue with a lot of these. Yeah, so this actually is a great demo

(25:23):
with the steps to it. So you have your ground truth 3D model image set, and there are standard image sets, this bike is one of them. You do structure from motion, which is basically what I used to use; this is what I usually got

(25:44):
after a week,

Lisa Rein (25:45):
When you said standard image set, do you mean like "a bike is a bike is a bike," so that all the AIs will know what a bike is? Or what do you mean?

Ryan Sternlicht (25:54):
Literally an image set that everyone can use as a comparison or benchmark for their 3D modeling. So

Lisa Rein (26:11):
okay, so I think it's a 3D modeling thing.

Ryan Sternlicht (26:15):
Yeah. And this is just in general; every industry has a benchmark set.

Lisa Rein (26:21):
Right? I'm talking about the one that you're
talking about right now. Yeah.
That

Ryan Sternlicht (26:26):
One specifically for 3D scanning, okay, and for photogrammetry. So this is what a lot of people use. And yeah, here's an example of multiple different

(26:46):
systems all rendering the same bike scene using different tool sets. So there's PSNR, which is, I can't remember what the P is, but it's like signal-to-noise ratio, where in

(27:07):
this case, the higher the number, the better, the more realistic it is. And then if you look at these things, these have to do with their render time and training time and stuff. So like, Mip-NeRF, which used to... it's peak signal,

Lisa Rein (27:30):
signal. Thank you Reiter.

Ryan Sternlicht (27:33):
Mip-NeRF used to be one of the best variants of this a few months ago, and it took 48 hours to train, which is not bad, but it's not great. Then you get

(27:53):
into some of these things. Like

Lisa Rein (27:56):
What? Well, I'm sorry, what are you talking about, to train it to know what a bike is?

Ryan Sternlicht (28:00):
Oh, no, it's not going to know that. We're talking about the 3D scan. Pretty much, it took 48 hours to process this 3D scan.

Lisa Rein (28:13):
We have a bike, or whatever, yeah. So it's about a scan in general on that system: it took two days to process the scan. Yes.

Ryan Sternlicht (28:23):
Okay. Um, and give you an output with a good PSNR. Since they're using AI, it's considered training. And then you can see, like, this is the same scene, but it only took 51 minutes, and it has a better PSNR than the one that

(28:49):
took 48 hours.

Lisa Rein (28:51):
So better models faster. Yes, yeah. Yes.
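For reference, the PSNR number being read off those benchmark tables is a log-scaled measure of pixel error between the rendered view and the ground-truth photo, so higher really does mean closer to the real image:

```python
# Peak signal-to-noise ratio (PSNR) between a rendered image and the real photo.
import numpy as np

def psnr(rendered: np.ndarray, ground_truth: np.ndarray, peak: float = 1.0) -> float:
    """Images as float arrays in [0, peak]; returns PSNR in decibels (higher is better)."""
    mse = np.mean((rendered.astype(np.float64) - ground_truth.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```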

Ryan Sternlicht (28:56):
And the difference in when these two technologies released is about eight months. So that's, like, 48 times faster in six

(29:17):
months or so, which is pretty crazy.

Lisa Rein (29:20):
And so again, which two things are we comparing now, the Gaussian splatting and the NeRFing, or the...

Ryan Sternlicht (29:26):
So, NeRF and this

Lisa Rein (29:29):
splatting. Okay, so the NeRF is the fast one, right? Just checking.

Ryan Sternlicht (29:34):
The Gaussian splatting, in this case. Oh,

Lisa Rein (29:37):
in this case, is the faster one. Yes. I thought we were saying before... So it matters what you're processing, as far as which one is going to be better for that kind of model. Ah, okay.

Ryan Sternlicht (29:48):
Um, and a lot of these are very easy to run yourself. Like, you can get GitHub models for a lot of these. But the thing is, in the past month or two,

(30:09):
a lot of this stuff is changing really, really fast. So let me find the recent one. Like this one from two days ago, yeah, April 8, 2024. This stuff is, like,

(30:37):
improving every week, basically. So one of the biggest issues with Gaussian splatting, compared to NeRFs, has been a lot of blurring issues in certain parts of a scene. And just last week, there was a new proposed methodology to fix, or

(31:02):
solve, some of this problem, which is, in essence, improving processing on certain parts of the scene and slightly modifying the model for what's

Lisa Rein (31:23):
There's a little bit that smells like upscaling to me, in the sense that you're making up for information you don't have in those areas. So I know the blurry areas are where they don't have enough pictures to put it together. Now, I was literally taking pictures with our friend Bernice, with a camera, in the

(31:44):
beginning, okay. And when you got your model back, you ended up doing things two or three times because you weren't sure if you got it; you were like, oh, did I get that angle? And then you try to come up with methods of making sure you got every angle, and it was just really hard. You would get the model back and you would see what you missed, because there would be these blurry areas where you missed. So

(32:06):
are you saying that this is a way of using math to fix that missing information? Or is it actually deriving that information from the information that it has, instead of sort of trying to patch it up and make it look better? Is it actually getting the information from somewhere else in the file? Yeah, so

Ryan Sternlicht (32:26):
It's pretty much running a little better. So generally, you run an entire scene on a single model, per se. But the thing is, models are very flexible. So in this case, it's looking at the model, figuring out, kind

(32:51):
of like simply looking

Lisa Rein (32:53):
at what? What is that right now? Figuring

Ryan Sternlicht (32:56):
out which areas are having issues being rendered. And then

Lisa Rein (33:02):
it's the rendering we're talking about, yeah, okay, those images.

Ryan Sternlicht (33:07):
So, um, yeah. Yeah. So it sounds

Lisa Rein (33:16):
like both, to be honest, right? It sounds like a little bit of using the information you have in different ways with math, and then using AI to smooth it, to make it look right, what it thinks is right. And, yeah, comparing to what it thinks is right.

Ryan Sternlicht (33:35):
Yeah. So, um, but I want to talk about one that is really cool on top of this, which is SMERF, which is a modification of NeRFs specifically for real-time, large-

(33:56):
scale visualizations, because NeRFs can be kind of hard to render at times. They're called streamable memory-efficient radiance fields. And this technology

Lisa Rein (34:17):
NeRFs with a buffer, streaming NeRFs, okay.

Ryan Sternlicht (34:23):
Yeah. So let me find the right one. Ah, here we go. Here's the actual thing. So this can be running in real time on a phone, and you used to not

(34:44):
be able to run a 3D scan this nice on a phone. This was done using a similar photo set.

Lisa Rein (34:54):
You'd have to make a video out of it, yeah, to look at it on your phone. Yeah, or

Ryan Sternlicht (34:59):
No, not a video. You can walk around this space. This is a full 3D scan. You know what I'm

Lisa Rein (35:06):
saying, in the olden days, if you wanted to run it on your phone, we had to generate MP4s of the model and just look at it and be like, yep, that's my model. You weren't doing shit with it, but you could look at it. Yeah. So this is something you can walk around. This is something you can go in with a headset, or what are you saying? Yes,

Ryan Sternlicht (35:25):
Yeah. Okay. So this is pretty key. So here are some of the different well-known scenes. Let's use "laptop with strong GPU." Oh yeah, this is going to

(35:45):
take just a sec to process.

Lisa Rein (35:50):
So pretty fast, though. All right,

Ryan Sternlicht (35:56):
It's running in real time on my computer right
now at 240 frames per second.
And I can move, I can zoom in.
Can I move around in this one?
This

Lisa Rein (36:13):
is running in the browser. What is this actually running in? Yeah, yeah,

Ryan Sternlicht (36:16):
that's running in the browser. So there's a
plugin.

Lisa Rein (36:20):
There's a SMERF plugin or something for running it? Yes, in the browser? Yes.

Ryan Sternlicht (36:24):
But you can also view this in an Apple Vision Pro, any number of different tools. And the thing is,

Lisa Rein (36:35):
what's the actual file format that you're looking at?

Ryan Sternlicht (36:38):
For this one? I am not sure. I'd have to look at the back end. But yeah,

Lisa Rein (36:44):
Just look at the end of the URL and see the name of the file format? See, I can't see it because it cuts off. Yeah,

Ryan Sternlicht (36:50):
I am not sure. You can't just see it from where it's running. And so

Lisa Rein (36:59):
Sometimes if you... okay, so it could be different formats, is what you're saying. It's not like a native format. It's one of those formats that you can generate all the different formats from, that go in and out of Unity and stuff like that.

Ryan Sternlicht (37:16):
But yeah, let's see, with the laptop one, that will take minutes. So that's a

Lisa Rein (37:27):
real-time scanning. So that's sort of everything all at once in that model we just saw, right? Yeah, the error correction, all that stuff. Okay,

Ryan Sternlicht (37:39):
This one, like, the best one, has some issues running right now. But the other thing that is really cool with NeRFs and 3D Gaussian splatting is how they handle transparency and

(38:01):
reflections. Because view-based reflections are something photogrammetry could never even dream of handling. So you see, you can see the reflection of that pillow in that TV. And right now, it's a bit wonky. But when things

(38:27):
are processed correctly, you can have view-dependent reflections of other things in the scene. So

Lisa Rein (38:35):
And this gets into the real-time physics that, yeah, are built in now to some of these scans, right? That's new. Yeah, that's an AI thing. That wasn't there before.

Ryan Sternlicht (38:45):
Yeah. Tell us about that. Yeah, so the model base is able to understand a lot more of, in essence, photonics, like light interactions, like physics, which, it's built

Lisa Rein (39:07):
in, to the point where the objects are identified or something, so they know how to reflect? I mean, how do you enable that, depending on what system you're using? Yeah,

Ryan Sternlicht (39:18):
Like here, this is a really cool one. Look at this wineglass. An old 3D scan would never be able to handle doing this at all. This wineglass would just be an opaque object.

Lisa Rein (39:39):
You're saying it literally couldn't scan transparent objects, or it was a process...

Ryan Sternlicht (39:45):
It couldn't tell that it was transparent, whatever

Lisa Rein (39:48):
It didn't come out right, and he couldn't do it. You would be like, where's my glass? Okay. Yeah, I remember that, because you would have to make your glass, you know, blue, or give it some transparent-like color, so you knew it was a glass. But I didn't realize that it couldn't do transparency. I thought that

(40:12):
was just an artistic choice at the time. I didn't realize it, you didn't think about it, nothing's transparent. Um, yeah, so this real-time error correction for these models, this is a good time to talk about how it goes into 3D printing, because there's the connection. I think when the revolution light went on in my head, it was the

(40:35):
combination of the real-time physics and things like that for the 3D models, and then the ability to get all that information back out and into the meatspace world, if you wanted versions of your objects that you could hold in your hand, and then they're printed out faster and better than ever. So tell us...

(40:56):
Oh, yeah, yeah.

Ryan Sternlicht (40:57):
So when you have tools this powerful, you can use them for so much, because a lot of people have wanted to be able to create things a lot more easily. 3D printing has come a long way compared to what it used to be. But it

(41:24):
generally had a lot of issues with speed, accuracy, and cost. But now it is getting to the point where all of those are not really an issue anymore. With speed, during the pandemic,

(41:45):
actually, a bunch of 3D printing people were like, we have good quality now, but it's slow. Why don't we all try to do things faster? And they did. So the standard 3D printing benchmark

(42:05):
object, called the Benchy: a normal 3D printer in 2020 took about an hour thirty to print a decent one. Then people were like, let's print this as fast as possible. So they created a

(42:28):
thing called the speedboat race, to see who can print this fastest. And right now, the world record is about two minutes to do this 3D print, which is insane. The problem is the

(42:50):
quality is kind of lost. But the algorithms they've used to get this fast actually mean that the quality version, instead of being an hour and a half, now it's about 30 minutes, on everyday home 3D printers, not on high-end machines, right?

Lisa Rein (43:14):
The error correction, yeah.

Ryan Sternlicht (43:17):
A lot of this error correction for the bending of the 3D printer is, in that sense, machine learning, like literally running an algorithm on the data of the 3D model to understand the literal physics of how the machine is

(43:39):
going to be moving. And correcting for momentum is the big thing. The 3D printer is moving so fast now that it can overshoot.
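A toy sketch of the momentum-compensation idea: estimate how far the toolhead, moving at a given speed, will coast past a commanded corner, and pull the target back to compensate. This is purely illustrative; real fast-printing firmware relies on more sophisticated techniques (input shaping, pressure advance, and the like).

```python
# Illustrative momentum correction for a fast-moving toolhead.
import numpy as np

def compensate_overshoot(waypoints: np.ndarray, speed: float, max_decel: float) -> np.ndarray:
    """waypoints: (N, 2) XY targets in mm; speed in mm/s; max_decel in mm/s^2."""
    overshoot = (speed ** 2) / (2.0 * max_decel)   # braking distance past the corner
    corrected = waypoints.copy()
    for i in range(1, len(waypoints)):
        seg = waypoints[i] - waypoints[i - 1]
        dist = np.linalg.norm(seg)
        if dist == 0:
            continue
        # pull the commanded corner back along the travel direction so the head,
        # carried by momentum, lands closer to where we actually wanted it
        corrected[i] = waypoints[i] - (seg / dist) * min(overshoot, 0.5 * dist)
    return corrected

square = np.array([[0, 0], [50, 0], [50, 50], [0, 50], [0, 0]], dtype=float)
print(compensate_overshoot(square, speed=300.0, max_decel=5000.0))
```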

Lisa Rein (43:59):
Interesting. So actually, while the nozzle is going from one dot to the next dot, that's a thing that has properties and might make a boo-boo, like, while it's moving over something, depending on the shape of the object, right? Like when you watch one of these things get made, you know, and you're seeing it literally, it's like a

(44:22):
little liquid laser doing it, you know. Well, it depends on what kind of printer it is. But the point is they can actually predict, depending on the object, where those errors are going to be when printed fast, and then adjust for it in the processing. But still, it's interesting.

(44:42):
So they have to do all that, they have to fix all that stuff before it gets to the printer, because by the time it's being printed, it's just gonna come out like it is. So was it machine learning that they used to do that error correction? How did they invent... So the

Ryan Sternlicht (45:01):
There are different methods used for that. It depends a lot, so it's different stuff. Okay? Yeah. So like, here's an example of a two-minute Benchy. The problem is, you won't be able to actually see this thing printing at actual speed,

(45:23):
because it's faster than the camera can capture, for the most part. So right now it's just calibrating itself. Now it's printing, and the camera can't actually capture how fast

Lisa Rein (45:46):
like they used to be.
Yeah.

Ryan Sternlicht (45:52):
But the amount of force on this machine is kind of crazy. Yeah, so you can see the actual real-world counter and the machine. And this is only about a three or four hundred dollar machine, with probably $400 in mods on it. Um, but yeah, so with AI processing, a

(46:19):
number of companies now have little AI algorithms kind of built into the 3D printing process pipeline to, yeah, help do this stuff faster. It is so

Lisa Rein (46:38):
strong. I'm surprised it doesn't break. I'm surprised it can hold on. Oh yeah. So that was the first two-minute Benchy there. And that happened two months ago. Two

Ryan Sternlicht (46:51):
months ago, yeah. But yeah, I'm sad that this is not running correctly. Yeah. Yeah, that's just, yeah.

Lisa Rein (47:04):
So let's move on. We're talking about 3D printed objects, and, um, basically it's interesting to me that this converts into wearables and virtual clothing, or, you know, actual meatspace

(47:25):
clothing. And that whole connection, and the error correction was a big part of that too, you were telling me, because, well, of course, if you're trying to be fashionable, if it's clothing, you know, it's got to look good, it's got to look nice. And then what's neat is you can take your real clothing, 3D scan it, bring it into your

(47:47):
virtual world. Or you can take a virtual-world object that somebody's wearing, and be like, oh, I think I want an actual jacket of that. And you can take it out and send it somewhere and have the jacket made. So that's here. That's a thing. Yeah. Tell us about the stuff you saw at GDC, you know?

Ryan Sternlicht (48:09):
Yeah. So there are a few companies working on clothing; there are like two big ones, Marvelous Designer and Style3D, both of which are just now really starting to try to do a lot more AI stuff with being

(48:34):
able to take drawn images of clothes and turn them into 3D models. At GDC, they didn't have any demos, but they did say they were trying to work on that stuff. Because for video game characters, if you want it to be like, oh, I drew

(48:57):
these really cool clothes, it would be really cool if I could put them on my character. But for a long time, you had to go through a bunch of steps to make clothes for 3D characters, or to make them such that you can actually cut and sew them in

(49:20):
real life. But the other thing beyond clothes, trying to get better avatars, has been a big deal. And that is actually one of the really cool things I saw at GDC,

(49:46):
pretty much on the very first day: some really cool stuff related to getting better avatars into video games using AI. So there are a few companies working on this,

(50:10):
but the most notable is called Deemos Tech. They've been around for quite a while. Let's see, yeah, here we go. Okay. Um, so they had a talk at GDC, on

(50:34):
the first day that I went to, that was absolutely fantastic, talking about how they got their ChatAvatar system, which I'll show you in a minute, into a game. So this game, Earth: Revival, which is a PC and mobile game, they put their

(50:58):
avatar creator in this game, to both generate NPCs as well as improve the character creator for its users. So this is all using

Lisa Rein (51:14):
Well, an LLM in the character creator. Yes.

Ryan Sternlicht (51:19):
Okay. It's actually using three different sets of AI pipelines. So there's DreamFace, which is one of the back-end parts of the ChatAvatar system; they're using some search methodology, like AI search and modification tools. And then

(51:46):
they also have an image-to-3D-model set of tools that they're working on. And, oh, I sadly don't have this game currently, but I can show some of Deemos's stuff. So the

(52:11):
reason Deemos can make such high-end avatars is because they have a background in the VFX industry of 3D scanning faces.

(52:36):
They have thousands of faces that they've 3D scanned over the years, and then they trained a model on all the faces they've scanned. They've actually trained many different models: for doing face replacement, they did facial

(52:59):
reconstruction stuff, and they have a very high-end lighting kit. So they are really good with this stuff. And they have been able to do different types of... yeah, let's see, oh, I

(53:27):
think one of the... oh, there we go. Um, being able to, let's see.

Lisa Rein (53:47):
So you can use their tools, and give them a picture, and it will create this model, and then you export it back out and import it into whatever you want it for, right?

Ryan Sternlicht (53:59):
Yes, but they've made it much easier using this tool called HyperHuman. HyperHuman is a project series, and ChatAvatar is one of them, where you can either drag an image into it. So, like, here's an

(54:26):
original image someone uploaded, which looks like it was generated using a scan system. Then you upload that image, it generates the base head shape, and then it generates the skin

(54:46):
mask for it. And so literally, you can use a text generation tool to generate an image and upload that, or you can actually just directly upload facial images of real

(55:10):
faces and do it. Or let me find one of the text-based ones.

Lisa Rein (55:18):
They let you give some prompts too. I was messing around with it a little bit. So you give it your picture, and then you give it the prompts. And I couldn't get it, I got weird models out of it, but I was giving it weird pictures. Yeah, it was at an angle and the lighting was weird. So you really do need a photograph from the front,

(55:41):
that shows you straight on, to get a good result. But then there is... yes. Or you

Ryan Sternlicht (55:47):
can also just use text. So like, if I generate from text, let's see.

Unknown (56:05):
Oh, man

Lisa Rein (56:19):
Let's see, this is great. This is actually creating a 3D model just from text, right?

Ryan Sternlicht (56:25):
Yeah. Let's see. Oh wait, I forgot, it requires sign-in and I can't do it at the moment. But the other project they have right now, which is only their early beta, right now it's only a multimodal 3D search

(56:46):
engine. It works pretty well; you can upload an image and it will search all the different web 3D databases to find what type of model you want. But the thing is, this is their point-

(57:09):
one beta. At GDC they showed their 1.0, which is a 3D model generative AI, which is something that people have wanted for a very long time. So Deemos can show

(57:33):
you different things. So literally, you upload an image, and then it will generate a 3D model of that object that you can then immediately export and, like, open. So that was, in less than a minute, taken from an

(57:57):
image to a real 3D model. And they are able to do this really, really easily. Right now they have a closed beta, but they're hoping to open this up in the near future. And yeah, so

(58:19):
I also wanted to show, those ChatAvatar things you saw earlier, those were just the skin meshes, when they are outside of something being used. So... oh wait, it wasn't the

(58:43):
whole

Lisa Rein (58:45):
avatar, it was just, yeah, that was just the skin.

Ryan Sternlicht (58:49):
Yeah, you can attach it, in Unity, Unreal, or Blender, to actual 3D models. And then you can have them fully rigged, so facial rigging, and after you do that you can

(59:11):
use ARKit or any number of other real-time facial capture tools to convert it and allow you to use it. So this pipeline used to be incredibly complex, but now there are automated rigging tools for rigging faces.

(59:33):
There are things like ARKit and different types of tools for facial motion capture, and facial motion capture has gone through a lot in the past three years or so. A big part of this

(59:55):
actually is due to VTubing and different people wanting to use facial motion capture in real time. It all actually started about 10 years ago with a thing called FaceRig, which was, like, a Steam app you could use that allowed you to do

(01:00:21):
webcam-based facial tracking and apply it to an avatar. Then when Apple released the iPhone X and the facial unlock system, that tool set turned out to be very, very good for facial motion

(01:00:45):
capture. So many people started playing around with, hey, can we use an iPhone to do facial motion capture? And yes, and they actually use it in Hollywood at this point, literally just an iPhone on a head rig. That

Lisa Rein (01:01:10):
Yeah, and AI is helping that out, right? I mean, yes.

Ryan Sternlicht (01:01:16):
Based on different... like, AI helps identify the features of the face, and then focuses on tracking those features in real time. And on that subject, there was actually

(01:01:36):
another really cool thing at GDC, which was real-time markerless motion

Lisa Rein (01:01:51):
capture. Great. Yeah, that was the last thing I wanted to make sure that we covered. Yep,

Ryan Sternlicht (01:01:57):
Yeah. So I got to do a demo from a company called AR51 of markerless capture, and I'll be sharing that with all of you later on.

(01:02:18):
But it was incredible. So, a little bit of background: full-body motion capture, like where you're moving around, that is very hard to do, because cameras generally don't have a good

(01:02:39):
sense of depth. You need a lot of information from cameras, and for a long time the easiest way to do this, processing-wise, was to use markers on your body that the cameras could easily track. So if you've ever heard of Vicon or

(01:03:03):
OptiTrack, those are the two big companies that do this. So

Lisa Rein (01:03:12):
When you're talking about markers, are you talking about like a body suit? Gloves? Yes, yeah. So that the camera can see it and know where the different parts of your body are, like how the VR headsets work?

Ryan Sternlicht (01:03:25):
Yeah. So yeah, OptiTrack is probably the most well known one. And they are expensive. Like, very expensive, as

Lisa Rein (01:03:40):
The suits are expensive too. Oh yeah. Yeah, all the pieces are expensive.

Ryan Sternlicht (01:03:47):
Yeah. I don't even remember if they... Yeah, so like, the suit without anything is

Lisa Rein (01:03:55):
Without the sensors? Yes. Basically.

Ryan Sternlicht (01:04:00):
Just the fabric you are putting on, and then putting sensors on it, is already three hundred dollars. Then gloves, then you have to actually get the markers. The markers are these little balls, see?

Lisa Rein (01:04:26):
Yeah, and the more the merrier, right? The more points you have, the better it will see you. Yeah,

Ryan Sternlicht (01:04:32):
And you need these marker bases on the suit. So it adds up.

Lisa Rein (01:04:38):
On the suit? Where does that go, that thing? Something like this sticks

Ryan Sternlicht (01:04:43):
on the suit, usually using Velcro. Then you put one of these balls on that little post, every single one, so

Lisa Rein (01:04:53):
every ball gets a post?

Ryan Sternlicht (01:04:57):
Every one of them, and it adds up to a lot,

Lisa Rein (01:05:00):
So really, you should just be able to 3D print all that stuff, right? You 3D print stuff. Yeah.

Ryan Sternlicht (01:05:07):
Oh, the balls? You can't; these are retroreflective things. No, yeah,

Lisa Rein (01:05:12):
Just Velcro it on, why would you have to put it on that stand? The whole thing's ridiculous. We don't need it anymore, right? So we got markerless. Yeah. Like,

Ryan Sternlicht (01:05:23):
We're talking, a good setup with this is anywhere from like fifty thousand to, like,

Lisa Rein (01:05:34):
for like one person or two people or something, if you're actually trying to do it. And Raider's letting us know that for the best tracking, you still need it. Okay. Yeah, but anyway, tell us about the markerless thing, because it's pretty cool, what you were showing me. Yeah, darn good.

Ryan Sternlicht (01:05:51):
Actually, the funny thing is, proper SteamVR trackers, like on your headset, actually have higher tracking accuracy than these do, if you do everything correctly. And if you want higher than, like, two

(01:06:14):
millimeters of precision, you're going to have a bad time, no matter what tech you're using. But one of the companies at GDC, AR51, was demoing markerless motion

(01:06:35):
capture, which they were doing just using similar-ish, like, high-frame-rate cameras. But the big thing is they're using AI to help track where you are, so that they can then render it in different

(01:07:01):
ways. And the thing is, you can apply this really easily and it's very cost efficient. It's more expensive than IMU-based motion capture. So, an IMU-based

(01:07:26):
motion capture suit is generally about two to $5,000, but that's per suit, and you need one suit per person. And they have an

(01:07:46):
inherent flaw of drift.

Lisa Rein (01:07:58):
that's a basketball game.

Ryan Sternlicht (01:08:01):
Drift. Yeah, so let me find it. Yeah, so it's

Lisa Rein (01:08:10):
really funny when it happens. It's like somebody's avatar just kind of floats away. Yeah,

Ryan Sternlicht (01:08:16):
Yeah. People have built IMU-based systems before, and they're like, oh, drift won't be a problem. Then they get their algorithm running, and within three seconds their hand just floats off, and it's like, gone. And

(01:08:37):
it's like, yeah, it's a problem. So that

Lisa Rein (01:08:40):
would be the system not understanding, in the 3D space, exactly where their avatar hand should be, right? It

Ryan Sternlicht (01:08:49):
has to do with the Earth having this thing called a magnetic field. And most objects in modern everyday society have these things called electronics in them. IMUs use a magnetometer to do the full six-degree-of-freedom tracking: they use an accelerometer to do roll, pitch, and yaw, then a magnetometer to do X, Y, Z translation. And the thing is, that magnetometer gets affected by any magnetic field.
and yaw, then a magnetometer todo X, Y, Z translation. And the
thing is, that magnetometer getsaffected by any magnetic field.

Lisa Rein (01:09:29):
So after all that, the Earth's gravitational force is just going to toss it all out the window. Anyway,

Ryan Sternlicht (01:09:37):
I mean, generally you can correct for
the earth, but you can be

Lisa Rein (01:09:41):
near a big magnet. And we have a magnet on one side of the house. Yes.

Ryan Sternlicht (01:09:48):
Or if your house has metal in it that attracts a magnet, because the magnetometer uses a tiny little, incredibly sensitive magnetic sensor. So

Lisa Rein (01:09:59):
Is there any way to fix it? I don't get how you're supposed to fix it, though. We can't get rid of the Earth's magnetic field. That's,

Ryan Sternlicht (01:10:07):
There is a thing called a Faraday cage. But the problem is,

Lisa Rein (01:10:15):
You can't do your VR. What are you going to do, go into a very big cage every time? Yeah, good luck with that. That's, yeah, not a solution. But yeah.

Ryan Sternlicht (01:10:27):
But you could theoretically use multiple precision magnetometers, where they are referencing each other's local fields. But you get into all these extra

Lisa Rein (01:10:42):
things. Like, you just want to take it out of the picture. You just don't want to have to rely on being able to pick up on it. Yeah,

Ryan Sternlicht (01:10:48):
You want to take it out of the picture. You don't want to wear a suit with all these trackers. So that's where markerless is such a big deal. So like,

Lisa Rein (01:11:02):
So tell us about... you actually were in the booth at GDC, and you were talking about how they were basically tracking you from the moment you walked into the booth? Tell us a little bit. You have a video to show. Did you bring your video?

Ryan Sternlicht (01:11:15):
I I needed. I

Lisa Rein (01:11:19):
will link to it in the description later. Okay. But yeah, let us know. Tell us more about that. That sounded

Ryan Sternlicht (01:11:25):
really... Yeah. So it was quite amazing. They had a booth set up with, like, 14 cameras, I think it was. Each of these cameras is only about 800 to $1,000. All those were plugged into a computer on the ground in the

(01:11:48):
corner. Wait,

Lisa Rein (01:11:49):
how many do you need to do this? You

Ryan Sternlicht (01:11:53):
can scale up or down, depending on how well you want stuff tracked. You can

Lisa Rein (01:12:00):
Well, the average person is maybe going to be able to buy maybe three of them at the most, right? Three or four?

Ryan Sternlicht (01:12:07):
At the cameras' current price, yes. So

Lisa Rein (01:12:12):
the price will go down hopefully.

Ryan Sternlicht (01:12:14):
Oh, yes. Very, very fast.

Lisa Rein (01:12:18):
Probably you need at least three, or what are you thinking? I mean, the question is three; when it had the sensors, it wanted three of them.

Ryan Sternlicht (01:12:29):
So these AI algorithms are getting good enough that you can use a single webcam and a mirror. So then

Lisa Rein (01:12:38):
you don't need an $800 camera, you just need an $8 camera. Okay,

Ryan Sternlicht (01:12:43):
That is just if you want a really professional setup that's tracking at high frame rate and has, like, sub-millimeter, like millimeter, accuracy. So the

Lisa Rein (01:12:55):
AR51 system has that, yes. Okay. But

Ryan Sternlicht (01:12:59):
And it scales, but you can use, like, four cameras for what a normal person might have at home, which might be a 10-by-10 area.

Lisa Rein (01:13:12):
Probably four cameras. So right now that would still be, you know, a few

Ryan Sternlicht (01:13:17):
thousand dollars. Yeah. But the thing is,

Lisa Rein (01:13:20):
It's the webcam-mirror thing you're talking about. So right

Ryan Sternlicht (01:13:24):
now, part of the reason using AI for this stuff is so great is these algorithms are changing just as fast as all the others. So everyone in VR has wanted easy, cheap, full-body motion tracking for years. And about a year

(01:13:45):
ago, someone was like, let me just point a webcam at the mirror, and then run an AI algorithm to figure out what the person is doing. And it worked. And then some people have played around with having a few webcams or a few mirrors to get better 3D positional

(01:14:08):
accuracy. But everyone... What software

Lisa Rein (01:14:15):
is it using at that point, with the mirror and the webcam? What software is it using, if I was going to go set it up right now in my house?

Ryan Sternlicht (01:14:24):
It's usually, like, you go on GitHub. It's usually, I think, Python, or it's just a little app running that is then sending data to whatever motion capture thing it likes. It's sending it to either VRChat, Unity, or Unreal, which

(01:14:47):
all of those can pick up and

Lisa Rein (01:14:50):
to make an object out of it, too. Yeah, yeah. Okay,
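A minimal sketch of the kind of DIY script being described: one webcam, an AI pose estimator, and a small Python program streaming the result onward over the network. The library choices (MediaPipe for pose, python-osc for the messages) and the OSC address are assumptions for illustration, not the specific GitHub projects Ryan has in mind.

```python
# Webcam -> AI pose estimate -> OSC messages that a VRChat/Unity/Unreal listener can consume.
import cv2
import mediapipe as mp
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # assumed local listener address/port
pose = mp.solutions.pose.Pose()
cap = cv2.VideoCapture(0)                     # the single webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        # send each body landmark's normalized x, y, z as one OSC message
        for i, lm in enumerate(results.pose_landmarks.landmark):
            client.send_message(f"/tracking/landmark/{i}", [lm.x, lm.y, lm.z])
cap.release()
```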

Ryan Sternlicht (01:14:54):
Um, but the thing I really loved about the AR51 thing is it just immediately starts tracking you. And it can track as many objects as it can really process. They talked about that on the first day; they had a little event

(01:15:16):
there, and it was tracking 20 people in their booth.

Lisa Rein (01:15:21):
Did it show you the model that was being generated
from that? Yes,

Ryan Sternlicht (01:15:25):
Yes, it was. That's the other cool thing: they were showing you in real time what it was seeing. The thing is, as more people go in, the processing increases a bit,

Lisa Rein (01:15:40):
oh, the quality doesn't go down.

Ryan Sternlicht (01:15:43):
The frame rate goes down. So if you have, like, 14 cameras and a decent computer setup, like, we're still talking a five-thousand-dollar computer, but it's, or

(01:16:03):
more if you really want to go for it. But generally, they said that their setup currently, with their 14 cameras, was running at nine milliseconds latency for tracking the entire area, which was a 10-foot by 20-foot area. And they said

(01:16:29):
it doesn't increase past, like, 12 or 13 milliseconds until you have, like, six or seven people. Which, for VR, the threshold for good movement is 11 milliseconds, which is 90 frames per second.

(01:16:51):
Nine milliseconds is right around 120 frames per second. And if you're ever doing something like live streams on YouTube, you only need 60 frames per second, which means you can have, like, I can't remember if it's

(01:17:14):
like 16 or something milliseconds of latency. It's

Lisa Rein (01:17:20):
But you're saying the frame rate can be affected. What I meant by quality was just not looking as good; the frame rate affects how it looks. Yeah. That's a live scan with 20 people, but that was with the 14 cameras, right? Yes, yeah. So this is new, though. The price hasn't had a chance to go down yet.
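(For reference, the latency and frame-rate figures quoted above are two views of the same quantity: frame time in milliseconds is 1000 divided by frames per second.)

```python
# 90 fps -> 11.1 ms, 120 fps -> 8.3 ms, 60 fps -> 16.7 ms per frame
for fps in (90, 120, 60):
    print(f"{fps} fps -> {1000 / fps:.1f} ms per frame")
```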

Ryan Sternlicht (01:17:43):
Yeah. And it will, both the price and the processing requirements, because as these algorithms improve, you can do it on lower and lower end machines. And as cameras become cheaper and better, you can do it on lower

(01:18:03):
and lower end cameras. And those cameras are going to be better cameras than what they use today. So right now, like, a lot of people still have webcams like this. I'm using a Logitech BRIO 4K webcam, which is okay. But

(01:18:27):
this camera came out 10 years ago, I think, at this point, or nine years ago, and up until very recently it was one of the only 4K webcams, and I'm not even running it at 4K. I'm running it at 1080p60 with HDR. All

Lisa Rein (01:18:48):
right, but it doesn't really do 4K. It doesn't really stream 4K. Yeah, if

Ryan Sternlicht (01:18:54):
But, yeah, in the past couple of months a company called OBSBOT released a thing called the Tail Air, which is a 4K PTZ webcam. PTZ means it's robotic. It

(01:19:17):
can follow you around; it's pan-tilt-zoom. And that thing is amazing. It's about $500, but it's literally better than PTZ cameras that conference centers pay five to $6,000 for,

(01:19:37):
and it's tiny, and it's about $500, and it also has AI tracking built into it. It'll follow you around, and it also has gesture-based control, so you can turn it on or off, have it

(01:20:00):
track, have it switch scenes, do different things. I plan to get one, probably, because it's so cool. And using AI for all these camera things is very useful, both for making your

(01:20:25):
conference or, like, podcast setup better. But this stuff is also applying to the very high-end cameras that they are currently using in motion capture. And it's quite amazing, the level. So

(01:20:51):
let me show a little bit of AR51's markerless motion capture. So, to give you an idea of what's going on: these two people are in VR. You can, on the bottom right

(01:21:12):
screen, see the perspective of one of the people, and on the bottom right see kind of a world perspective. And they're playing rock paper scissors, and it's tracking them in real time without them wearing anything. So the finger tracking is being done by

(01:21:41):
not the Vision Pros, they're wearing Quests there, but their movement, their real-world positioning, was being done using the cameras around. So AR51 has, let me find their cool

(01:22:09):
video of it running. So these two people, they're in a capture studio, and this capture studio looks kind of crazy as a VR space, but down on this TV, or laptop actually, oh, same, I

(01:22:34):
know that laptop there, that's what is going on in the scene. And you will see in a sec, they're going to get up, and it's still tracking them as they move around without them wearing

(01:22:57):
anything. And for it to be this good was basically impossible a few years ago. Probably even a year ago, it would have been substantially more expensive than it is now. All right, and you can see people walking into the

(01:23:27):
booth, and they're all being tracked. So that is pretty amazing, to just walk on and off and be tracked. And for what seems like an absurdly high-end rig of this

(01:23:51):
technology, it would only be about $20,000 to $40,000, which is a lot less than marker-based motion capture. And the thing is, that also means if you're in, like, the game dev industry, or any industry that just needs to rent this

(01:24:14):
stuff or rent a studio, the studio rental prices will be much lower, because very few people need these setups at all times. So most people rent them, and a lower-cost setup means they probably will have a lower rental cost, as well as you don't

(01:24:40):
have the suits to worry about and the suit cost. Literally, once you have it, no more costs at all. And, as algorithms get better, you don't really even need to upgrade for a very long time. It just gets

(01:25:04):
better

Lisa Rein (01:25:07):
and better with software. Yeah.

Ryan Sternlicht (01:25:09):
Yeah, that's like the crazy thing at this point: a lot of this hardware doesn't really need to improve. It's just letting the software improve.

Unknown (01:25:20):
Yeah. Yeah.

Lisa Rein (01:25:22):
Well, Ryan, thank you so much. I'm looking forward... What is the next thing coming down the pike, do you think, that you're excited about? Before we... Hmm.

Ryan Sternlicht (01:25:35):
So the next conference I'm going to go to is Display Week, in mid-May in San Francisco, which is the big display technology conference. That is where a lot of the big display manufacturers show a lot of their research. So these are things like TVs and phone

(01:25:58):
screens and stuff, things that aren't coming out for five years.

Lisa Rein (01:26:06):
So what are you looking forward to seeing specifically? I'm trying to find a specific new thing that's coming that you might see there.

Ryan Sternlicht (01:26:16):
I'm hoping to see a lot more, like, view-independent 3D displays. Like, we're getting close to a point where real-time holography, and certain types

(01:26:42):
of three-dimensional viewable content, are going to be much easier to display and will actually look good. It won't be like wearing the blue and red glasses at the cinema; it will be view-independent, which means you don't have to be in a

(01:27:02):
specific spot for the effect to work. And a lot of that has to do with processing and light field things, as well as a lot of upcoming VR displays. Last year when I went, I found a

(01:27:23):
bunch of different technologies where I was like, I want to smash these all together, because they...

Lisa Rein (01:27:30):
You get your wish. Yeah, that's maybe what's happening: a lot of things are kind of combining, or there are ways of doing that. So

Ryan Sternlicht (01:27:39):
Having better displays is something very important for VR. I mean, if any of you tried the Apple Vision Pro, it is nice, but it also still fails a number of visual tests, mainly brightness. And that's been a big issue with VR, and actually just displays in

(01:28:01):
general. If anyone, even with a really nice phone, has ever brought their phone out in, like, direct sunlight and tried to view anything, they're like, oh God, I can't see this. And that's because our displays can't get bright enough, even though our phone screens are actually better than most.

(01:28:24):
That's why I put

Lisa Rein (01:28:27):
your jacket around your phone. Yeah. Okay, great. Well, thank you so much, Ryan, for coming on. It's been a great show. Everybody, remember to subscribe, and we'll see you soon. Oh yeah, and thanks so much for coming on the show, Ryan. Really appreciate it.

Ryan Sternlicht (01:28:48):
I'll be sharing all the links to all of this stuff shortly. Yes.

Lisa Rein (01:28:53):
Ryan has spreadsheets, basically, that go with the shows, just to make sure to have direct links to everything that was covered. And we'll have that up in the next probably 24 hours. All right. Thank you so much, everybody. Sweet dreams.

Ryan Sternlicht (01:29:07):
Thank you.