Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Silvia Sellan (00:04):
The key idea is we extend this very well known algorithm called Poisson surface reconstruction.
We give it a statistical formalism and study the space of possible surfaces that are reconstructed from a point cloud.
Welcome to Talking Papers, the podcast where we talk about papers and let the papers do the talking.
(00:25):
We host early career academics and PhD students to share their cutting edge research in computer vision, machine learning, and everything in between.
I'm your host, Itzik Ben-Shabat, a researcher by day and podcaster by night.
Let's get started.
Itzik (00:43):
Hello and welcome to Talking Papers, the podcast where we talk about papers and let the papers do the talking.
Today we'll be talking about the paper Stochastic Poisson Surface Reconstruction, published at SIGGRAPH Asia 2022.
I am happy to host the first author of the paper, Silvia Sellan.
Hello and welcome to the podcast.
Silvia Sellan (01:01):
Hi.
Itzik (01:02):
Can you introduce yourself?
Silvia Sellan (01:03):
Yes. I'm Silvia Sellan.
I'm a PhD student at the University of Toronto. I'll be finishing up in one year.
Itzik (01:10):
Excellent.
And who are the co-authors of the paper?
Silvia Sellan (01:13):
This is a joint work with my advisor, Professor Alec Jacobson from the University of Toronto. And that's it.
Itzik (01:18):
All right, so let's get started in a TLDR kind of format, two, three sentences.
What is this paper about?
Silvia Sellan (01:26):
This is about quantifying the uncertainty of surface reconstruction from point clouds.
So there's this classic algorithm called Poisson surface reconstruction that takes a point set and outputs an implicit representation of a surface.
We take that algorithm and generalize it to give it a full statistical formalism.
Itzik (01:44):
So what is the problem that the paper's addressing?
Silvia Sellan (01:47):
The overarching problem is surface reconstruction. So you get a point cloud as input, which can be the output of a 3D scanner or a LiDAR scanner or something like that, and you want to recover a fully determined surface.
So you can imagine that you're a car driving down the street, an autonomous car driving down the street.
You scan your surroundings using some LiDAR scanner, and you want to know what they look like so that you know that you're not
(02:08):
crashing into anything.
Traditionally, if you ask any computer graphics researcher, they'll tell you the easy way of doing it is by using a thing called Poisson surface reconstruction.
This was an algorithm published in 2006 that takes that point cloud and outputs an implicit function.
So, something that tells you in or out for any point in space.
(02:29):
However, it only gives you one possible implicit function.
And of course, recovering a fully determined surface from a point cloud is an underdetermined problem, right?
There are many possible surfaces that could interpolate the points in the point cloud.
Instead of just outputting one, we extend Poisson surface reconstruction and we output every possible surface with a
(02:52):
specific probability.
So, every possible surface that could be reconstructed from a given point cloud, with different probabilities.
Itzik (02:58):
Okay, so essentially, given a point cloud as input, you could find multiple ways to connect between the points, right?
So finding the surface that these points were sampled on, that's the big question that everybody wants to solve.
And you're saying, well, there's an infinite number of surfaces that could theoretically go through these points, especially if there's like a gap in the point cloud.
Silvia Sellan (03:17):
That's right.
That's right.
Itzik (03:19):
And the Poisson surface reconstruction method basically says, well, there's only one. Here you go. That's what my output is.
And your method is saying, well, there could be other options.
Silvia Sellan (03:29):
That's exactly
it.
That's exactly it.
And in a way, we interpreted Poisson reconstruction as giving you the most likely output under some constraints, under some prior conditions.
But sometimes you don't want just the most likely, right?
You can imagine if you're the one that's driving the car, the one collecting the point cloud: you want to know, okay, I won't crash into anything.
(03:50):
Not just under the most likely reconstruction, but under 99% of the reconstructions, so that you keep driving, right?
So we quantify that uncertainty of the reconstruction.
Itzik (04:00):
Right.
So this kind of ties to the next question: why is this problem important?
And I think the example of autonomous driving is one of these amazing examples, where you say, well, I don't wanna find out about the collision after I collided. I wanna know beforehand.
Silvia Sellan (04:14):
That's right. That's a great example.
I was recently talking to some people about this work, and they work in automated surgery.
So they were also telling me that sometimes you want to be very, very sure that you're not cutting through a nerve.
So you want to be absolutely sure of what your nerve looks like.
And apparently in some software they do use a Poisson surface reconstruction algorithm.
(04:35):
So this would be yet another example of a situation where you really want to quantify the uncertainty, cuz you don't wanna paralyze someone.
Itzik (04:42):
Super interesting.
So now we know why this is useful, but what are the main challenges in this domain?
Silvia Sellan (04:48):
Well, the main challenge is that it's not especially hard to quantify the uncertainty of reconstruction in general, to devise an algorithm that takes a point cloud and would give you an uncertain surface.
The problem is that we already have this other algorithm called Poisson surface reconstruction that combines many of the good things we would want in a surface reconstruction algorithm, and
(05:10):
they also have very good, efficient code online, so almost everyone who is doing point cloud reconstruction is using Poisson surface reconstruction.
So the challenge was not to just devise some other algorithm, but to generalize this one.
So we needed to really understand Poisson surface reconstruction and give it a new statistical formalism.
(05:30):
That meant, for me, the main challenge was that I'm a graphics or a geometry researcher. I'm not a statistics researcher.
So it meant familiarizing myself with a lot of statistical learning literature, so that I would understand where the statistical formalism comes in.
And that was mostly theoretical. Our paper is mainly theoretical, and that was the main theoretical challenge that we struggled with.
(05:52):
It took a couple of weeks over last year's winter Christmas break to really understand where we could plug the statistical formalism into Poisson surface reconstruction.
Itzik (06:05):
Okay. Can't wait to hear more about that in the approach section, but before we go down to that, let's talk a little bit about the contributions.
So what are the main contributions of the paper?
Silvia Sellan (06:14):
Well, the main contribution, like I said, is we give a statistical formalism to Poisson surface reconstruction.
That's the one sentence version. The two sentence version would be that usually Poisson surface reconstruction gives you just one value of an implicit function.
We extend that, and instead of one value we give a mean and a variance that fully determine a Gaussian
(06:35):
distribution of what the value at that point is.
I know that people in your field use this term, coordinate network. This is not a network, but it's kind of a coordinate function that implicitly defines a surface.
Think of it as quantifying the variance of the output of a coordinate network.
Itzik (06:52):
So I'm super excited to get down to what the approach is doing, but before we do, let's talk a little bit about the related works.
So if you had to name two or three works that are crucial for anyone coming to read your paper, which ones would those be?
Silvia Sellan (07:06):
Well, the most obvious one is Poisson surface reconstruction by Kazhdan et al. That's a 2006 Symposium on Geometry Processing paper.
That's the main work that we're extending.
So, you know, we give a summary in our paper, so you could read our paper without reading Poisson surface reconstruction, but that's the main work that we build
(07:26):
on.
So definitely that's the most important one.
Then we use Gaussian processes. I'll explain this later, but we use Gaussian processes to formalize this statistical understanding.
So, two of the ones I would most recommend someone read are, first, Gaussian Processes for Machine Learning, which is a book, and also there's Geometric Priors
(07:50):
for Gaussian Process Implicit Surfaces by Martens et al.
This is a paper that basically uses Gaussian processes for the specific case of surface reconstruction, and it's written more for a graphics audience.
So it was easier for me to understand than one of these Gaussian Processes for Machine Learning, more general papers.
(08:11):
So if you come from a graphics background, read Geometric Priors for Gaussian Process Implicit Surfaces.
Itzik (08:17):
Okay, excellent.
I will be sure to put links to all of those relevant references in the episode's description.
Personally, I think that any researcher working on surface reconstruction has to read the Poisson surface reconstruction paper, like it's a must read.
So it's time to dive deep into the approach. So tell us, what did you do and how did you do it?
Silvia Sellan (08:39):
Well, we combine Poisson surface reconstruction with this concept called Gaussian processes.
I'll be careful in how I explain this, cuz I know that most of your audience is from machine learning, not necessarily graphics.
Basically, you need to understand both things to understand our approach.
And our approach relies on one very specific interpretation of Poisson surface reconstruction, and on one very specific interpretation
(09:02):
of Gaussian processes.
And then we put those together.
So what we did is we went through Poisson surface reconstruction and we interpreted it to work in two steps.
So basically, Poisson surface reconstruction takes a point cloud as input. That point cloud is oriented, so it comes with a bunch of normal vectors.
The first step of Poisson surface reconstruction is to take those vectors and interpolate them into a full
(09:24):
vector field that's defined everywhere in space. So that's step one.
Step two is that they then solve a partial differential equation to get an implicit function whose gradient field is that vector field.
So basically, step one, you go from a discrete set of points to a vector field, and then step two, you go from a vector field to an implicit function.
(09:44):
That's basically all you need to understand about Poisson surface reconstruction.
And of course, that PDE that you solve is a Poisson equation. So that's why it's called Poisson reconstruction.
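As a quick aside for readers who want the equation behind that second step: in the standard formulation from the original paper, one looks for the implicit function f whose gradient best matches the interpolated normal field V, and the optimality condition is a Poisson equation.

```latex
% Step two of Poisson surface reconstruction: fit an implicit
% function f whose gradient matches the interpolated normal field V
\min_{f} \int_{\Omega} \left\lVert \nabla f(x) - V(x) \right\rVert^{2} \, dx
% whose Euler--Lagrange (optimality) condition is the Poisson equation
\Delta f = \nabla \cdot V
```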
But really, the part we care about is that step where you go from an oriented point cloud to a vector field.
We noticed that that step can be seen as a Gaussian process.
So, what is a Gaussian process?
(10:06):
Basically, a Gaussian process, for your audience, is just a way of doing supervised learning.
But just in case someone from graphics is listening to this and is wondering what that is: it means that you want to learn some function that you don't know what it looks like, and you've observed it at some points, a finite, discrete set of points.
So I like to think of the function being: how long does my
(10:28):
advisor take to respond to an email, right?
So the variable is the time of day you send the email, and the response, the function that you wanna learn, is the hours it takes for him to respond.
So, you know, maybe I send my advisor an email at noon, I send my advisor an email at 2:00 PM, and I get two data points,
(10:50):
right?
But then I ask myself, well, what would it look like if I sent him an email at 1:00 PM, right?
That's a new point that I haven't considered. We call that the test point.
And the cool thing about Gaussian processes is that they tell me: well, if it took two hours for him to respond at noon, and it took five hours for him to respond at 2:00 PM, then at 1:00 PM it'll take something
(11:13):
like three hours plus minus two.
Right?
So it will not just tell me a guess for how long it'll take. It'll tell me sort of an error bar, a variance for how long it'll take.
And we can compute that mean and that variance with simple matrix algebra.
So we make some assumptions, which I'm not gonna get into,
(11:35):
to turn it into a Gaussian process.
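To make that concrete, here is a minimal Gaussian process regression sketch in Python using the email example from the episode; the kernel choice, hyperparameters, and data are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    # Squared-exponential kernel between two sets of 1D inputs.
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

# Hypothetical training data from the episode: emails sent at noon and
# at 2 PM took 2 and 5 hours to get a reply.
X_train = np.array([12.0, 14.0])  # time of day the email was sent
y_train = np.array([2.0, 5.0])    # hours until the advisor responded
X_test = np.array([13.0])         # what about an email sent at 1 PM?

jitter = 1e-8  # small diagonal term for numerical stability
K = rbf_kernel(X_train, X_train) + jitter * np.eye(len(X_train))
K_s = rbf_kernel(X_test, X_train)
K_ss = rbf_kernel(X_test, X_test)

# Posterior mean and variance at the test point: the "three hours,
# plus or minus two" style answer from the episode.
K_inv = np.linalg.inv(K)
mean = K_s @ K_inv @ y_train
var = K_ss - K_s @ K_inv @ K_s.T
print(f"predicted: {mean[0]:.1f} h +/- {np.sqrt(var[0, 0]):.1f} h")
```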
And we noticed that that step from Poisson surface reconstruction, going from a discrete set of oriented points to a vector field, is a supervised learning step.
So the vector field that Poisson reconstruction outputs is the mean of a Gaussian process where you're
(11:55):
trying to learn that vector field as the underlying function. So it bears stopping there for a second.
We noticed that the vector field from Poisson reconstruction could be understood as the mean of a Gaussian process.
So then we wondered: well, what would it look like if Poisson reconstruction had wanted to do a Gaussian process from the
(12:16):
start?
So we reinterpret Poisson reconstruction; that's what we call stochastic Poisson surface reconstruction.
So instead of just this mean, we wondered: well, if we wanted to do a Gaussian process from the start, we would not just get the mean, we would get a variance too.
So we get this sort of stochastic vector field instead of just a vector field, and we can solve the same equation that we solved
(12:39):
earlier to go from vector field to implicit function.
We can solve it again, now in the space of statistical distributions, to go from stochastic vector field to stochastic scalar field.
What does this give us?
This means that at the end we get, for each point in space, not just a value.
So traditional reconstruction would give you,
(13:01):
for this point in space, the value is 0.2. And since that's bigger than zero, that means outside.
Instead, we would give you a full distribution. So we would tell you 0.2 plus minus 0.3, and that'll give you an idea of how sure you are of that point being inside or outside.
So that's our approach.
We take Poisson reconstruction and we reinterpret it as a Gaussian process,
(13:24):
and we can output a fully stochastic scalar field as the output.
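A hedged sketch of what you can do with that output: if the implicit value at a point is Gaussian with the mean and variance the method gives you, then the probability of being inside (negative, under the sign convention just described) is a normal CDF. This is an editorial illustration, not the paper's code.

```python
# Probability that a point is inside the surface, given the stochastic
# implicit value "0.2 plus minus 0.3" from the episode. Inside means a
# negative implicit value under the convention described above.
from scipy.stats import norm

mean, std = 0.2, 0.3
p_inside = norm.cdf(0.0, loc=mean, scale=std)  # P(value < 0)
print(f"probability of being inside: {p_inside:.2f}")  # ~0.25
```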
Itzik (13:28):
This is such an interesting approach. It's really not like all of those new papers coming out that say, oh yeah, we switched some block and now it works better.
It's actually looking at the problem from a different perspective, right?
Like looking at it in the way that you now have this Gaussian process, which gives you these stochastic properties you can
(13:51):
utilize to do so much more than you could before with the traditional or classic Poisson surface reconstruction.
Silvia Sellan (13:59):
I'm glad you enjoyed it.
I can shout out Derek Liu, who I think was on this podcast a few months ago.
Before I started to work on anything related to machine learning, I asked him: how do you work on machine learning such that you're not waking up every Monday, checking arXiv to see if you still have a project?
Cuz that feels too stressful for me.
(14:21):
I don't wanna panic. I already panic enough in my life. I don't want to have that.
And he said: you just need to work on something so fundamentally different from everything that no one's gonna scoop you.
Which is classic Derek advice, where you're like: well, obviously, if I could, I would work on something revolutionary and fundamentally different, right? The problem is I don't.
But this paper felt like that, in
(14:42):
the sense that, you know, Poisson reconstruction has been around for 15, 16, 17 years.
There aren't many people that are working on statistically formalizing Poisson reconstruction.
So it felt like a nice new paper to write, one that I didn't need to be anxious about being scooped on.
So that's kind of the reason.
Itzik (15:03):
Yeah, and I think this message is super important, because most of the audience of this podcast are early career academics or PhD students.
And I think the message of not, you know, crunching the parameters all day, and trying to really find something that's fundamentally different than what everybody else is doing, I think is a super important
(15:25):
message to convey.
So thank you for that.
Silvia Sellan (15:27):
To be clear, that's Derek Liu's message, not mine.
Itzik (15:30):
Yeah. That fits for you. And it fits for me too.
I mean, I think that's what research should be about, right?
It's unfortunate that a lot of fields are now in the place where it's about tuning parameters, rather than coming up with new and interesting approaches and perspectives.
(15:51):
On this podcast, I try to bring all those that do that. Like, they give a new perspective on a field, solve a problem. And it seems like I've been fortunate so far.
Um,
Silvia Sellan (16:01):
Okay, maybe I'll ask you a question then. I'll change the format a little bit, and you can cut it if you don't want it.
You're focused around papers a lot in this podcast, obviously.
And I wonder if part of the problem is that we're using this academic currency. So, you know, obviously if we
(16:23):
have an incentive, if we create an incentive, people, and I include myself, are gonna look for, you know, what is the idea that most quickly and concisely resembles a NeurIPS paper or a CVPR paper, a SIGGRAPH paper?
I wonder how much our current scientific publication process encourages those types of works that are: I
(16:45):
changed the parameter a little bit and I got this boldface number at the end of the table, and that's a CVPR paper.
Which are definitely important works of research. But currently we have no way of distinguishing between fundamentally different approaches and those types of works.
So, you know, how much of it is our fault
(17:07):
for focusing on papers too?
Is there gonna be a Talking Blog Posts podcast then?
Itzik (17:13):
Well, actually, that's a great question.
Personally, I try to bring those papers that go the extra mile as well.
So usually papers that have a project website and a blog post, and they try to convey it and teach it.
And not only: okay, here are the bold numbers, we were the
(17:34):
best.
Right?
But I think you touched on a very important point, where you said: well, our incentive system is not good.
I'm not sure I have a good idea for what is a good incentive system, but the current one is not good.
We're judged by the number of papers that we shoot out, and the more they get accepted to high venues, the better.
(17:56):
And that helps you secure funding, and securing funding helps you get students, and that's your academic career.
There's nothing that looks beyond that, and you can actually even see it on Twitter, right?
Every other academic kind of posts: oh, we had seven papers accepted to
Silvia Sellan (18:12):
Yeah, exactly.
Itzik (18:13):
And now, is this out of how many? Right?
It's not just about the successes, right?
It's also about all of the times that you tried something new and risky and novel and failed, because you're doomed to fail, right? That's research.
If we knew the answer before we started the project, then it's not a very interesting question to work on.
And yeah, I agree. It's a big problem in multiple fields at the moment.
(18:37):
But the upside is that it pushes everyone to the limit, and it pushes the field forward much faster than any other field.
And I think there are some things that people now do in addition to papers that even further promote that, right?
(18:57):
Publishing the code, that wasn't a huge thing, I don't know, 10 years ago. Like, who would've put the code online and made sure that it can run on multiple platforms?
Today it's almost standard to put your code on GitHub, right?
So, you know, you can't have the one without the other.
Okay.
(19:17):
But back to the episode. That was a great question, thank you.
And by the way, I think this is one of the things that this podcast is trying to do, right?
It tries to kind of have a little look, like a peek inside the mind behind the paper. It tries to see the way of thought, not just the results.
Silvia Sellan (19:38):
That's great.
I just wonder, you know, there are papers, and I don't mean this as a compliment necessarily, but take it as that if you want.
There are papers that I've, uh, listened to this podcast on, and looked at the project page, and I feel like I understand them.
You know, like I feel like the actual PDF, I've never opened it,
(20:01):
and I feel like I understand that paper well enough to be inspired by it and, like, work on future works.
So at some point, yeah, the whole format of a paper is being rewritten.
Itzik (20:14):
Yeah, this is part of the reason I started a new medium for sharing research.
But yeah, it would be interesting to see where we are in a few years.
I know that it used to be only about citations, but now there's this whole line of, they call it altmetrics.
So all these kind of different ways of measuring the impact of the paper, which are not necessarily influenced by
(20:36):
citations, but it's not as widely adopted as citations.
Back to the episode structure. So we talked about the approach, super interesting. Let's talk a little bit about results and applications.
So in which situations did you apply your stochastic Poisson surface reconstruction, and how did that work?
Silvia Sellan (20:55):
Right.
So as I was telling you, by combining Poisson surface reconstruction with a Gaussian process, we had this uncertainty map for every point in space that told us how likely that point was of being inside the reconstructed surface.
Instead of just, is it in or is it out, we got something like: oh, it has a 60% chance of being inside the reconstructed surface.
(21:16):
This map we can ask for every point in space, and that's actually very useful.
The main use is, for example, that car example I said at the beginning.
So, you know, we had a very toy example of a car that's driving in 3D.
It takes a scan of its surroundings, and, you know, through a trajectory, it can ask: how likely am I to intersect any of
(21:37):
the other shapes in this scene?
And you can see that it's like 30%.
You know, 30% means that probably Poisson surface reconstruction, the traditional method, would've told you: no, there's no intersection, and that's it, right?
But a 30% chance of crashing your car probably means that you want to take another trajectory, right?
You can only take so many of those chances before you break
(21:59):
your car.
So we do examples like that.
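As a rough illustration of that trajectory query (an editorial sketch, not the paper's actual API): sample points along the planned path, look up each one's probability of being inside the reconstruction, and combine them. The per-point lookup `p_inside_at` is hypothetical, and treating samples as independent ignores the spatial correlation that the paper's full distribution captures.

```python
import numpy as np

def collision_probability(path_points, p_inside_at):
    # p_inside_at(p): hypothetical query returning the probability that
    # point p lies inside the reconstructed geometry (see earlier sketch).
    probs = np.array([p_inside_at(p) for p in path_points])
    # Assuming independence between samples (a simplification): the
    # chance of at least one collision along the path.
    return 1.0 - np.prod(1.0 - probs)
```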
Another thing we do is, you know, if you think about it: the more these probabilities are closer to either a hundred or zero, the more certain you are of what the shape looks like.
So, you know, if I give you an uncertainty map that just looks like, it's zero in all this part and a hundred in all of this part,
(22:22):
it means we're very sure of what the shape looks like, because for every point of space we can ask, we can very confidently say if it's in or out.
But if it's mostly 50%, then we don't have a lot of idea of what the shape looks like.
So another thing that we do is we introduce a thing called integrated uncertainty, that just measures how far this probability is from zero or one,
(22:45):
or how close this probability is to 0.5.
The higher it is, the more uncertain you are about what the shape looks like.
And that's something that we can use, for example, as a reconstruction threshold.
So if we're scanning something from different angles, we can compute this integrated uncertainty and say: you know, keep scanning from random angles until you reach 0.1
(23:07):
integrated uncertainty.
And this is something that's agnostic to the shape that you're actually reconstructing.
So you can use it as a threshold for scanning unknown shapes, so that you get a similar reconstruction quality.
So this is something that we output.
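A small sketch of one plausible formulation of that score; this is an editorial assumption for illustration, not the paper's exact definition.

```python
import numpy as np

def integrated_uncertainty(p_grid):
    """Average, over a grid of query points, how close each inside-
    probability is to the maximally uncertain value 0.5.

    Returns 0 when every point is certainly in or out (p = 0 or 1),
    and 1 when every point is a coin flip (p = 0.5)."""
    return float(np.mean(1.0 - 2.0 * np.abs(np.asarray(p_grid) - 0.5)))

# Usage, following the episode: keep scanning from new angles and
# re-reconstructing until integrated_uncertainty(p_grid) < 0.1.
```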
Itzik (23:21):
So this is something that's very interesting for, I guess, robotics applications, right?
You have a robot walking around the house, it sees a bunch of things. It's not sure what it's seeing, so it should get a better look.
And this is what we do as humans, right? We see something we've never seen before, the first thing we would do is, like...
Silvia Sellan (23:38):
Right. The only difference is that we humans would have an intuitive feeling for where we should look as the next point, right?
Whereas what I'm saying is just like: oh, I would tell the robot to keep scanning randomly until it figures out what the shape is. Which isn't exactly what we as humans do intuitively, right?
Like, if you see something, you would turn it around because you
(24:01):
know that it's the back part that you haven't seen yet.
So this is actually something that we looked at further.
We have an example in the paper where we have an incomplete point cloud, and we set different cameras around it, and we simulate rays from those cameras onto the point cloud.
So this is something that we can do. We call it ray casting on uncertain geometry, but
(24:23):
basically, we can use the same statistical formalism to cast rays from a hypothetical scan position onto the surface.
That means that for each possible camera, we can simulate which points this scanning position would add to the point cloud.
We can add those points and see if our integrated uncertainty got better, right?
(24:45):
So we can ask: you know, by adding this new scanning position, did I actually gain any knowledge or not?
And that's kind of closer to what the human is doing, which is identifying the next best view position.
And that's kind of a further example that we have, so we can
(25:05):
also do it with our statistical formalism.
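Here is a short sketch of that next-best-view loop as just described. All the helper names are hypothetical stand-ins (nothing here is the paper's code): `cast_rays` simulates the points a scan from a pose would add, `reconstruct` refits the stochastic model, and `integrated_uncertainty` is the score sketched earlier.

```python
def next_best_view(points, candidate_poses, cast_rays, reconstruct,
                   integrated_uncertainty):
    """Pick the scan pose whose simulated points most reduce uncertainty."""
    best_pose, best_score = None, float("inf")
    for pose in candidate_poses:
        # Hypothetical scan: which points would this pose add?
        augmented = list(points) + list(cast_rays(pose, points))
        score = integrated_uncertainty(reconstruct(augmented))
        if score < best_score:  # lower = more certain reconstruction
            best_pose, best_score = pose, score
    return best_pose
```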
There's also, so, I always make this joke: you might have heard that actors have some movies that they do one for them, one for me.
So they have one movie that sells. So they'll do Avengers so that it sells a lot of tickets, but then
(25:27):
they'll use that money to fund their small project that won't sell as much, but that they really, really want to do.
So, sort of, these next-best-view planning, collision detection, all of these applications are the ones I did for the reviewers, right?
So this is for them.
The application I really liked is that, by understanding Poisson reconstruction as a Gaussian process, in the
(25:51):
process of understanding it like that, we needed to assume a certain prior, right?
Because a Gaussian process starts by assuming a prior. But now that we understand reconstruction like this, we can ask: does that prior make sense for every reconstruction task?
So can we use this statisticalunderstanding, not just for new
applications, but to improve theapplication of reconstruction,
(26:14):
which is just straight up surface reconstruction?
So the main result application I'm interested in is using different priors.
So, for example, we show examples in the paper where we enforce that the reconstructed surface has to be closed.
This is something that's a known problem with Poisson reconstruction: it sometimes outputs open surfaces.
(26:37):
We solve that with less than a line of code, with half a line of code.
We have a similar example where we close a car reconstruction that Poisson reconstruction would've given as an open output.
So basically, changing these priors to improve the reconstruction is one of the most exciting results that
(26:58):
we have, and one of the most exciting future work directions that we have, that I guess we'll talk about.
Itzik (27:04):
Yeah. And I think it really makes sense to have that dependency on some prior, right?
Because a lot of, I don't know, classification networks, I mean, many people can say that that problem is already solved, right?
So if you knew that you're looking at a car, and you have, I don't know, a very noisy one-directional scan of that
(27:24):
car, it would be really good for the reconstruction process to say: well, that's a car. Now you know that's a car, use that information to improve the reconstruction.
Silvia Sellan (27:33):
Right. That's right, that's right.
And, to be clear, there are point completion or surface reconstruction algorithms that use data-driven knowledge. The problem is that those don't leverage all the good things about Poisson reconstruction.
So Poisson reconstruction is extremely fast, it's extremely efficient. It has
(27:53):
these local-global dual steps that make it fast and robust, but also noise resilient.
So Poisson reconstruction is the best of the best that we have for surface reconstruction.
So I think, thanks to our paper, hopefully there's a very clear follow up of data-driven Poisson surface reconstruction.
(28:14):
I guess I'll pitch it to your audience. If you wanna write that paper, send me an email, cuz we can write it together. And I know how to use the code, so I'll do that, but you do the data driven part.
I think that there's an opportunity for an immediate follow up there that's very easy and could be revolutionary.
Itzik (28:33):
Low hanging fruit.
Silvia Sellan (28:35):
Or rather, we built a ladder that takes you very far, very near the fruit, right?
It wasn't low hanging five months ago.
Itzik (28:44):
Okay. Were there any fail cases or any unexpected results that you encountered?
Silvia Sellan (28:50):
Well, hmm, good question.
The main drawback of our algorithm is the speed. So our algorithm is slow.
This isn't really a failure; it's more that computing the variance of our estimation is very slow.
This doesn't affect the project I was just pitching, because that would just be computing the mean reconstruction, and that is fast.
(29:11):
But computing the variance is slow.
So we found that, you know, when we jumped from 2D to 3D, we straight up couldn't do what we were doing in 2D, in 3D, in a reasonable time, on a reasonable computer.
So we had to use, you know, a space decomposition, a space reduction trick, to make the solver manageable.
(29:31):
So that was a bit disappointing.
Another failure case, maybe not a failure case, but something in our paper that didn't go as planned, is that we have this step where we basically lump a matrix, we make a matrix diagonal so that it's easier to invert, basically.
(29:51):
And this is based on something that I know from finite element analysis, where people usually make a matrix diagonal. We show that it's valid under some assumptions, but it's not entirely accurate under all assumptions.
This is not ours; we had to do this lumping so that we recover Poisson reconstruction. So it's not that we proposed this; it's that we
(30:13):
explained Poisson reconstruction as having done this.
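For readers unfamiliar with the trick, here is a minimal sketch of mass lumping as it's usually done in finite element analysis (an illustration only; the paper applies it to a specific covariance matrix): replace a matrix by the diagonal of its row sums, which is trivial to invert.

```python
import numpy as np

def lump(M):
    """Replace a square matrix by the diagonal matrix of its row sums."""
    M = np.asarray(M, dtype=float)
    return np.diag(M.sum(axis=1))

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])
M_lumped = lump(M)                               # diag(3, 4)
M_lumped_inv = np.diag(1.0 / np.diag(M_lumped))  # inverting is elementwise
```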
But recently there's been a new paper by Alexander Terenin et al. called Numerically Stable Sparse Gaussian Processes via Minimum Separation using Cover Trees.
And this basically shows you a better way of doing what we did.
This is a paper that was posted on arXiv two weeks
(30:33):
ago, so there's no way we could have used it for our work.
But I recommend this to anyone working on Gaussian processes and thinking about applying Gaussian processes at scale, because, and I don't think Alex would agree with this interpretation, it basically gives you a smart way of making that
(30:54):
matrix smaller.
So this is one thing where I wish this other paper had come out a year ago, and then we would've used it.
Itzik (31:04):
Yeah, the field is moving forward, and you never know which block you can swap into another block, and something that's a challenge today, that you had to circumvent in some way, would at some point turn out to be solved.
But this is great. It means we're working in a very productive and high paced field.
Silvia Sellan (31:21):
Yep.
Itzik (31:22):
Moving on to the conclusions and future work section.
So how do you see the impact of the paper going forward?
Silvia Sellan (31:28):
So I look at this paper as a computer graphics and geometry processing researcher, and the part that excites me the most about this paper is that it's a way of quantifying uncertainty of a process that we use in geometry processing, namely surface reconstruction from point clouds.
So, you know, one thing I would like to work on in the future, and I would like to encourage people to work on, cuz I think
(31:50):
it can be a very promising field, is a fully uncertain geometry processing pipeline.
So there are all these works, like ours, that quantify the uncertainty of the capture step. So, going from a real world object to an uncertain surface.
There are several works like that, ours among them.
(32:10):
But we sort of stop there. And I would like us to do things to that uncertain surface, right?
So, you know, geometry processing doesn't stop at capture.
We then solve partial differential equations on that geometry. We then compute differential quantities on that geometry. We then deform that geometry. We do physics simulation on that geometry, right?
(32:31):
There's a lot of things that geometry processing does, but we're not doing it for those uncertain shapes.
So the next steps are the ones I'm interested in.
Okay, now that I've scanned the thing and I've given you the different possibilities of surfaces that it could be, now tell me the different possibilities of
(32:51):
curvatures that it could have, right? That extra step.
I don't think it has been done before, and I think that's very exciting, cuz then we can inform the scanning, right?
We can go back and say: well, I know that the shape has a certain maximum curvature, so if I take that all the way, I know where I should scan next, because there are regions where my curvature is
(33:14):
more uncertain, or something like that, right?
So this is a direction I think is very promising.
We already talked about task specific or data-driven Poisson reconstruction, which I think is an extremely promising avenue for future work.
Low hanging fruit, like you said.
Itzik (33:34):
No, uh, a tall...
Silvia Sellan (33:35):
A tall ladder.
Exactly.
And, you know, there are many ways of quantifying uncertainty that we could do in geometry processing.
And I hope that this is just a first step to that vision.
There are some steps that I think we will take, and there are some steps that I hope the community also takes.
Itzik (33:56):
Yeah, this is one of the things I really liked about this paper.
From the first read, I could totally see that it kind of opens up this whole branch of stochastically informed, inspired, or motivated further steps in the pipeline that you can use this work with.
(34:19):
And yeah, it's exciting to see what will come up next.
Now to my favorite part of the podcast: what did reviewer two say?
Please share some of the insightful comments that the reviewers had through the review process.
Silvia Sellan (34:33):
Okay. So, anyway, the whole story of how we published this paper was really fun.
I am one of those people that doesn't like crunching for a deadline.
So I work on something steadily, consistently, for several months, instead of one week where I don't sleep.
(34:55):
There are two kinds of people. I'm one kind; my advisor is mostly the other kind.
But this time I got COVID four days before the deadline, so I couldn't crunch.
So I basically just sent my current draft to my advisor and said: look, here's the draft. If you wanna change anything, change it.
But I'm not working on it, cuz I have a fever and I'm just
(35:16):
gonna lay on the couch for five days.
So we basically submitted our draft without a lot of the things that we would've liked to do in those four days.
So I was a bit disappointed, that maybe we would get rejected because I couldn't do, for example, the data driven part, which was a plan that I wanted to do in the days before the submission deadline.
So that was a bit sad, but we actually got very positive reviews.
(35:36):
We got seven reviews, which is unheard of for SIGGRAPH Asia. SIGGRAPH Asia usually has five reviews.
Sometimes they bring in a sixth one, but I had never had seven. I don't know anyone who has had seven reviewers.
So that was very surprising.
Most of the reviews were positive. I think six outta seven or five outta seven were positive,
(35:57):
except that we had one, I don't know if it was strong reject or reject.
The way it works at SIGGRAPH is you have something like strong reject, weak reject, weak accept, accept, and strong accept.
And we had one that was either strong reject or reject, which basically tanks your paper if you have one of those, usually.
(36:18):
And the review was very surprising.
It said, basically: you know, I liked the paper on a first read. I loved it.
But then on a second read, I started realizing that none of the quantities that the authors are introducing make any sense.
So if you look at the variance maps, so these are the maps of, like, where we are most confident of the reconstruction:
(36:38):
the variance is higher near the sample points. That shouldn't be like that. The variance should be lower near the sample points, right? Because we are more sure of what the value is near the data, right?
So it doesn't make any sense. So then I realized that nothing in the paper makes sense, so now I want to reject it.
The problem was as simple as that: the reviewer was misreading
(37:02):
our color map.
So the color map was yellow, yellow meant high, and purple meant low.
It was this plasma matplotlib colormap, that you may have used, or your viewers might be familiar with.
So it was just a matter that we did not include a color bar saying: this is low, this is high.
(37:24):
We did not include a color bar in all our pictures. Our figures are full of these color images.
So if we added color bars, we would have 200 color bars in the paper.
But this reviewer misunderstood it.
So, you know, it was a very interesting rebuttal to write, where we had to say:
(37:46):
you know, we will add color bars to every figure, and they will show that, unlike reviewer two or three's interpretation, variance is indeed lower near the data points. That, you know, it is what you expected it to be.
So it was very scary, cuz for some seconds there I thought we might just get a paper rejected because we didn't
(38:08):
add color bars to the plot.
So, you know, behind every sign there's a story.
Always add color bars to your plots. You never know.
If we had had one other negative review, it might have tanked the paper completely.
So that was a lesson I will never forget. I'm sorry, reviewer two, that we didn't add color bars.
(38:30):
It's not your fault. There were two possible interpretations and you took one of them. We should have added it.
And now, if you look at our final version, it has a lot of color bars, because we're not making that mistake again.
So that's my reviewer two story.
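The fix itself is a one-liner in matplotlib, for anyone who wants to avoid the same fate; a minimal example with the plasma colormap mentioned in the episode (the data here is just a toy stand-in):

```python
import matplotlib.pyplot as plt
import numpy as np

variance = np.random.rand(64, 64)      # toy variance map
plt.imshow(variance, cmap="plasma")    # yellow = high, purple = low
plt.colorbar(label="variance")         # the color bar reviewer 2 needed
plt.show()
```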
Itzik (38:46):
Oh wow. I absolutely love those kinds of paper war stories: the whole COVID submission deadline, and then the color bar. And yeah, I think it's an amazing lesson.
And I know that in every paper from now on, you, or any one of your future collaborators, would never forget to put the color bar.
Silvia Sellan (39:05):
Right. That's right.
Itzik (39:06):
Yeah. So don't forget the color bar, everyone. That was a great story.
Alright, anything else before we wrap up?
Silvia Sellan (39:15):
I guess, if any of what I said sounds interesting: I don't have enough time to do all the project ideas I have.
So definitely email me if you wanna work on anything related to what I just said, and we can work on it together.
I'm sure Itzik will put my website somewhere in the episode
(39:35):
notes.
You can go there, find my email, and send me an email. I'm always open to getting random emails from people.
Itzik (39:42):
Yeah, excellent.
I'll be sure to put all of the contact information for Silvia in the episode description, and I should also probably mention, to all of the more senior listeners that we have, that Silvia is looking for a postdoc or faculty position starting fall 2024.
So don't miss out on this amazing opportunity.
(40:04):
Alright, Silvia, thank you very much for being a part of the podcast, and until next time, let your papers do the talking.
Itzik (40:17):
Thank you for listening.
That's it for this episode of Talking Papers.
Please subscribe to the podcast feed on your favorite podcast app.
All links are available in this episode description and on the Talking Papers website.
If you would like to be a guest on the podcast, sponsor it, or just share your thoughts with us, feel free to email talking
(40:38):
papers dot podcast at gmail.com.
Be sure to tune in every week for the latest episodes, and until then, let your papers do the talking.