
December 19, 2012 42 mins

What does motion capture mean? What are the different systems used to capture a performer’s movements? Why do some animators consider motion capture cheating?



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Brought to you by the two thousand and twelve Toyota Camry.
Get in touch with technology with tech Stuff from how
stuff works dot com. Hello again, everyone, and welcome to
tech stuff. My name is Chris Pollette, and I'm an
editor at how stuff works dot com. Sitting across from

(00:21):
me as always is senior writer Jonathan Strickland. Hey there,
So today we thought we'd talk a bit about a
type of performance that is relatively new as far as
performance goes. Uh, something that uh, I guess this falls
into our movie making category, but it's also something that's
been used in things like video games and and other

(00:44):
forms of media as well. Motion capture. Yeah, you'll even
see it in uh, in sports, they've been talking about
this for a while now. And if you've ever seen
the making of a video or a game, um,
or you know, even in sports rehabilitation in medicine, um,

(01:05):
where the people are wearing dots, little white dots
all over their clothing and sometimes their faces and hands. UM,
that's probably what they were doing. Either that or they
just really like stickers. Yeah, yeah, I mean, who doesn't.
I remember being very competitive in elementary school in order
to get a sticker. And also this is a tangent

(01:25):
but a true story. I got a gold star sticker
uh just last month awes from Tracy, the head of
our our site. So anyway, um, yeah, motion capture. Actually,
there are a lot of different terms that you can
use in this in this realm Uh, motion capture or
mo cap is probably the one I hear the most frequently,

(01:47):
but also things like performance animation, performance capture, digital puppetry,
real time animation, motion scanning, which is really more of
a proprietary thing. But the concept is pretty
much the same across the board. The idea is to
capture the physical representation of something and then convert it into

(02:09):
a virtual format. So usually it's something that's in motion,
but it's not always that way. Uh, since you know
we're talking about motion capture, that makes sense. But you're
trying to get uh, translate something that is moving through
real space into a digital format. And uh, there's different
ways to do this. I mean, you could do it

(02:30):
the really hard way, which is where you study something
and then you try to recreate it, uh, either by
hand or digitally, you know, by programming
movements into an animated figure. But this is an idea
that kind of takes that step out, where you are
directly porting the movements something is making within physical

(02:50):
space into virtual space. Yeah, there was an early technique, um.
And of course this is all an attempt
to get as real as you can with animation. UM.
And one of the earlier techniques that was sort
of a predecessor to this is called rotoscoping. Uh. Ralph
Bakshi's Lord of the Rings had a lot of

(03:13):
rotoscoping in it. Well, what happens is, um, in that
case is that a real uh, real human being goes
through the motions and they act through the parts that
are that you're going to see in the animation. They
shoot that on film, yes, yes, and then the the
animators basically are looking at that and are drawing more

(03:34):
or less on top of that. They see a projection
of that, and they are drawing, uh, the animation over
that to capture the way that person's body looks. And
this was famous. You know, the Disney studios were
famous for this. They were studying models, and then they would
do the rotoscoping technique to try to make their uh,

(03:55):
their characters look more realistic. Yeah. And there are some artists,
like I said, like Bakshi, who famously would leave
the film image as part of the animation, so that
you had this this weird effect where the thing you
were looking at was part uh well quote unquote real
image and part animated image, which was it was an

(04:17):
artistic choice, uh, definitely something that was not meant to
to necessarily fool you into thinking, oh, well, that animated
character is moving very realistically. It was done on purpose,
but it was. That's what I always think of when
I think rotoscoping, as I just think of the different
Bakshi films, but in particular I think of his
Lord of the Rings adaptation um, which, as I recall,

(04:37):
ended halfway through the Two Towers. So anyway, that's just
bringing back memories. But yeah, that was that was sort
of a precursor to motion capture. Motion capture itself. There
are many different ways of achieving this. For example,
it's not used very frequently now, but there were

(05:00):
mechanical systems where you had sensors that would be attached
to specific joints uh that would relay movement. And usually
it was kind of like a like an actor would
wear a physical metallic skeleton type device that would have
the sensors attached to the various joints and as the

(05:22):
actor moved, the sensors would register the changes in motion
in this metallic skeleton and UH, and that would be
relayed through usually cables to a computer system that would
measure these or take the measurements from the sensors and
translate them into movements for the virtual character. UH. It's very

(05:46):
limiting this particular system. There was another one that was
a little more versatile, which used electromagnets. And
in this case you talked about sensors that would be
attached by really thin cables that again would go to
a computer, and there'd be a magnetic field and by

(06:06):
moving through this magnetic field, the sensors would pick up alterations.
You know, moving through a magnetic field,
you would get little electrical changes. We've talked a
lot about electricity and magnetism in general. A fluctuating
magnetic field can induce electricity through a conductor, or

(06:27):
putting electricity through a conductor can induce a magnetic field.
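The physics the hosts are describing loosely here is Faraday's law of induction. A minimal statement, assuming a simple coil-type sensor (the episode itself doesn't give the formula):

    \varepsilon = -\frac{d\Phi_B}{dt}

Here \varepsilon is the voltage induced in the sensor and \Phi_B is the magnetic flux through it; moving the sensor through the field changes the flux, and that change shows up as the small electrical fluctuations the system measures.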
So anyway, by moving these sensors through the magnetic field,
it would create these electronic fluctuations that would then be
measured and translated into movement, and again this was a
fairly effective way of picking up movements. It actually didn't

(06:48):
use as many points of contact as the optical systems
that we mostly think about. That was the kind that
Chris was referring to earlier with all the dots on
the person. Those systems tend to have lots and lots
and lots of points of reference. The electromagnetic ones
didn't tend to have as many points of reference because
the the software side of it, because you know, we

(07:10):
do have a hardware and a software side to this.
The software side would assume that the joints that these
sensors were attached to behaved the way they normally would
in humans, and that they don't have complete freedom of movement.
Most of us are not multi-jointed in every joint,
so, you know, we have a limitation

(07:30):
on how far we can move in certain directions with
these various joints. So taking that into account, you didn't
have to have sensors all over the body. You would
just have them in a few places, which was good
considering that there were these thick cables attached to the
sensors. And then once you were done moving,
all that data would be captured within the system

(07:52):
and could then be rendered into animation. Although this was
also a way that you could do real time animation
or digital puppetry. Uh, it's not that different from
controlling a video game character with a controller. It's sort
of the same principle, except in this case the
video game controller, instead of being something you hold
in your hands, is something you're actually wearing. And uh,

(08:15):
I've seen plenty of instances of this. If you've ever
seen Turtle Talk with Crush over at Disney, that's what
they use. They use, you know,
digital puppetry, and it's awesome by the way, I love that.
Well it uh, it would also seem that, um, you
would need to be aware of where those cables were going,

(08:37):
and it would it would also affect the way that
you would move. You wouldn't move as naturally if you
were wearing something like that as if you were, you know,
unencumbered by by that, which, um, sort of I think
would lend itself to an upgrade, which is I
think why they were so keen on it. Well, it's

(08:58):
also very true. It did limit what you could do,
it would limit your movement. I mean,
we've got these big cables attached to you. You obviously
can't just move freely within a space. Um, So
it did put some limitations on you. There are limitations to
the optical systems too, but we'll get into that. The
uh the other problem was that the sampling rate for

(09:19):
the magnetic systems was not as high as it is
for optical systems. And by sampling rate, what I mean
is that the entire system as a whole is
taking little measurements from the sensors of, you know,
the orientation of those sensors within the space, and it
does that several times every second. But the sample rate

(09:42):
of the magnetic motion capture systems was much lower than
what it would be if
you were to use an optical system. So you're not
getting data as frequently. I mean still several times a second,
but it's not as precise as the optical system. So
not only were you limited in the kind of movements
you can make because you had these major cables attached

(10:05):
to you, but also you couldn't get really minute precise
measurements on every kind of movement, So it wasn't good
for things like sports. So, you know, something like throwing
a pitch in baseball, there are a lot of movements,
a little tiny motions that are involved in that. I mean,
anyone who's watched slow motion footage of a professional baseball

(10:30):
pitcher throwing a pitch, you can see that there are
some incredibly subtle movements that are involved in that. And uh,
and it takes place over a very short period of time.
I mean, it's a very fast thing to measure.
Using the magnetic motion capture system, you would probably, one,

(10:51):
slow the person down because they have all these cables
attached to them, and two, not get enough data to
give an accurate representation of what had happened in the
virtual format. So if you were to say, create a
video game, a baseball video game, the pitcher would not
necessarily behave properly if all you did was directly port

(11:13):
the data you got from the motion capture into the game. Yeah.
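As a rough illustration of the sampling-rate point, here is a small sketch; the two rates and the pitch duration are hypothetical numbers chosen only for illustration, not figures from the episode:

    # Hypothetical comparison of how many pose samples each system captures
    # during one pitching motion. None of these numbers come from the episode.
    magnetic_rate_hz = 120   # assumed sample rate for a magnetic system
    optical_rate_hz = 480    # assumed sample rate for an optical system
    pitch_duration_s = 0.4   # assumed duration of the throwing motion

    for name, rate_hz in [("magnetic", magnetic_rate_hz), ("optical", optical_rate_hz)]:
        samples = int(rate_hz * pitch_duration_s)
        print(f"{name}: about {samples} samples over the {pitch_duration_s} s pitch")

The lower-rate system simply records fewer snapshots of the same fast motion, which is why the subtle parts of a pitch get lost.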
Another drawback of the mechanical systems like that too, is
that it's, um, the kind of system that
not only is cumbersome and inaccurate, but it has to
be calibrated fairly frequently. UM. And you know, there's
some work that you can do with this.
The optical systems that they began to introduce UM,

(11:37):
you know, generally became an upgrade UM. There
is one big advantage that the mechanical systems do have, though,
and that is light. The lighting will not necessarily
interfere with the different points of motion that are captured
by the mechanical system UM. And that can be an
issue with the optical systems UM. You know, because that's

(12:02):
that's why UM, the people, the
actors who will be UM having their motions captured by
the system, will be wearing, you know, those bright dots
so that the computer can pick up on that. And
at the beginning, in these early systems, there
were only so many action points that they could capture. UM.
They were very limited in what they could do at first,

(12:23):
but still you know, somewhat of an upgrade over the
mechanical Yeah. It also limited what you could have in
the background, obviously, because you could not have anything that
was going to be of a similar shade. Uh. You know,
usually you're talking about a reflective white substance used as
the points of articulation. So the

(12:47):
little like white stickers is like what you were saying,
Chris Um, you couldn't have anything like that in the
background because it would confuse the optical system. So that's
why a lot of these motion capture scenes are shot
against a blue screen or green screen. It's so that
the background does not in any way interfere with the

(13:08):
motion capture. So if you've ever seen behind the scenes
footage of The Lord of the Rings movies is a
great example with Andy Serkis as Gollum, or Sméagol if
you prefer, but he's wearing you know, a tight like
skin tight suit with these little white uh circles all
over it. Those are the points that the cameras track

(13:31):
to create the performance of Gollum slash Sméagol. So
the performance is something that's being created not only by
the actor but also the animators. We should
also point out that the motion capture stuff rarely is
motion capture uh completely. Uh. There's rarely a moment

(13:54):
where you don't have an animator step in and tweak
it somehow, like Uh, you don't normally have someone create
a physical performance and that physical performance is completely without
any tinkering represented in the final product. I mean it
can happen, there are instances of it, but it's more frequently, uh,

(14:14):
something where the motion capture performance goes to the animator,
who can then tweak things if the performance is not
exactly what it needs to be, which is kind of nice.
You don't necessarily have that luxury with flesh and blood actors.
That's that's true. That's true. Well, especially with the earlier systems,
especially the electromagnetic systems. Uh, those were really noisy, not

(14:38):
literally noisy, but digital noise. They weren't
really highly accurate. Um. The optical systems are far
cleaner and give a more accurate representation. But you know,
it sort of falls in the realm of artistic license,
I would think, um, where they need to go in
and make subtle adjustments to make it look the way

(14:59):
they want it to look. Oh, I should also point out,
you just reminded me of something else: another drawback
to the electromagnetic systems, which was you couldn't have anything
metal on the set because it would interfere with that
magnetic field and give incorrect readings to the system. So
your virtual character would not move in the same
way as the physical one because there would be some

(15:20):
interference in that sense. So your set couldn't have anything
metal in it. The props shouldn't have anything metal
in them either, so that limited you as well.
So each system has its own limitations. Getting back to
the optical one, um, one of the other things you
have to remember is that in order to really capture
a physical object moving through 3D space and

(15:45):
to replicate that in virtual space, you need multiple cameras
in that system. Because a single camera, assuming that's a
regular video or film camera, something that does not have
3D capability, pointing that at an object, it's creating a
two dimensional image of something that's moving in three dimensions.

(16:09):
The camera can't necessarily tell where movements are happening within
the depth of that image. Right? So,
if someone's moving in such a way where let's say
they're moving their head where it would be bobbing closer
to the camera, uh, unless the size of the

(16:31):
sensors is such that something that subtle could be picked
up by the camera system, you would lose that information.
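To make that concrete, here is a minimal sketch of the two-camera idea the hosts describe next: with two cameras separated by a known baseline, the shift of a marker between the two images (the disparity) reveals its depth. The focal length, baseline, and pixel positions below are hypothetical:

    # Depth from disparity for one marker seen by two horizontally offset cameras.
    # Z = f * B / d, where f is the focal length (in pixels), B is the camera
    # baseline, and d is the disparity (pixel shift of the marker between images).
    focal_length_px = 800.0      # assumed focal length in pixels
    baseline_m = 0.5             # assumed distance between the two cameras, metres
    x_left_px = 420.0            # marker's x position in the left camera image
    x_right_px = 380.0           # marker's x position in the right camera image

    disparity_px = x_left_px - x_right_px
    depth_m = focal_length_px * baseline_m / disparity_px
    print(f"estimated marker depth: {depth_m:.2f} m")   # 800 * 0.5 / 40 = 10 m

A single camera only sees the 2D positions; the disparity between two offset views is what restores the missing depth, which is the same job parallax does for our two eyes.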
So what you need are multiple cameras on the same
object so that you can compare that data from the
multiple angles to tell how this object is really moving
through this three dimensional space. So it's kind of like

(16:51):
the idea of having parallax with two eyes. You know,
our eyes are offset, so by looking at an object,
we can tell how far away it is in part
because of parallax. Uh. We also have other visual cues
that tell us about how far something is, you know,
things like how tall it is in relation to where

(17:11):
we are that kind of thing, or how tall it
is in relation to other objects that are within our
frame of vision. But parallax is very important. Same sort
of thing. With these optical systems, you would have multiple
cameras set up to try and capture the information that's
going on in the frame so that you could tell
exactly how it's moving through that three dimensional space. Yeah,

(17:33):
it seems like um. In order to capture the correct perspective,
you need that additional information, even though you may not
necessarily see it. UM. It helps the animator do that,
and the optical system also allows you to work with
more than one actor um, which was not really an
option with some of the earlier systems. So in other words,

(17:56):
you can, although it requires more equipment, you know, just
simply out of necessity, the optical system is really affording
the animators an opportunity to use a greater amount
of information um, both, you know, from the
different points of data they're getting from a single actor,

(18:16):
but from multiple actors on the set simultaneously, which enables
them to create more complex work, right. And UH.
This also gives us a good example of how the
optical motion capture systems are a passive system because you
have these sensors you're wearing that are not necessarily,
or not even, sensors. They're reflective markers that you're wearing.

(18:39):
They aren't connected to any sort of electronic components at all,
versus the active systems like the electromagnetic one, where you
are generating data by moving through a magnetic field and
you have these big cables attached to it. Uh. With
the optical motion capture systems, another thing that's kind of interesting,
I think, is that a lot of them, at least the early ones,

(19:01):
the cameras would have infrared LEDs, uh, so emitters,
really, that were emitting infrared light. That's outside our
visible spectrum. We cannot see infrared light. But by putting
an infrared filter on the camera, you could have the
camera pick up reflections of infrared light. And that was

(19:21):
a way of helping to identify the sensors that you
had put on the actor. The sensors would
be reflective specifically so that the infrared light would reflect
back toward the camera and give the most accurate rendering
of what's going on at any given moment within a scene.
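A toy sketch of that marker-finding step, in the spirit of what's described: threshold an infrared image so only the bright retroreflective dots remain, then take the centre of each bright blob as a marker position. Real systems are far more sophisticated; the tiny array below is just a stand-in for a camera frame:

    import numpy as np
    from scipy import ndimage

    # A tiny stand-in for an infrared camera frame: mostly dark, with two
    # bright reflective markers.
    frame = np.zeros((8, 8))
    frame[1:3, 1:3] = 1.0   # first marker
    frame[5:7, 4:6] = 1.0   # second marker

    bright = frame > 0.5                     # keep only strongly reflective pixels
    labels, count = ndimage.label(bright)    # group bright pixels into blobs
    centers = ndimage.center_of_mass(bright, labels, range(1, count + 1))
    print(count, "markers found at", centers)

The bright, filtered reflections are what make this step reliable; anything similarly bright in the background would show up as a false marker, which is why the sets are kept so plain.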
So um, yeah, it's another way of making sure that

(19:45):
the data being captured is as precise as possible. I
mean that is, of course, the goal is to try
and recreate the physical movements as truthfully as you possibly
can given all the limitations involved. Yeah, and if you're
looking for a real-life, easy-to-find example
of this, you would look no farther than your local

(20:08):
video game store. Um, because the Xbox Kinect uses
very much that exact uh form of technology. It
is using an infrared emitter, um, and it has cameras
that it uses to pick up the information
that is coming back from what
is being reflected around the room, and anybody who

(20:29):
has one is also aware that lighting is very much
an issue. Um. The way that the room is lit
affects the information that the Kinect is able to relay
to the Xbox. Now, while it is sophisticated, it
is not as sophisticated as the kind of equipment that
they might use in making a movie or making a
video game. But it is very very similar technology, and

(20:51):
in some ways I would argue that it's more sophisticated
than some of those early UH systems simply because it
is able to capture a lot of information uh, Whereas
you know, the very early optical systems were only using
a handful of data points. So um, it's it's a
pretty neat device. Um, you know, not only used for gaming.

(21:11):
Now the hacker community has fallen in love with it
too because it can do so much and can be
used for so many things and is you know, fairly inexpensive. Yeah.
The cool thing about the Kinect is that, obviously,
if you've ever played an
Xbox with the Kinect, you know, you don't have to
go out and buy a snug body suit covered in
reflective markers in order to play. I mean, it doesn't hurt,

(21:34):
but uh, you know, if you can pull
that look off. There are very few of us who can.
I count myself among them. But you don't have to
do that because what it's doing is it's actually projecting
essentially a grid, uh in infrared light, so you can't
see the grid, but it's being projected into the room.
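Here is a toy sketch of the "interrupting the grid" idea the hosts go on to describe: compare the sensor's current reading of the projected infrared pattern against a reference taken of the empty room, and treat cells that changed a lot as a body in the way. The Kinect's actual pipeline is proprietary and far more involved; these grids are made-up values:

    import numpy as np

    # Reference reading of the projected pattern with the room empty,
    # and a current reading with something standing in part of the space.
    # The values are arbitrary stand-ins for per-cell readings.
    reference = np.full((4, 6), 3.0)       # empty room: every cell reads ~3.0
    current = reference.copy()
    current[1:3, 2:4] = 1.2                # a person now occupies these cells

    interrupted = np.abs(current - reference) > 0.5   # cells deformed by the body
    rows, cols = np.nonzero(interrupted)
    print("interrupted cells:", list(zip(rows.tolist(), cols.tolist())))
    print("rough centre of the body:", rows.mean(), cols.mean())

Tracking how that interrupted region moves frame to frame is, in miniature, how motion gets turned into commands.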
And then when you move uh within the space, you

(21:56):
are deforming that grid. You know, the camera that's picking
up the reflections of that infrared light can detect
when the grid's being deformed by a physical object interrupting
the grid. So as you move, you interrupt different parts
of the grid, and it can start to interpret those
as motions and commands. It's not, uh, it's not as

(22:19):
precise as what we're talking about with the optical systems
that are used in movies and video games, uh, to
create them, that is, not to play them. Um,
it's not as precise as those, but it also has
other elements that help balance it out, like it has
regular optical cameras that can have some other software that

(22:42):
aids it in recognizing things like facial recognition software, which
does not necessarily rely upon that infrared grid. It relies
more on the traditional camera functions, but has the software
included that lets the programs within recognize who is
standing in front of it, so that combination, uh increases

(23:04):
the precision, which of course is very important whenever you're
playing a game. I mean, anyone who's played any sort
of game where you're using a faulty controller, or it's
just a system that hasn't been fully uh it's not
finished yet, it's just in prototype stage or whatever, you
may have noticed that it could be very frustrating to
try and control something where the actual controller is not

(23:27):
as responsive as you would hope. It's um not a
fun experience. But anyway, that is kind of related to
this whole motion capture technology. UM, I'm sorry, what were you,
you look like you have something to say. Well, no,
I was going to say that, um, you know,
we really hadn't, other than my earlier statement about sports, UM,

(23:51):
you know, we've been talking about it in an
entertainment context, about the ability to capture motion
to make characters more realistic. And um,
that is exactly what they want to do when they
are using this in sports medicine. UM. Jonathan alluded to
earlier the difficulty in UH capturing all the little

(24:13):
subtle motions that go into UM a Major League
baseball player's pitching. UM. And you know when somebody,
when somebody gets hurt, UM, sometimes they go through uh extensive surgery.
The Tommy John procedure is UH famous. You know, they
do a ligament transplant to help rebuild a pitcher's elbow,

(24:36):
and that can really throw off, um, the mechanics of
a pitcher's motion. So they use this motion capture technology
to really get an idea of how, UM, how that
person is throwing, going about the mechanics of their
typical game play. And that's exactly the same kind
of thing that they're doing when they create these very
realistic sports games. UM. But you know, in this case,

(25:00):
they're using it for sports medicine to see if they can, UH,
go back and recreate some of the motions
that made them so successful before they were injured. Now, UM, ironically,
for UH entertainment purposes, especially video, UM, you can
get too realistic. UM. The Japanese professor Masahiro Mori

(25:22):
is famous for his Uncanny Valley UM, which is
used as a robotics term for a robot that
looks so much and moves so much like a human
that it creeps us out. It looks a little
too realistic. And I can think of, we're actually recording
this in December of 2012, um. One of the movies

(25:45):
that comes on about this time of year is The
Polar Express, which is known, loved and reviled both for
its story and, um, the way that they
use motion capture, because the characters in there are so
realistic they're downright creepy. Yeah, it's one of those
things where they are almost but not quite able to

(26:07):
pass for a real person, so that there's just enough
off about them to be unsettling. Now, this does bring
up something else that's kind of interesting. We have an
article on how stuff works dot com about motion scan technology,
which is, as I said earlier, a proprietary technology. It's

(26:29):
it's more specific than just motion capture. It's specifically meant
to capture facial motion activity. So when an actor is speaking,
when they're delivering lines, the way that they furrow their
brow or move their eyes or smile, or they give

(26:49):
a facial take, anything like that. This system is designed
to pick that up so that it can be recreated
virtually in a game, and it was used to great effect,
in my opinion, in L.A. Noire. L.A. Noire was
a video game that came out in two thousand eleven,
and it was a game in which you played a well,

(27:12):
you played a couple of different characters, but the one
you played for most of the game, spoiler alert, was
a police detective. And you're kind of rising
through the ranks uh in L.A. during the
uh early part of the twentieth century. And it's um,

(27:33):
it's notable in that you are, uh, you're spending most
of the game looking at people's reactions. You know. The
idea behind L.A. Noire: it was a new type
of video game where you would interrogate characters throughout your investigations,
and as you interrogate them, you had to watch the

(27:54):
characters' facial reactions to kind of get an idea of
whether the character was trying to be evasive or
if they were telling the truth. And you would do
things like watch for their eyes and if they weren't
able to maintain eye contact, that was an indication that
perhaps they were being less than truthful. Or if they would,
you know, twitch their mouth or clench their jaw, these

(28:16):
would be little hints that perhaps there's more going
on than what they're letting on. And obviously, if your
gameplay depends upon trying to determine whether or not a
virtual character is telling the truth, you have to be
able to represent those facial expressions as closely to reality

(28:38):
as possible, or else the game does not work. So
they used this motion scan technology and the way that
they did this was that they had a very brightly
lit studio that had lights trained on an actor from
just about every angle and the purpose of that was
to try and eliminate shadows, because any sort of shadows

(29:00):
you would have there would of course affect the actual capture.
It was really all about the light. And they used
thirty two high definition cameras. So think about that, thirty
two high definition cameras just to capture an actor's facial
performance, like that's it. There's no other movement. The
actor is seated at the time and um had

(29:22):
to remain as still as possible and just do all
the acting with their face, which for anyone out there
who's done any sort of acting, you know, that's incredibly
challenging because actors are trained to use their whole body
when they are making a performance. They're trained to

(29:42):
to really think about movement. I mean, if
you're really serious about acting, you've probably taken movement classes.
And to suddenly have all of that taken away and
all of your acting is restricted to just your face,
that's pretty dramatic. It's tough to do, but anyway,
that's what the actors had to do. They had to
sit down and restrict their acting to just their facial

(30:05):
expressions without it going like over the top crazy, because
that would be just as distracting as not enough performance
at all. And these thirty two cameras were paired up,
so sixteen pairs of cameras. Technically there was a
thirty third camera as well that the director used to
watch the scene and give directions to the actors um

(30:28):
but these these pairs of cameras were trained on all
these different angles of the face in order to capture
that that performance so that in the virtual world they
could recreate it accurately, which to me is phenomenal. And
apparently the way the system works is you get that
virtual version of the person's face and head almost instantly,

(30:54):
which is kind of creepy but also awesome. It's
funny too that uh they used that many cameras in
the creation of a video game, because uh, elsewhere in
that article it notes that um Serkis, who was playing
Gollum, um only had cameras on him,

(31:20):
but in doing so, they were able to, uh, create
or identify roughly, you know, ten thousand different kinds
of facial movements that they could
use in animating the character on screen. So um, clearly, uh,
you know, this is a very, very high-tech and painstaking

(31:41):
procedure to do, but in doing so they can they
can create very very realistic movements. Yeah, there's a lot
of number crunching involved, and frankly, the part that
takes place after you've captured the data can be
dramatically different from one case to the next. In some cases,
you may have already created uh an animated figure pretty much

(32:06):
from start to finish, you might not have completely put
textures on it or something. But you might have
essentially the way the character is going to look in
the finished product, uh, and then you just map it
to the movements that you've captured, and there
it goes. And in other cases you might see that

(32:26):
what they do is they capture the motions and then
you essentially have what looks like a very primitive stick
figure skeleton that moves in the way that the actor moved,
but there's no definition, there's no character there yet. And
you may have animators who build the character somewhat based
upon the way the actor moved through the space, so

(32:47):
that perhaps the character's design is not finalized until you've
captured that that performance, and the performance helps guide the
design of the character. It all depends on the specific
technology that's being used and the preference of the crew
that's designing whatever it is that they're making, whether
it's a video game or movie, TV show, commercial, whatever

(33:08):
it happens to be. UH. In the case of digital puppetry,
obviously you would already have the the full character realized,
so that just by using whatever control mechanism happens to
be there, you would be able to make the puppet
move in real time, otherwise it's not really puppetry. Um.
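In code terms, digital puppetry of this kind boils down to a loop that reads the performer's control input each frame and immediately drives a parameter on an already-built character. A minimal, made-up sketch; the control values and the jaw parameter are hypothetical and not any particular attraction's system:

    # Real-time digital puppetry in miniature: each frame, a control reading
    # from the performer is mapped straight onto the puppet's jaw opening.
    def drive_puppet(control_readings):
        for frame, reading in enumerate(control_readings):
            # Clamp the raw control value (0.0..1.0) and map it to a jaw angle.
            jaw_open_degrees = max(0.0, min(1.0, reading)) * 30.0
            print(f"frame {frame}: jaw open {jaw_open_degrees:.1f} degrees")

    # Pretend stream of readings from whatever control mechanism is in use.
    drive_puppet([0.0, 0.4, 0.9, 0.2])

Because the character is already fully built, the only per-frame work is this mapping, which is what makes the puppet able to respond to an audience live.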

(33:29):
And again that's sort of like, if you've been
to that Turtle Talk thing I talked about, at
Disney World or Disneyland. Um. I'm sure there are other
similar ones. I think Monsters Inc. Laugh Factory has a
similar setup where you've got a digital character on a
screen that can react in real time to things that
are happening within the physical environment. So they interact with

(33:52):
the audience like they'll specifically single people out and chat
with people in the audience. And Um, to kids,
this is amazing. I mean, it's the cartoon character acting in
real time, it's a real person. Now, Uh, to adults,
it's fascinating because they're like, how the heck did that happen? Um,
But yeah, that's it's all based on this same sort

(34:13):
of technology. UM. And it's really interesting to me
to see how the field is evolving over time, because
things like the Kinect show that we are adapting the
same sort of technology in different ways. We're using different
implementations to essentially do the same thing, and that perhaps
we will get to a point where we won't have

(34:35):
to worry about all the sensors so much. Um, you
can maybe have an actor who's not completely coated in
stickers perform, and you could capture all that data
without having to worry about, you know, tracking these little dots.
That might be something that we'll see in the future.
I mean the motion scan is kind of like that

(34:56):
because before motion scan, with that facial acting uh technology, uh,
whenever I saw anyone who was having their face tracked
for a performance, they always were wearing those tiny little
white stickers all over their face to track. I mean,
we've got a lot of muscles in our face. There's
something like nineteen muscles or something that you have to track,

(35:18):
so um, you would have all these little dots on
your face to track those motions. Well, with motion scan
you don't need those anymore. So maybe we'll see something
like that. Of course, that would really depend upon perhaps
the lighting, which could if you're shooting a virtual character
that's next to real characters like in The Lord of
the Rings, real being I guess you know, your mileage

(35:40):
may vary, I mean they're hobbits, but anyway, when you're
next to real people, clearly you can't mess with the
lighting too much or it'll just make the whole scene
look strange. Speaking of strange, UM, well, you might think
that the techniques used in motion capture uh, um, you know,

(36:00):
bringing film into a, uh, you know, adding a lot
of advancement to film. Um, basically, uh, some people
sort of regard it as cheating. Yeah. I did some research
that that indicated that, um, although some other types of
animation are considered you know, considered more artful, UM, motion

(36:22):
capture is sort of, not everyone, but some people say, well,
you know, it's not Oscar-worthy because you were
using these computer-aided animation techniques that really, um,
are simulating human motion, and it's just not real.
And uh. The argument that I've seen used against it is, well,
you consider rotoscoping, okay, why don't you consider motion capture,

(36:47):
which is a kind of descendant of this technology. Why
isn't that okay to uh, you know, to consider
for um quality and for awards? But um,
apparently it's sort of a hot topic among um,
among movie makers. Yeah. I can see one animator, a

(37:07):
traditional animator, or even a computer animator. I mean that's
closer and closer to becoming traditional already but either hand
drawn animation or computer animation. Someone who goes through the
trouble of animating these things and doing a lot of
this work. Uh, by hand seems like it's the wrong term,
but personally going through and creating these performances, I

(37:30):
can see where they might feel that way. Um, I
have a completely different perspective on it. Of course, I'm
not an animator, so that's part of it, but I
think of it as creating a performance. And in the
sense of creating a performance, I think it's a completely
legitimate tool because you're still relying on an actor to
create a performance that people will relate to,

(37:54):
whether it's a character that you're supposed to love or
hate or fear, that all is dependent upon the animator
and the actor and several other people working to create
this this performance. And uh, I don't see anything wrong
with that. That to me is a completely legitimate form

(38:17):
of creating the art of entertainment. So um, I mean,
I do understand from an artistic perspective where some people
could have a problem with it. But if you
take a bigger-picture look, and not just, you
know, what technique you're using, but the end goal of creating,
whether you want to call it art or not, but
creating something that has an impact on the viewer or

(38:41):
player in the case of a video game, I think
that's more important. But then again, like I said,
I'm not an animator, so I don't have that kind
of emotional attachment, you know, I'm not invested in it
in that way. So UM, I'll be curious to hear
what our listeners think about motion capture. Is
that cheating? Is it, uh, is it,

(39:03):
as Red versus Blue would have you say, a legitimate strategy?
What do you think? What do you consider
motion capture? You should let us know. Yeah, I UM, I
do see where UM it might make a traditional animator concerned,
but I don't. I don't really think it diminishes their UM,

(39:25):
their artistic value to, UM, a work, whatever
it may be that they are working on. UM. And
there are certain times I'm sure where uh you would
argue that using these techniques is completely inappropriate to what
they might do UM. But yeah, I mean it's it's
always a concern when UM you start saying, well, the

(39:47):
machine can do it, and we don't really need people
to do it, so get out. Yeah, I don't think
that's ever gonna be, um, fully the case,
because you're going to have certain characters within movies that
are going to be so different from the way humans
are built, so to speak, that, uh, motion

(40:09):
capture would not be practical. For example, like let's say
that the character that you're creating has really super long arms,
and you know, you've got an actor who's pretty lanky,
but their arms are not as long as the
character's arms. Uh, if you were just to do a direct
translation of the actor's movements into the animation, it might

(40:29):
not look right because the character has different dimensions, their
body is built differently than the actor. And so without
tweaking it, without having an animator go in there and
adjust this and make it look correct compared to what
the vision is for the movie, you know,
it doesn't come out correctly, it doesn't look right.
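A minimal sketch of the retargeting problem just described, using a simple two-segment arm: if you copy the actor's captured joint angles onto a character whose arm segments are longer, the hand ends up somewhere different from where the actor's hand went, so an animator (or retargeting software) has to adjust the result. All lengths and angles here are hypothetical:

    import math

    def hand_position(upper_len, lower_len, shoulder_deg, elbow_deg):
        """Forward kinematics for a simple two-segment planar arm."""
        a1 = math.radians(shoulder_deg)
        a2 = a1 + math.radians(elbow_deg)
        x = upper_len * math.cos(a1) + lower_len * math.cos(a2)
        y = upper_len * math.sin(a1) + lower_len * math.sin(a2)
        return x, y

    # Captured joint angles from the actor (hypothetical values).
    shoulder_deg, elbow_deg = 40.0, 25.0

    actor_hand = hand_position(0.30, 0.28, shoulder_deg, elbow_deg)       # actor's arm lengths (m)
    character_hand = hand_position(0.55, 0.50, shoulder_deg, elbow_deg)   # longer-armed character

    print("actor hand lands at", actor_hand)
    print("same angles on the character land at", character_hand)

Same performance, different body proportions, different result, which is exactly why the captured data still needs an animator's hand.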

(40:51):
I think there's very little risk of motion capture ever
taking that away completely. Plus, there is something to you know,
creating a performance through traditional animation that you know, it
does feel different from motion capture, but that's not a
bad thing. Like it just depends upon the vision

(41:13):
of the director and what the tone of the piece
needs to be. And in some cases motion capture is
going to be the best way to achieve that. In
other cases, motion capture would make it distracting. So um yeah,
I think as long as we maintain this desire for
different types of entertainment and techniques, then there's not

(41:35):
really any risk of making one disappear. I agree completely,
I really do. It's just a different animal, yep, yep.
But hey, guys, if you want to chime in on
this motion capture discussion, please do let us know. Send us
an email; our address is tech stuff at Discovery dot com,
or get in touch with us on Twitter or Facebook;

(41:57):
our handle at both of those is tech stuff h s
w, and Chris and I will talk to you again
really soon. For more on this and thousands of other topics,
visit how stuff works dot com. Brought to you
by the two thousand twelve Toyota Camry.
