Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production of iHeartRadio's
How Stuff Works. Hey there, and welcome to TechStuff.
I'm your host, Jonathan Strickland. I'm an executive producer with
iHeartRadio and I love all things tech, and
it's time for another classic episode of TechStuff. This
(00:25):
episode originally published on December 19, 2012.
It is titled How Motion Capture Works, and I love
mocap when it's done well. When mocap is
done well, you can get some phenomenal performances translated into
different types of media, including video games and CGI
films. It is an amazing tool, and Chris Pollette
(00:49):
and I break down how it works. So let's listen
in on this classic episode. So today we thought we'd
talk a bit about a type of performance that is
relatively new, as far as performance goes; something that,
I guess, falls into our movie-making category, but
(01:09):
it's also something that's been used in things like video
games and other forms of media as well: motion capture. Yeah,
you'll even see it in sports. They've been
talking about this for a while now. And if you've
ever seen the making of a video or a game,
(01:30):
or even in sports rehabilitation, in medicine,
where the people are wearing little white dots
all over their clothing and sometimes their faces and hands,
that's probably what they were doing. Either that or they
just really like stickers. Yeah, I mean, who doesn't? I
remember being very competitive in elementary school in order to
(01:52):
get a sticker. And also, this is a tangent but
a true story: I got a gold star sticker
just last month from Tracy, the head of
our site. So anyway, yeah, motion capture. Actually, there
are a lot of different terms that you can use
in this realm. Motion capture, or mocap,
(02:15):
is probably the one I hear the most frequently, but
also things like performance animation, performance capture, digital puppetry, real-time
animation, and MotionScan, which is really more of a
proprietary thing. But the concept is pretty much
the same across the board: the idea is to capture
the physical representation of something and then convert it into a
(02:38):
virtual format. So usually it's something that's in motion, but
it's not always that way. Since we're
talking about motion capture, that makes sense. You're trying
to translate something that is moving through real
space into a digital format. And there are different ways
to do this. I mean you could do it the
(03:00):
really hard way, which is where you study something and
then you try to recreate it, either by hand
or digitally, by programming movements
into an animated figure. But this is an idea that
kind of takes that step out, where you are directly
porting the movements something is making within physical space
(03:20):
into virtual space. Yeah, there was an early technique,
and of course this is all an attempt
to get as real as you can with animation.
One of the earlier techniques that was sort
of a predecessor to this is called rotoscoping. Ralph
Bakshi's Lord of the Rings had a lot of
(03:43):
rotoscoping in it. What happens in that
case is that a real human being
goes through the motions and acts through the parts
that you're going to see in the animation,
and they shoot that on film. Yes, and then
the animators basically are looking at that and are
(04:03):
drawing more or less on top of it. They see
a projection of the film, and they draw the
animation over it to capture the way that person's body looks.
And this was famous; the Disney studios
were famous for this. They were studying models and then they
would use the rotoscoping technique to try to make
(04:23):
their characters look more realistic. Yeah, and there
are some artists, like Bakshi, who famously
would leave the film image as part of the animation,
so that you had this weird effect where the
thing you were looking at was part quote-unquote
real image and part animated image, which
(04:46):
was an artistic choice, definitely something that was not
meant to fool you into thinking, oh, well,
that animated character is moving very realistically. It was done
on purpose. That's what I always think
of when I think rotoscope: I think
of the different Bakshi films, but in particular I think
of his Lord of the Rings adaptation, which, as
(05:06):
I recall, ended halfway through The Two Towers. So anyway,
that's just bringing back memories. But yeah, that
was sort of a precursor to motion capture. As for motion capture itself,
there are many different ways of achieving this. For example,
it's not used very frequently now, but there
(05:29):
were mechanical systems where you had sensors that would be
attached to specific joints to relay movement.
Usually an actor
would wear a physical metallic skeleton-type device that would
have the sensors attached to the various joints, and as
(05:52):
the actor moved, the sensors would register the changes in
motion in this metallic skeleton, and that would
be relayed, usually through cables, to a computer system that
would take the measurements from the sensors
and translate them into movements for the virtual character. It's
(06:15):
a very limiting system. There was another one
that was a little more versatile, which used electromagnets.
In this case you're talking about sensors that would
be attached by really thick cables that, again, would go
to a computer, and there'd be a magnetic field, and
(06:35):
by moving through this magnetic field, the sensors would pick
up alterations. Moving through a
magnetic field, you would get little electrical changes. We've
talked a lot about electricity and magnetism in general: moving through
a fluctuating magnetic field can induce electricity in a conductor,
(06:56):
or putting electricity through a conductor can induce a magnetic field.
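The induction principle being described can be sketched numerically. This is a toy illustration of Faraday's law with made-up flux values, not how any real tracker processes its data; a real system would solve for sensor position and orientation from readings like these.

```python
def induced_emf(flux_samples, dt):
    """Faraday's law, discretely: the voltage induced in a sensor coil
    is (minus) the rate of change of magnetic flux through it, so
    between two samples EMF ~= -(flux change) / (time step)."""
    return [-(b - a) / dt
            for a, b in zip(flux_samples, flux_samples[1:])]

# A sensor sweeping through a non-uniform field: flux rises, holds, falls.
# These flux values (in webers) are made up for illustration.
flux = [0.0, 0.2, 0.5, 0.5, 0.1]
emf = induced_emf(flux, dt=0.1)   # one reading per sample interval
print(emf)
```

The zero reading on the flat stretch is exactly why these systems needed the field to change relative to the sensor: no relative motion, no signal.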
So anyway, moving these sensors through the magnetic field
would create these electrical fluctuations that would then be
measured and translated into movement. And again, this was a
fairly effective way of picking up movements. It actually didn't
(07:17):
use as many points of contact as the optical systems
that we mostly think about; that was the kind that
Chris was referring to earlier, with all the dots on
the person. Those systems tend to have lots and lots
of points of reference. The electromagnetic ones
didn't tend to have as many points of reference because
of the software side of it; you know, we do
(07:40):
have a hardware and a software side to this. The
software side would assume that the joints these sensors
were attached to behaved the way they normally would in humans,
and that they don't have complete freedom of movement. Most
of us are not multi-jointed in every joint, so
we have a limitation on
(08:00):
how far we can move in certain directions with these
various joints. So taking that into account, you didn't have
to have sensors all over the body; you would just
have them in a few places, which was good considering that
there were these thick cables attached to the sensors. And
then, once you were done moving, all that
data would be captured within the system and could
(08:22):
then be rendered into animation, although this was also a
way that you could do real-time animation, or digital puppetry.
It's not that different from controlling a video game character
with a controller. It's sort of the same principle, except
in this case the controller, instead of
being something you hold in your hands, is something you're
actually wearing. And I've seen plenty of instances
(08:46):
of this. If you've ever seen Turtle Talk with Crush
over at Disney, that's what they use: digital
puppetry. And it's awesome,
by the way; I love that. Well, it would
also, seeing that, you would need to be
aware of where those cables were going, and it would
(09:07):
also affect the way that you would move.
You wouldn't move as naturally wearing something
like that as you would if you were unencumbered by
it, which sort of lends
itself to an upgrade, which is I think why they
were so keen on optical systems. Well, it's also,
(09:28):
that's very true. It did limit what you could do;
it would limit your movement. I mean,
when you've got these big cables attached to you,
obviously you can't just move freely within a space.
So it did put some limitations on you. There are limitations
to the optical systems too, but we'll get into that.
The other problem was that the sampling rate for
(09:49):
the magnetic systems was not as high as it is
for optical systems. And by sampling rate, what I mean
is that the entire system as a whole is
taking little measurements from the sensors of
the orientation of those sensors within the space,
and it does that several times every second. But the
(10:10):
sample rate of the magnetic motion capture systems was much
lower than what it would
be if you were to use an optical system. So
you're not getting data as frequently; I mean, it's still several
times a second, but it's not as precise as the
optical system. So not only were you limited in the
kind of movements you could make because you had these
(10:33):
major cables attached to you, but also you couldn't get
really minute, precise measurements of every kind of movement. So
it wasn't good for things like sports. You know,
with something like throwing a pitch in baseball, there are a
lot of little tiny motions that are involved
(10:53):
in that. Anyone who's watched slow-motion footage
of a professional baseball pitcher throwing a pitch can
see that there are some incredibly subtle movements
involved, and it takes place over
a very short period of time. I mean, it's a
very fast thing to measure. Using the magnetic
(11:17):
motion capture system, you would probably, one, slow the person
down, because they have all these cables attached to them,
and two, not get enough data to give an accurate
representation of what had happened in the virtual format. So
if you were to, say, create a baseball
video game, the pitcher would not necessarily behave properly
(11:41):
if all you did was directly port the data you
got from the motion capture into the game. Yeah. Another
drawback of the mechanical systems like that, too, is that
it's the kind of system that not
only is cumbersome and inaccurate, but has to be
calibrated fairly frequently. And you know, there's
(12:01):
some work that you can do with this. But the
optical systems that they began to introduce
generally became an upgrade. There is one
big advantage that the mechanical systems do have, though, and
that is that lighting will not necessarily interfere
(12:22):
with the different points of motion that are captured by
the mechanical system. And that can be an issue
with the optical systems. That's
why the actors
who will be having their motions captured by the
system will be wearing those bright dots, so
that the computer can pick up on them. And at
(12:44):
the beginning, with these early systems, there were
only so many action points that they could capture.
They were very limited in what they could do at first,
but still somewhat of an upgrade over the
mechanical. Yeah. It also limited what you could have in
the background, obviously, because you could not have anything that
was going to be of a similar shade. You
(13:07):
know, usually you're talking about a reflective white substance used
as the points of articulation,
so the little white stickers like what you were saying, Chris.
You couldn't have anything like that in the background because
it would confuse the optical system. So that's why a
(13:28):
lot of these motion capture scenes are shot against a
blue screen or green screen. It's so that the background
does not in any way interfere with the motion capture.
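The reason a similar shade in the background causes trouble falls out of how marker detection works: the tracker just looks for bright blobs in each camera frame. Here is a minimal illustrative sketch in Python (not any vendor's actual algorithm) showing how a shiny background patch registers as a phantom marker.

```python
from collections import deque

def find_markers(frame, threshold=200):
    """Find bright blobs in a grayscale frame (a list of rows of 0-255
    values) and return their centroids. Anything brighter than the
    threshold is treated as a marker reflection, which is exactly why
    a bright object in the background shows up as a phantom marker."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y in range(h):
        for x in range(w):
            if frame[y][x] >= threshold and not seen[y][x]:
                # Flood-fill (BFS) to collect this connected bright blob.
                blob, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    blob.append((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and frame[ny][nx] >= threshold and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                # Centroid of the blob = the tracked marker position.
                centroids.append((sum(p[0] for p in blob) / len(blob),
                                  sum(p[1] for p in blob) / len(blob)))
    return centroids

# One real marker at the top left, plus a bright background patch:
frame = [[0] * 8 for _ in range(8)]
frame[1][1] = frame[1][2] = 255          # real marker on the suit
frame[6][6] = frame[6][7] = 255          # shiny prop in the background
print(find_markers(frame))               # two blobs; one is a phantom
```

A green screen keeps every background pixel well below the threshold, so only the suit's markers survive the test.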
If you've ever seen behind-the-scenes footage of
The Lord of the Rings movies, that's a great example,
with Andy Serkis as Gollum, or Sméagol if you prefer.
(13:48):
He's wearing a tight, like skin-tight,
suit with these little white circles all over it.
Those are the points that the cameras track to create
the performance of Gollum slash Sméagol. So the performance
is something that's being created not only by the actor
(14:11):
but also the animators, because, we should also point
out, the motion capture stuff rarely is motion capture
completely. There's rarely a moment where you
don't have an animator step in and tweak it somehow.
You don't normally have someone create a physical performance
(14:32):
and have that physical performance represented
in the final product completely without any tinkering. I mean it can happen, there
are instances of it, but more frequently
the motion capture performance goes to the animator, who
can then tweak things if the performance is not exactly
what it needs to be, which is kind of nice. You
(14:54):
don't necessarily have that luxury with flesh and blood actors.
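That animator cleanup step can be as simple as filtering jitter out of the captured joint tracks. Here is a crude sketch of the idea with made-up joint-angle numbers; real pipelines use far more sophisticated filters.

```python
def smooth(samples, window=3):
    """Moving-average filter over a captured joint-angle track: each
    output value is the mean of its neighborhood. A crude stand-in for
    the cleanup an animator's tools apply to jittery capture data."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

# A slow elbow bend (degrees) with one spike of sensor noise:
track = [10.0, 11.0, 12.0, 40.0, 14.0, 15.0]
print(smooth(track))   # the 40-degree glitch is pulled back down
```

Averaging knocks down the outlier but also softens genuine fast motion, which is one reason a human animator still reviews the result rather than trusting the filter blindly.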
I'm gonna stop motion capture in mid-motion right now
so that we can take a quick break to thank
our sponsor. Well, especially with the earlier systems, especially the
(15:15):
electromagnetic systems, those were really noisy; not literally noisy,
but digital noise. They weren't really highly accurate.
The optical systems are far cleaner and give a
more accurate representation. But that sort
of falls into the realm of artistic license, I would think,
(15:35):
where they need to go in and make subtle adjustments
to make it look the way they want it to look. Yeah.
I should also point out, you just reminded me
of something else, another drawback to the electromagnetic systems, which
was that you couldn't have anything metal on the set, because
it would interfere with that magnetic field and give incorrect
(15:56):
readings to the system. Your virtual character would
not move in the same way as the physical one,
because there would be some interference in that sense. So
your set couldn't have anything metal in it, the props
shouldn't have anything metal in them, so that
limited you as well. Each system has its
own limitations. Getting back to the optical one, one
(16:18):
of the other things you have to remember is that
in order to really capture a physical object moving
through 3D space and replicate that in virtual space,
you need multiple cameras in that system, because a single camera,
assuming that's a regular video or film camera, something that
(16:40):
does not have 3D capability, pointing at an
object is creating a two-dimensional image of something that's
moving in three dimensions. The camera can't necessarily tell where
movements are happening within the depth
of that image, right? So if someone's moving in such
(17:02):
a way where, let's say, their head is
bobbing closer to the camera, unless
the change in size of the sensors is such that something
that subtle could be picked up by the camera system,
you would lose that information. So what you need are
multiple cameras on the same object so that you can
(17:25):
compare that data from the multiple angles to tell how
this object is really moving through this three-dimensional space.
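The multi-camera comparison being described is, at its core, triangulation: the same marker lands at different image positions in two offset cameras, and that difference (the disparity) gives depth. Here is a minimal two-camera sketch with idealized pinhole cameras and made-up coordinates, not any studio's actual calibration pipeline.

```python
def project(point, cam_x, focal=1.0):
    """Idealized pinhole camera at (cam_x, 0, 0) looking down +z:
    returns the image-plane x coordinate of a 3-D point."""
    x, y, z = point
    return focal * (x - cam_x) / z

def depth_from_disparity(x_left, x_right, baseline, focal=1.0):
    """Two offset views of the same marker: the parallax (disparity)
    between the image positions gives the depth directly."""
    return focal * baseline / (x_left - x_right)

# A marker 2 m in front of the cameras; all numbers are illustrative.
marker = (0.3, 0.0, 2.0)
baseline = 0.5                        # cameras mounted 0.5 m apart
x_l = project(marker, 0.0)            # left camera's view
x_r = project(marker, baseline)       # right camera's view
depth = depth_from_disparity(x_l, x_r, baseline)
print(depth)                          # recovers the marker's true depth
```

One camera alone only gives you x_l; it is the second, offset view that pins down depth, which is the same parallax trick the hosts compare to our two offset eyes.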
So it's kind of like the idea of having parallax
with two eyes. Our eyes are offset, so
by looking at an object, we can tell how far
away it is, in part because of parallax. We
(17:47):
also have other visual cues that tell us how
far away something is, things like how tall it
is in relation to where we are,
or how tall it is in relation to other objects
within our frame of vision. But parallax is
very important. It's the same sort of thing with these optical systems:
you would have multiple cameras set up to try and
capture the information that's going on in the frame so
(18:09):
that you could tell exactly how it's moving through that
three-dimensional space. Yeah, it seems like, in order
to capture the correct perspective, you need that additional information,
even though you may not necessarily see it. It
helps the animator do that, and the optical system
also allows you to work with more than one actor,
(18:31):
which was not really an option with some of the earlier
systems. So in other words, although it
requires more equipment, simply out of necessity,
the optical system is really affording the animators an
opportunity to use a greater amount of information, both
(18:52):
from the different points of data
they're getting from a single actor, and from multiple actors
on the set simultaneously, which enables them to create
more complex work. Right. And this also gives us
a good example of how the optical motion capture systems
are a passive system, because you have these sensors you're
(19:15):
wearing that are not even sensors; they're
reflective markers. They aren't connected to
any sort of electronic components at all, versus the active
systems like the electromagnetic one, where you are generating data
by moving through a magnetic field and you have these
big cables attached to you. With the optical motion
(19:38):
capture systems, another thing that's kind of interesting, I
think, is that with a lot of them, at least the early ones,
the cameras would have infrared LEDs,
so emitters, really, that were emitting infrared light that's outside
our visible spectrum. We cannot see infrared light. But
by putting an infrared filter on the camera, you could
(19:59):
have the camera pick up reflections of infrared light, and
that was a way of helping to identify the markers
that you had put on the actor. The
markers would be reflective specifically so that the infrared light
would reflect back toward the camera and give the most
accurate rendering of what's going on at any given moment
(20:21):
within a scene. So yeah, it's another way of
making sure that the data being captured is as precise
as possible. The goal, of course,
is to try and recreate the physical movements as truthfully
as you possibly can given all the limitations involved. Yeah,
and if you're looking for a real life easy to
(20:44):
find an example of this, you would look no farther
than your local video game store, because the Xbox
Kinect uses very much that exact form of technology.
It's using an infrared emitter, and it has cameras
that it uses to
(21:06):
pick the information up that is coming back, being reflected
around the room. And anybody who
has one is also aware that lighting is very much
an issue. The way the room is lit
affects the information that the Kinect is able to relay
to the Xbox. Now, while it is sophisticated, it's
not as sophisticated as the kind of equipment that
they might use in making a movie or making a
(21:29):
video game. But it is very, very similar technology, and
in some ways I would argue that it's more sophisticated
than some of those early systems, simply because it
is able to capture a lot of information, whereas
the very early optical systems were only using
a handful of data points. Right. So it's
(21:50):
a pretty neat device, not only used
for gaming now; the hacker community has fallen in love
with it too, because it can do so much, can
be used for so many things, and is
fairly inexpensive. Yeah. The cool thing about the Kinect is
that, obviously, if you've
ever played an Xbox with a Kinect, you know you
don't have to go out and buy a snug body
(22:12):
suit covered in reflective markers in order to play. I mean,
it doesn't hurt, if you
can pull that look off; there are very few of
us who can, and I count myself among them. But you
don't have to do that, because what it's doing is
actually projecting essentially a grid of infrared light.
You can't see the grid, but it's being projected
(22:33):
into the room, and then when you move within
the space, you are deforming that grid. The
camera that's picking up the reflections of that infrared light
can detect when the grid's being deformed by a physical
object interrupting it. So as you move, you interrupt
different parts of the grid, and it can start to
(22:54):
interpret those as motions and commands. It's
not as precise as what we're talking about with the
optical systems that are used in movies and video games;
to create them, that is, not to play them.
It's not as precise as those. But it also has
other elements that help balance it out. Like, it has
(23:18):
regular optical cameras with some other software that
aids it in recognizing things, like facial recognition software, which
does not necessarily rely upon that infrared grid. It relies
more on the traditional camera functions, but has the software
included that lets the programs within recognize who is
(23:42):
standing in front of it. So that combination increases the precision,
which of course is very important whenever you're playing a game.
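The grid-deformation idea can be sketched in a few lines: compare the dot pattern the sensor projected with the pattern the camera actually sees, and the cells that changed are where the player is standing. This is a toy illustration of the concept only, not the Kinect's actual depth algorithm.

```python
def deformed_cells(reference, observed):
    """Compare the projected IR dot pattern the sensor expects
    (reference) with what the camera actually sees (observed).
    Cells where the two disagree are where an object is
    interrupting the grid."""
    return [(y, x)
            for y, row in enumerate(reference)
            for x, dot in enumerate(row)
            if observed[y][x] != dot]

# A 4x6 dot grid; a player's arm occludes two cells in one row:
reference = [[1] * 6 for _ in range(4)]
observed = [row[:] for row in reference]
observed[2][2] = observed[2][3] = 0      # dots interrupted by the arm
print(deformed_cells(reference, observed))   # [(2, 2), (2, 3)]
```

Track which cells flip from frame to frame and you get a coarse silhouette in motion, which the console can then interpret as gestures and commands.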
I mean, anyone who's played any sort of game where
you're using a faulty controller, or a system
that isn't finished yet, that's
just in the prototype stage or whatever, may have noticed
that it can be very frustrating to try and control
(24:05):
something where the actual controller is not as responsive as
you would hope. It's not a fun experience. But anyway,
that is kind of related to this whole motion capture technology.
I'm sorry, you look like you have
something to say. Well, no, I was going
(24:26):
to say that, you know, other than
my earlier statement about sports, we've
really been talking about this in an entertainment context,
about the ability to capture motion to make characters
more realistic. And that is exactly what they
(24:46):
want to do when they are using this in sports medicine.
Jonathan alluded earlier to the difficulty in capturing
all the little subtle motions that go into
a Major League Baseball player's pitching. And you know,
when somebody gets hurt, sometimes they go
(25:06):
through extensive surgery. The Tommy John procedure is famous:
they do a ligament transplant to help
rebuild a pitcher's elbow, and that can really throw off
the mechanics of a pitcher's motion. So they use this
motion capture technology to really get an idea of
(25:28):
how that person is going about the mechanics of
their typical gameplay. And that's exactly the same
kind of thing that they're doing when they create these
very realistic sports games. But in this case,
they're using it for sports medicine, to see if
they can go back and recreate some of the motions
that made them so successful before they were injured. Now, ironically,
(25:52):
for entertainment purposes, especially video, you can
get too realistic. The Japanese professor Masahiro Mori
is famous for his uncanny valley, which is
a robotics term for a robot that
(26:13):
looks so much and moves so much like a
human that it creeps us out; it looks a
little too realistic. And I can think of, well, we're actually
recording this in December of 2012, and one of the
movies that comes on about this time of year is
The Polar Express, which is known, loved,
(26:35):
and reviled, both for its story and for
the way that they use motion capture, because the characters
are so realistic, they're downright creepy. Yeah. It's
one of those things where they are almost but
not quite able to pass for a real person, so
there's just enough off about them to be unsettling. Now,
(26:58):
this does bring up something else that's kind of interesting.
We have an article on HowStuffWorks.com
about MotionScan technology, which is, as I said earlier,
a proprietary technology. It's more specific than just motion capture;
it's specifically meant to capture facial motion activity. So when an
(27:19):
actor is speaking, when they're delivering lines, the way that
they furrow their brow or move their eyes or smile,
or give a facial tic, anything like that, this
system is designed to pick that up so that it
can be recreated virtually in a game. And it was
(27:41):
used to great effect, in my opinion, in L.A. Noire.
L.A. Noire was a video game that came out in
2011, in which
you played, well, a couple of different characters,
but the one you played for most of the game,
spoiler alert, was a police detective. And you're
(28:03):
kind of rising through the ranks in L.A.
during the early part of the twentieth century.
It's notable in that
you're spending most of the game looking at people's reactions.
(28:24):
You know, the idea behind L.A. Noire was a new
type of video game where you would interrogate
characters throughout your investigations, and as you interrogate them, you
had to watch the characters' facial reactions to kind of
get an idea of whether the character was trying to
be evasive or if they were telling the truth. And
(28:44):
you would do things like watch their eyes, and
if they weren't able to maintain eye contact, that was
an indication that perhaps they were being less than truthful.
Or if they would twitch their mouth or
clench their jaw, these would be little hints that
perhaps there's more going on than what they're letting on.
(29:05):
And obviously, if your gameplay depends upon trying to determine
whether or not a virtual character is telling the truth,
you have to be able to represent those facial expressions
as close to reality as possible, or else the game
does not work. So they used this MotionScan technology,
(29:26):
and the way that they did this was that they
had a very brightly lit studio that had lights trained
on an actor from just about every angle, and the
purpose of that was to try and eliminate shadows because
any sort of shadows you would have there would of
course affect the actual capture. It was really all about
(29:46):
the light. And they used thirty-two high-definition cameras.
So think about that: thirty-two high-definition cameras just
to capture an actor's facial performance. That's it;
there's no other movement. The actor is seated the
whole time and has to remain as still
as possible and just do all the acting with their face,
(30:10):
which, for anyone out there who's done any sort of acting,
you know is incredibly challenging, because actors are trained to
use their whole body when they are making a performance.
They're trained to really think about movement. I mean,
if you're really serious about acting, you've probably
taken movement classes. And to suddenly have all of that
(30:32):
taken away, and all of your acting restricted to
just your face, that's pretty dramatic. It's tough
to do, but anyway, that's what the actors had to do.
They had to sit down and restrict their acting
to just their facial expressions without going over-the-top
crazy, because that would be just as distracting
(30:53):
as not enough performance at all. And these thirty-two
cameras were paired up, so sixteen pairs of cameras.
Technically there was a thirty-third camera as well, which
the director used to watch the scene and give directions
to the actors. But these pairs of cameras
were trained on all these different angles of the face
(31:15):
in order to capture that performance so that in
the virtual world they could recreate it accurately, which to
me is phenomenal. And apparently, the way the system works,
you get that virtual version of the person's face
and head almost instantly, which is kind of creepy but
(31:38):
also awesome. Chris Pollette and I have a little bit
more to say about how motion capture works, but first
let's take another quick break. It's funny too that they
used that many cameras in the creation of a video game, because
(32:00):
elsewhere in that article, it notes that Serkis,
who was playing Gollum, only had cameras
on him, but in doing so, they were able
to create roughly, you know, ten thousand different
(32:21):
kinds, or identify ten thousand different kinds, of facial movements
that they could use in animating the character on screen.
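Once a library of captured facial movements like that exists, characters are typically animated by blending them, a technique commonly called blend shapes. Here is a minimal sketch with made-up vertex data; the transcript doesn't describe the studio's actual pipeline, so this is just the general idea.

```python
def blend(neutral, shapes, weights):
    """Blend-shape facial animation: start from the neutral face and
    add each captured expression's vertex offsets, scaled by a weight
    between 0 and 1. Vertices and shapes here are toy data."""
    result = list(neutral)
    for name, weight in weights.items():
        for i, (nv, sv) in enumerate(zip(neutral, shapes[name])):
            result[i] += weight * (sv - nv)
    return result

# Two vertices (mouth corners, y coordinate only) as toy geometry:
neutral = [0.0, 0.0]
shapes = {"smile": [1.0, 1.0],   # both corners raised
          "smirk": [1.0, 0.0]}   # only one corner raised
print(blend(neutral, shapes, {"smile": 0.5}))   # [0.5, 0.5]
print(blend(neutral, shapes, {"smirk": 1.0}))   # [1.0, 0.0]
```

Dialing the weights over time, frame by frame, is what turns a static catalog of expressions into a moving performance.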
So clearly, this is a very
high-tech and painstaking procedure, but in doing
so they can create very, very realistic movements. Yeah,
there's a lot of number crunching involved, and frankly, the
(32:44):
part that takes place after you've captured the data
can be dramatically different from one case to the next.
In some cases, you may have already created an
animated figure pretty much from start to finish. You might
not have completely put textures on it or something,
(33:05):
but you might have essentially the way the character is
going to look in the finished product, and then
you just map it to the movements that you've captured,
and there it goes. And in other cases
you might see that what they do is they capture
the motions and then you essentially have what looks like
(33:25):
a very primitive stick-figure skeleton that moves in the
way that the actor moved, but there's no definition, there's
no character there yet. And you may have animators who
build the character somewhat based upon the way the actor
moved through the space, so that perhaps the character's design
is not finalized until you've captured that performance, and
(33:47):
the performance helps guide the design of the character. It
all depends on the specific technology that's being used and
the preference of the crew that's designing whatever it
is they're making, whether it's a video game or a movie,
TV show, commercial, whatever it happens to be. In
the case of digital puppetry, obviously you would already have
(34:08):
the full character realized, so that just by using
whatever control mechanism happens to be there, you would be
able to make the puppet move in real time; otherwise
it's not really puppetry. And again, that's sort of
like the Turtle Talk
thing I talked about, at Disney World or Disneyland.
(34:31):
I'm sure there are other similar ones. I think Monsters, Inc.
Laugh Floor has a similar setup, where you've got a
digital character on a screen that can react in real
time to things that are happening within the physical environment.
So they interact with the audience; they'll specifically single
people out and chat with people in the audience. And
(34:52):
to kids, this is amazing. I mean, it's a cartoon character acting in real time. It's a real person now. Uh, to adults, it's fascinating because they're like, how the heck did that happen? Um. But yeah, that's all based on the same sort of technology. Um. And it's
really interesting to me to see how the field is
(35:14):
evolving over time because things like the Kinect show that
we are adapting the same sort of technology in different ways.
We're using different implementations to essentially do the same thing,
and that perhaps we will get to a point where
we won't have to worry about all the sensors so much. Um.
You can maybe have an actor who's not completely covered
(35:38):
in stickers perform, and you could capture all that data without having to worry about, you know, tracking these little dots. That might be something that we'll see in the future. I mean, MotionScan is kind of
like that, because before MotionScan, with that facial acting technology, uh, whenever I saw anyone who was having
(36:00):
their face tracked for a performance, they always were wearing
those tiny little white stickers all over their face to track.
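Those little white dots have to be followed frame to frame so that each marker keeps a stable identity. Here's a toy Python sketch of one common approach, nearest-neighbour matching between consecutive frames; the marker names and coordinates are invented for illustration, and production systems use multiple calibrated cameras and far more robust matching.

```python
# Match each named marker from the previous frame to the closest
# detection in the current frame, so the same physical dot keeps
# the same identity over time.
def track(prev, curr):
    """prev: {name: (x, y)} from the last frame.
    curr: list of (x, y) detections in the new frame.
    Returns {name: (x, y)} for the new frame."""
    matches = {}
    for name, (px, py) in prev.items():
        matches[name] = min(
            curr, key=lambda p: (p[0] - px) ** 2 + (p[1] - py) ** 2
        )
    return matches

# Invented facial markers in pixel coordinates.
frame1 = {"brow_l": (120, 80), "brow_r": (180, 82), "chin": (150, 200)}
frame2 = [(122, 78), (178, 85), (151, 204)]   # hypothetical detections

print(track(frame1, frame2))
# brow_l -> (122, 78), brow_r -> (178, 85), chin -> (151, 204)
```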
I mean, we've got a lot of muscles in our face.
There are something like nineteen muscles or something that you have
to track, so um, you would have all these little
dots on your face to track those motions. Well, with
MotionScan you don't need those anymore. So maybe we'll
(36:20):
see something like that. Of course, that would really depend upon the lighting, which could be an issue if you're shooting a virtual character that's next to real characters, like in The Lord of the Rings. Real being a relative term there, I guess, you know, your mileage may vary. I mean, they're hobbits. But anyway,
when you're next to real people, clearly you can't mess
with the lighting too much or it'll just make the
(36:42):
whole scene look strange. Speaking of strange, UM, while you
might think that the techniques used in motion capture, um, you know, bring a lot of advancement to film, um, basically, uh, some people sort of regard it as cheating. Yeah, I did
(37:06):
some research that indicated that although some other types of animation are considered, you know, more artful, um, motion capture is sort of not. Not everyone, but some people say, well, you know, it's not Oscar-worthy because you were using these computer-aided animation techniques that really, um,
(37:29):
are simulating human motion and it's just not real.
And the argument that I've seen used against it is, well,
you consider rotoscoping, okay, why don't you consider motion capture,
which is a kind of descendant from this technology. Why
why isn't that okay to, uh, you know, to consider for, um, quality and for awards? But, um,
(37:53):
apparently it's sort of a hot topic among, um, movie makers. Yeah, I can see why an animator, a traditional animator or even a computer animator, I mean that's closer and closer to becoming traditional already, but either hand-drawn animation or computer animation, someone who goes
through the trouble of animating these things and doing a
(38:16):
lot of this work, uh, by hand seems like the wrong term, but personally going through and creating these performances, I can see where they might feel that way. Um.
I have a completely different perspective on it. Of course,
I'm not an animator, so that's part of it. But
I think of it as creating a performance. And in
the sense of creating a performance, I think it's a
(38:36):
completely legitimate tool, because you're still relying on an actor to create a performance that people will relate to, whether it's a character that you're supposed to love or hate or fear. That all is dependent upon the
(38:57):
animator and the actor and several other people working to
create this performance. And, uh, I don't see anything
wrong with that. That to me is a completely legitimate
form of creating the art of entertainment. So um, I mean,
I do understand from an artistic perspective where some people
(39:20):
could have a problem with it. But if you take a bigger-picture look, not just at, you know, what technique you're using, but the end goal of creating, whether you want to call it art or not, but
creating something that has an impact on the viewer or
player in the case of a video game, I think
that's more important. But then again, I'm, like I said,
(39:42):
I'm not an animator, so I don't have that kind
of emotional attachment, you know, I'm not vested in it
in that way. So um, I'd be curious to hear
what our listeners think: is motion capture cheating? Or is it, uh, as Red versus Blue would have you say, a legitimate strategy? What
(40:02):
do you think? What do you consider motion capture? You should let us know. Yeah, I, um, I do see where, um, it might make a traditional animator concerned, but I don't really think it diminishes their, um, artistic value to, um, a work, whatever
(40:24):
it may be that they are working on. UM. And
there are certain times I'm sure where uh you would
argue that using these techniques is completely inappropriate to what
they might do. Um. But yeah, I mean, it's always a concern when, um, you start saying, well, the
machine can do it, and we don't really need people
to do it, so get out. Yeah, I don't think
(40:47):
that's ever gonna be, um, fully the case, because
you're going to have certain characters within movies that are
going to be so different from the way humans are built,
so to speak, that, uh, motion capture would
not be practical. For example, like let's say that the
character that you're creating has really super long arms, and
(41:12):
you know, you've got an actor who's pretty lanky, but
their arms are not as long as the character's arms. Uh.
If you were just to do a direct translation of the
actor's movements into the animation, it might not look right
because the character has different dimensions, their body is built
differently than the actor, and so without tweaking it, without
(41:34):
having an animator go in there and adjust this and
make it look correct compared to what, you know, the vision is for the movie, it doesn't come
out correctly, it doesn't look right. So I think there's
very little risk of motion capture ever taking that away completely.
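The long-arms problem described here is what animators call retargeting. A toy Python sketch (the bone lengths and captured angles are invented for illustration): reusing the actor's joint angles on the character's own longer bones keeps the pose proportionally correct, which is exactly what copying the actor's raw hand positions would not do.

```python
import math

def fk(bone_lengths, joint_angles):
    """2D forward kinematics: relative joint angles -> world joint positions."""
    points, x, y, heading = [], 0.0, 0.0, 0.0
    for length, angle in zip(bone_lengths, joint_angles):
        heading += angle
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        points.append((x, y))
    return points

actor_arm     = [0.30, 0.25]  # upper arm, forearm lengths (metres, invented)
character_arm = [0.55, 0.50]  # the long-armed character from the example

frame = [math.radians(45), math.radians(30)]  # one captured frame of angles

# Retargeting by angle: the same captured angles drive the character's own
# bones, so the hand lands proportionally farther out, matching the
# character's build instead of stopping short at the actor's reach.
print("actor hand:    ", fk(actor_arm, frame)[-1])
print("character hand:", fk(character_arm, frame)[-1])
```

Even then, as the discussion notes, an animator typically still adjusts the result by hand, because matching proportions is not the same as matching intent.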
Plus, there is something to, you know, creating a performance
(41:56):
through traditional animation that, you know, does feel different from motion capture, but that's not a bad thing. Like,
it just depends upon the vision of the director and
what the tone of the piece needs to be. And
that wraps up another classic episode of tech Stuff. Hope
(42:17):
you guys enjoyed it. If you have any suggestions for
future episodes of tech Stuff, feel free to get in
touch with me. You can send an email to tech Stuff at how stuff Works dot com, or you can drop me a line on Facebook or Twitter. The handle at both of those is tech Stuff HSW. You can pop
on over to our website that's tech Stuff Podcast dot com.
You're gonna find a link to our archive where we
(42:39):
have every episode we've ever published right there, searchable, so you can go check that out, and you can also find a link to our online store, where every purchase you make goes to help the show. We greatly appreciate it, and I'll talk to you again really soon. Tech Stuff is
a production of I Heart Radio's How Stuff Works. For
(43:02):
more podcasts from I Heart Radio, visit the I Heart
Radio app, Apple Podcasts, or wherever you listen to your
favorite shows.