
April 3, 2015 55 mins

Jonathan, Joe and Lauren answer a listener mail request. Could we use robots as avatars while we explore distant locations, even across the galaxy?




Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Brought to you by Toyota. Let's go places. Welcome to
Forward Thinking. Hey there, and welcome to Forward Thinking, the
podcast that looks at the future and says I was
made to build things, and I build them quite well.
I'm Jonathan Strickland and I'm Joe McCormick, and today we're

(00:23):
going to be tackling a listener request. Yes, this is
a listener request. That's from Chris, and Chris sent us
an epic email. I mean it's it's an incredible email, fantastic,
Thank you so much, Chris. It's it's long. It's long,
but we're so we're not gonna read the whole thing,
but I'll give you the gist of what Chris was

(00:43):
talking about. Chris was talking about the sort of the
the combination of multiple disciplines to allow us to explore
the universe ultimately with robots that would give us a
telepresence in wherever those robots could go, so that we
could kind of experience that ourselves. So we're talking about
a combination of different factors advances in robotics where the robots,

(01:07):
particularly humanoid robots, would be able to go around and
explore things, the ability to send those signals back to
us using a user interface that's similar to virtual reality
or augmented reality headsets, and the ability for us to
control those robots in real time or as close to
real time as possible, so that we can experience what

(01:27):
it's like being on these remote places, including places across
the galaxy, right right, This is like a step above
rovers and into full on robotic avatars. Yes, So we
wanted to talk about the possibility of that and some
of the challenges that we face in order to get
to a point where we could have robotic avatars. And

(01:49):
in some ways we're really close, and in other ways,
for some applications there's some problems that may in fact
be insurmountable. Yeah, well, we may be coming up against
problems that we often encounter when trying to uh interface
between technology and our brains. Yeah, exactly. But we've talked

(02:10):
about this on the show before, but maybe we should
just do a real quick refresher on why it is
you'd want to have robots in space exploration as opposed
to human astronauts, right right, and why you would want
to have a robot, you know, avatar in the first place. Now,
there are some reasons why you would want one just
here on Earth. And one of the great stories will
be covering later is looking into research of using robotic

(02:33):
avatar for people who are otherwise incapable of moving right.
It gives them the ability to have a robotic body
do things on their behalf, and they can control it
in various ways. So that's one reason, but the other
one being for space exploration or other means. Here on Earth.
Robots can go places we can't. Yeah, we've got these delicate,

(02:55):
squishy human bodies that don't breathe the vacuum of space.
And you, Yeah, cold, really super cold temperatures kind of
mess with us. Radiation is bad. Super hot temperatures mess
with us. We're just not that great. Yeah, organic matter,
organic matter has limitations, and some of those are limitations
that mechanical or electro mechanical or synthetic materials either do

(03:20):
not experience or experience to a lesser degree. Well, yeah,
there there are a couple ways you want to look
at this. One of them is simply that it can't
like a robot can do things a human couldn't do.
It can actually accomplish a mission that we would not
be able to complete. The other thing is that we
don't have to value robots as much like it's okay
to send a robot on a suicide mission. You don't

(03:42):
have to feel bad about it. Right, It may end
up being a large financial obligation, but still that's very
different than putting a human life in danger. Sure. Sure,
and you know, sentencing someone to never see their family again,
or never use Facebook again, or right right, or just
never see Gremlins 2 again. Actually, I'm getting into an
area where I wouldn't mind giving those things up. But

(04:04):
Gremlins 2 is still amazing. Uh No, it's one
of those It's also one of those things where we
could send them to, even if it's not a one
way mission, we could send them to places that would
be extremely risky for humans. And that that includes places
here on Earth. Obviously we use robots for that now,
sure, for deep sea exploration. Yeah, and there are also
reasons a robot could complete a mission that humans couldn't complete.

(04:26):
Apart from the fact that it like the environment, might
kill a human, it can also be that robots can
have scientific precision that humans don't naturally have unless we
have some kind of tool or something. Yeah, robots can
move on a level of precision that that is far
greater than what humans can accomplish, and we've been putting

(04:48):
that to good use here on Earth as well. Also,
robots can have an array of sensors on them that
give them access to a lot more information than what
we humans can access with just our natural senses.
Right. So robots are capable of
quote unquote seeing in a spectrum that's much more broad
than our visible spectrum that we're limited to because of

(05:11):
our dumb eyes. And they can also quote unquote hear,
I mean pretty much any sense you can think of.
We can build sensors that are much more sensitive to
that information than our natural senses. The robot can
have an onboard mass spectrometer, which I mean we have
a tongue. But yeah, it's just better better to use
the spectrometer than to go tasting the dust of Mars.

(05:33):
This tastes radioactive. Yeah, that's not necessarily a good thing. Okay. Well,
so obviously it's not a new idea to use robots
controlled by humans in certain conditions where they would be
you know, preferable or more useful or at least you
know better to put in harm's way. For example, one
thing I've seen plenty of pictures and video of before

(05:54):
is bomb disposal robots. That makes total sense. This is
one of the ones that leapt to mind when I
was thinking about this, sort of like the predecessors to
the types of robots that Chris was asking about in
the email we received. So, bomb disposal robots have been
around for a while. The earliest one I could find
was designed in nineteen seventy two by Lieutenant Colonel Peter Miller.

(06:16):
I say "leftenant" because he was in the British Army.
His bomb disposal robot was built as a matter of
necessity. At the time, the British Army was dealing with
a lot of car bombs that were being left by
the Irish Republican Army, and so he wanted to

(06:36):
find a way to be able to to pull cars
that might have explosive devices in them to a safe
zone surrounded by sandbags, that sort of thing, so that
it could be detonated without putting people in harm's way.
And a lot of the times before he had made
the first bomb disposal robot. It was essentially the job

(06:58):
of somebody who was very brave and padded down as
much as possible, armored up to physically go look into
a car, sometimes having to open a car door, which
could be a triggering device to get access to a bomb.
So they were trying to find a way to make
this safer and he came up with the idea when
he was thinking back on something else he had done.

(07:19):
It turned out that that Peter Miller was something of
a tinkerer and someone who often would think, how can
I make this easier? Uh? And one of the things
he thought about was a way to make it easier
for him to mow the lawn so that he didn't
have to do it personally. And what it did was
he he did something really simple. He had a lawnmower
that had powered wheels so it could move on its

(07:40):
own once you apply you know, gas to it. Essentially,
he tied a rope to it, put a stake in
the ground that the rope was attached to, and let
it go so it would go in a circle. And
every time it completed a circle, it was winding the
rope around the stake. The stake was tall enough that
there would be a shorter length of rope the next
go around, so it mowed in concentric circles. So it's

(08:03):
like an early mowing Roomba. It's brilliant in theory, though
for some reason it seems like it wouldn't actually work
in practice. It worked well enough for him to think,
maybe I can come up with something similar, but use
it for this this very dangerous activity. And he thought, well,
the lawnmower wouldn't really work as well. And he saw

(08:24):
someone using a motorized wheelbarrow, and he literally went to
the garden center, bought a motorized wheelbarrow, brought it back
to headquarters. They were given carte blanche to do whatever
they needed in order to actually put this thing together.
So they disassembled the wheelbarrow so they just had the
chassis and they created a robot that had a tow

(08:48):
hook on it. It was essentially a grappling hook and
they could power it remotely. And by remotely, I mean
they tied ropes to the controls. By power you mean
human power? Or... yeah, there was a human standing several
feet back using ropes to guide and power the wheelbarrow as
it would move to a car. And in fact they

(09:10):
had to test this out almost immediately. They had to
go to Belfast with it. They used it to hook
a car and it started to tow the car back, however,
and it was actually there was another vehicle I believe
that was connected to it to help it pull back
because the wheelbarrow wasn't strong enough to tow a car.
I don't believe. However, what happened was the wheelbarrow unfortunately

(09:32):
fell over during that first real life It wasn't even
a test, this was a real life use of it.
This was before they had really had chance to test
this thing. It fell over on its side and
they ended up detonating the car with rockets where it
ended up, after it'd been towed just
a short distance. But they still considered it a success
because it didn't it didn't require someone to get into

(09:55):
harm's way to pull the car further away. It
was actually at a car dealership, is where the car
was, so they considered it a success, and in fact,
Peter Miller would go on to oversee the development of
more sophisticated robots, ones that had a true remote controllability
not just with ropes, and ones that had more like

(10:15):
an articulated arm so it could do things like pick
up a suspicious package and move it physically to a
different location. Uh, you know, ended up being sort of
the genesis of bomb disposal robots. And so we've got
a lot more advanced ones now, including ones that have
cameras so you can operate it remotely and quote
unquote see what the robot sees. Yeah, yeah, and uh,

(10:38):
of course there's lots of other military-use robots. A
lot of them are used for simple reconnaissance missions.
In fact, the company that makes Roombas has military
reconnaissance robots that are pretty awesome. Um. But uh, that
gets into our next bullet here, which is reconnaissance drones. Yeah,

(10:59):
so drones obviously being used for reconnaissance around the world
right now. Uh, the earliest drones were really kind of...
or the earliest electronic drones, I
should say, because you could count, like, unmanned balloons from
the nineteenth century and the really tragic time that they
strapped bombs to bats and yeah, yeah, that could be

(11:21):
considered part of it too, But in this case, we're
talking about electronic drones that were just used as target
practice UH for combat pilots and anti aircraft gunners, and
then eventually in the nineteen sixties, the US Air Force
funded development of unmanned reconnaissance drones like the Ryan Model
147, also known as the Lightning Bug, which had

(11:42):
to be launched from a Lockheed DC-130
Hercules airplane. UH. They had cameras aboard the drones, but
they were just taking series of photographs and had to
be retrieved so that you could see what they had gotten.
In the nineteen sixties, they didn't have a satellite uplink;
you weren't getting a live feed from the drone,
so you actually had to find the

(12:02):
drone after it landed and retrieve it so that you
could get the information that was captured. But this meant
that you had an unmanned vehicle as opposed to someone
flying, like, a U-2 spy plane, which you know had
previously been shot down over the Soviet Union, so that
was a big issue. UH. In nineteen seventy two, the
Ryan Model 147 SC TV was introduced, which

(12:24):
had a TV camera that sent a live feed to
the DC-130 drone controller aircraft. No telling
if the SC TV also had Canadian humor on it.
Oh yeah, how many of you are familiar with that
sketch show. That's a dated reference right there. At any rate,
today we've got lots of different drones with cameras on them.

(12:44):
They can give you a live feed, including consumer drones,
the type of stuff like the Parrot AR.Drone
where you can control it with a smartphone, get a
live feed of the camera to your smartphone. You can
play games with other people who own them, and
do little kind of tag-like games using
your smartphone as the user interface. So these are
commonplace now. Yeah, yeah, it's... we've come a long way, baby. Yeah.

(13:05):
And so again this is a way of us extending
our presence beyond our immediate surroundings, right, although still fairly uh,
I mean, with the use of the consumer drones is
still fairly uh modest. You know, something that is a
lot like a direct robotic avatar would be something like
robotic surgery. Yeah, because that involves directly translating a human's

(13:31):
movements into the movements of robotic arms and tools. That's
sort of like just giving you this tiny direct presence
inside somebody else's body, which is creepy yeah, but also
cool sure, And like all robotics, this can work in
a few different ways. Most of the robotic surgery things
that exist today are shared control systems, and and that

(13:54):
just means that a doctor is yes, is directly using
robotic equipment to perform a surgery. The robot in question
here isn't really thinking and it can't really react to
anything that happens. The robot doesn't decide where to cut
right exactly. Um Now, telesurgery and doctor-supervised robotic

(14:15):
surgery systems are in development. But you know, telesurgery
requires really good, really stable Internet connections. No one
wants something to go terribly wrong in someone's body because
the Internet went down. You clearly can't have any
latency issues either, because you need to... For example, a
lot of these systems have warning algorithms in place

(14:36):
so that if you start straying too close to a
very sensitive area. Let's say that you are performing a
surgery to remove a cancerous tumor for a sample, and
you would start to get to a point
where you're going to be cutting into healthy tissue. A
lot of these have a system involved where you get
a feedback to the surgeon, so the surgeon knows, even
though they're not in the same space as the patient,

(14:58):
that they can't go any further without potentially harming the patient, right, right,
And there's a large process of scanning and uh, data
collection that happens before any of these surgeries would start,
so that frequently they'll kind of mark off areas around
the edges where you're supposed to cut, so that right,
so that you get that feedback and get you get

(15:18):
the margin, I think is what they call it. Yeah, yeah,
and you know, sometimes they will even have stops in
place to tell the robot like no, you do not
move past this point, please and thank you, which makes
a lot of sense. And then uh, doctor, supervised robotic
surgery requires just so much of that scanning and programming beforehand.
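To make that "margin" idea a bit more concrete, here is a minimal sketch of the kind of proximity check that could drive that feedback. It's illustrative only, not how the da Vinci or any real surgical system is implemented; the boundary points, units, and thresholds are all invented.

```python
import math

# Hypothetical no-go boundary around healthy tissue, as 3D points (millimeters).
# A real system would derive something like this from pre-operative imaging.
NO_GO_BOUNDARY = [(10.0, 4.0, 2.0), (10.5, 4.2, 2.1), (11.0, 4.5, 2.3)]

WARN_DISTANCE_MM = 3.0   # start warning the surgeon here (made-up value)
STOP_DISTANCE_MM = 1.0   # refuse to move any closer (made-up value)

def distance(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def check_tool_tip(tip):
    """Return 'ok', 'warn', or 'stop' based on how close the tool tip is
    to the nearest point on the no-go boundary."""
    nearest = min(distance(tip, p) for p in NO_GO_BOUNDARY)
    if nearest <= STOP_DISTANCE_MM:
        return "stop"   # hard stop: ignore further motion toward the boundary
    if nearest <= WARN_DISTANCE_MM:
        return "warn"   # haptic or visual feedback to the surgeon
    return "ok"

if __name__ == "__main__":
    print(check_tool_tip((20.0, 10.0, 5.0)))  # ok
    print(check_tool_tip((12.0, 5.0, 2.5)))   # warn
    print(check_tool_tip((10.2, 4.1, 2.0)))   # stop
```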
And you know, like we've just been saying, these machines

(15:41):
are not... the computer programs really aren't complex enough to
stop or change plans if something goes wrong. So there's
just an element of risk involved. Yeah, yeah, I
think a lot of the questions about robot surgery are
also involved sort of in the idea we discussed not
too long ago about what happens if a robot breaks
the law. The underlying question is who's responsible for the

(16:03):
autonomous actions of a robot? Uh, And so people probably
just aren't yet comfortable with the idea of giving a
robot full free rein, even when it might do as
well as a human in practice. Yeah. Even scarier than
a pushing robot is a robot that's already performing laparoscopic
surgery on your heart. But the concept

(16:26):
is definitely awesome. You know, using tiny endoscopic cameras, a
robot can show a surgical team in two D or
even three D what's going on inside of a patient.
And and robots can make such small, controlled movements, um
so consistently, you know, without fatigue, no hand shaking, right right,
you know, some surgeries take hours like half a day

(16:48):
even And and robots don't get tired and don't get
shaky hands when they miss their three p.m. snack time,
right exactly. And in that way, this really is a
lot more like the idea of a robotic avatar than
some of the other things we might consider, because the
tools are becoming extensions of the doctor's hands, right and
and they are enhancing the doctor's already formidable experience

(17:12):
and abilities to the point where the doctor is capable
of carrying out moves that may not have been possible
under his or her own normal, you know, human motions.
So it also one thing that this Another huge benefit
this gives us is minimally invasive surgery where you don't
have to do the the you know, you may not

(17:34):
have to make a big incision in order to perform
some of these surgical procedures because you're able to make
these very precise movements with tools that can go into
a relatively small uh cut. And that also means that
you have a faster healing time and less chance for infections.
So there are other benefits as well, right, and so

(17:55):
the question of have we ever used anything like a
robotic avatar in space exploration is a little different
though than the kind of examples we've talked about before.
So if you are controlling a bomb disposal robot or
a robotic surgery tool like the Da Vinci robot, or
even a drone that's you know, pretty far away, it's

(18:17):
still close enough that you can have direct interaction and control.
Um it might not be so like fully immersive, like
you're becoming the robot. But there's some kind of analogy
between that and having a robotic avatar. Space exploration is
a little different for a few reasons, right, Yeah. Yeah.
So one of those is that if we look at
the history of using robots in space, you could argue

(18:40):
that things like probes kind of sort of fit the
bill in the sense that they have sensors on them
that extend our ability to look or whatever into space,
depending upon the sensors. Uh. They typically have an antenna
that will allow them to transmit information back to Earth
and occasionally receive messages saying, hey, you need to do

(19:01):
a small course correction. That that's about the extent of
the interaction between probe and Earth. It's mostly a data
gathering system. There's a lot of autonomy and disconnectedness. Yeah.
Rovers however, those are much more recent. Uh, they date
really back to the nineteen seventies when the Lunokhod

(19:21):
rover was launched by the then Soviet Union to the Moon.
Um That was a remote controlled rover equipped with a
camera that sent back images of the Moon. The first
unmanned rover to land successfully on Mars was Sojourner
in 1997, which was carried aboard the Mars Pathfinder spacecraft, and
we've since used a few other rovers to explore Mars

(19:43):
UH and also we've used, you know, rovers to
look at other bodies as well. However,
the further out we get, the more we're no longer
talking about real time, and that's the big change between
the ones we were talking about earlier, the ones where
we're on the same planet as the robots. So the
lag time and communication tends to be either undetectable or

(20:06):
so short that we can adapt to it. Sure, sure,
a few seconds is adaptable. But you're talking about the
fourteen-minute lag to get a message back and
forth to Mars. Or is it? It can be even
longer than that. It's fourteen minutes... it was fourteen minutes
when the Curiosity rover landed, to get a message
one way, but that was just because of the positions

(20:28):
of Earth and Mars. It can actually be much longer,
because Mars and Earth can be on opposite
sides of the Sun, and to bounce a
message back is obviously twice the going rate.
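For a rough sense of those numbers, here's a quick back-of-the-envelope calculation of the one-way light delay between Earth and Mars. The distances are approximate round figures, not values from the episode.

```python
SPEED_OF_LIGHT_KM_S = 299_792.458

# Approximate Earth-Mars distances (kilometers).
CLOSEST_APPROACH_KM = 54.6e6   # near a close opposition
FARTHEST_KM = 401e6            # roughly, with the planets on opposite sides of the Sun

def one_way_delay_minutes(distance_km):
    """One-way light travel time, in minutes, over the given distance."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 60.0

for label, d in [("closest", CLOSEST_APPROACH_KM), ("farthest", FARTHEST_KM)]:
    print(f"{label}: one-way {one_way_delay_minutes(d):.1f} min, "
          f"round trip {2 * one_way_delay_minutes(d):.1f} min")
# closest:  one-way ~3.0 min, round trip ~6.1 min
# farthest: one-way ~22.3 min, round trip ~44.6 min
```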
So yeah, twenty eight minutes between um, your your robot
and yourself is a really long time. You're like, all right,

(20:49):
I'm driving towards this cliff. Oh crap, Like, oh I
fell off that cliff half an hour ago. I wasn't
aware of it. Yeah, um yeah, that's it's actually a
real issue. So we'll talk a little bit more about
the latency and lag in just a second. But before
we do that, let's talk about another thing that's really cool.

(21:09):
We have a story about something that is much closer
to having a robotic avatar than any of these. I mean,
its actual purpose is to have that sense of
ability to move around in an environment on
behalf of someone. Right. Well, a true robotic avatar,
kind of like you've seen I don't know, in the

(21:30):
movie Avatar or something. Which one is that, the one
with M. Night Shyamalan, the little kid who
can control... you can stop talking. Air. Uh So, in
any of these movies where you have somebody who assumes
sort of a robot body, like they plug their brain
into something and now they're controlling a robot, that's sort

(21:52):
of the idea of you know, you have a robot
avatar acts on your behalf, you sort of become the
robot mentally. Uh, nothing like that exists today, but people
are working on things like that. For example, robots that
are sort of on one hand, giving you feedback through
at least visual kind of screen information. Ideally it could

(22:14):
be some kind of virtual reality or augmented reality feedback.
But then you can also control them with your mind. Yeah.
So this is referencing a really cool report that we
saw in io9. Researchers at the CNRS-AIST
Joint Robotics Laboratory and

(22:35):
the CNRS-LIRMM
Interactive Digital Human Group. They need to work on some
of those easily pronounceable acronyms. LIRMM, it's a great acronym.
What are you talking about? AIST and LIRMM, which sound
like the aliens from Futurama. So they, these two groups

(22:55):
have collaborated on a project that could allow for thought
controlled robotics. And there have been lots of experiments in
recent years about thought controlled robotics. Yeah, exactly. Uh, some
of which require invasive surgery in order to implant electrodes,
some of which use more uh more of a contact
thing like an EEG cap, right exactly. So typically

(23:16):
what we see today is that you can do a
lot more with surgical implants, but the goal that people
are working on is to try to be able to
have significant thought control for a brain-computer interface
with noninvasive methods like an EEG cap on your head,
so you don't have to get surgery if you want
to control a robot. Right, it's a lot easier sell to say, like, hey,

(23:37):
you might want to shave a little patch on your
head than it is to say like, hey, can we
drill a hole in your skull? We'll fix it, we promise. Yeah? Yeah.
It is definitely a big leap there, right, So this
is... this particular one they're calling robotic re-embodiment, and uh, again,
the person who is controlling the robot wears an
EEG cap to send commands to the robot. So there's

(23:59):
a training process obviously where you have to both,
you know, train the robot how to respond to
certain stimuli that are created by the thoughts you know
that they're converted into electrical signals, and then you also
have to train the person who's wearing the caps so
that they are in fact concentrating on whatever whatever task

(24:20):
they want the robot to do. Now, this particular approach
is intended for people who have suffered paralysis. So again
the idea of giving them some more control over their
lives by having this robot be able to do things
on their behalf, by them actually controlling it with their thoughts.
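As a very loose sketch of what that kind of control loop could look like, here is some illustrative code. This is not the actual CNRS-AIST/LIRMM system; the intent labels, the stubbed-out EEG classifier, and the robot behaviors are all made up, just to show the idea of a few trained mental commands being turned into higher-level robot actions.

```python
import random

# A handful of intents the user has been trained to produce, and that a
# classifier has been trained to recognize from EEG features (hypothetical).
INTENTS = ["go_to_target", "grasp_object", "stop"]

def decode_intent(eeg_features):
    """Stand-in for a trained EEG classifier. A real brain-computer interface
    would map band-power or evoked-potential features to one of a few classes;
    here we just pick pseudo-randomly to keep the sketch self-contained."""
    random.seed(sum(eeg_features))        # deterministic for a given input
    return random.choice(INTENTS)

def execute(intent, robot_state):
    """High-level behaviors the robot carries out on its own, so the user
    never has to think 'lift the left leg, set it down.'"""
    if intent == "go_to_target":
        robot_state["position"] = robot_state["target"]   # path planning elided
    elif intent == "grasp_object":
        robot_state["holding"] = "glass"                  # grasp planning elided
    elif intent == "stop":
        robot_state["moving"] = False
    return robot_state

if __name__ == "__main__":
    state = {"position": (0, 0), "target": (5, 2), "holding": None, "moving": True}
    fake_features = [0.42, 1.7, 0.03]     # placeholder for real EEG band powers
    intent = decode_intent(fake_features)
    print(intent, execute(intent, state))
```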
And uh, there's even an AI component to it, which

(24:42):
makes sense because otherwise the person wearing the cap would
have to do everything to control the robot. And we
haven't reached that point where we can feel like our
brains are actually inside this robotic body. The robotic body
is really just an extension of our own abilities. Yeah,
and it's usually very simple commands. Yeah, our EEG
reading is not that good. Yeah, So for something like

(25:05):
walk down the hall, you might you know, you're looking
at a screen that's showing you what the robot quote
unquote can see based upon cameras that are in the robot. Uh,
and the picture that they had was of a humanoid robot,
so that was pretty cool. You could look at say
a point down the hall, and the AI would take
over and walk the robot for you down the hallway,

(25:27):
so you didn't have to think, all right, now I
have to lift the left leg of the robot and
set it down. It would do all of that automatically,
which makes sense. So it's marrying remote control
with AI, some autonomous behavior, so it can
complete certain tasks. But then other things you could, you know,
think I want to pick up that glass, and the

(25:48):
robot would, presumably if it's everything's working properly, reach out
an arm, pick up the thing you wanted and bring
it to you. Another thing in that realm that's obviously
much simpler is telepresence robots. Yeah. I mean these are
almost when you when you see how some of them
are implemented, it's kind of like, oh, that counts as
a robot. Yeah, but it does. I mean imagine like,

(26:10):
for example, an iPad doing FaceTime on top of
a scooter that scoots around like a little Segway. I mean,
that's a telepresence robot. And I've seen those. I mean,
you know, some of them look like... it looks
like a handle that's connected
to two wheels, so it doesn't even look like it's
a Segway-type thing, but the handle has a

(26:33):
frame into which you can put a tablet like an iPad.
You run the software which allows for two way communications.
So the person who wants to control the robot would
use their own mobile device or computer or whatever, which
would have a camera trained on them, so their face
would show up in that iPad. You would have a

(26:53):
little framed picture of whomever is using it. Like I
use the example of your boss, just in general. Your
boss, who goes and travels a lot, uses this to
check up on employees and could control the movements of
the robot remotely. So you would have your own little
interface on your device. If you're the boss, you would have
some sort of controls to guide where the robot could go,

(27:16):
and you would have a view from the camera, the
forward-facing camera on whatever tablet or whatever device you
have plugged into that. And then you could roll up
and interrupt people at work and ask them what they're
doing and make sure that they're being you know, productive
and not getting on Facebook for the fourth time
in a row or whatever, or you know, you could

(27:36):
go up to the water cooler and participate in a conversation. Yeah,
not participate in the water. That would be
a bad choice for a robot. Can't wait till somebody
comes up with the great hacks for these things, where
they can make it seem like you're working through your
telepresence robot because it plays video of you, like saying, Hi,

(28:00):
how are you doing as it rolls around the office,
But really you're out bowling well. And also, I mean
you can you can really defy the expectations of your
robo boss by simply walking into a room that has
a door and closing it behind you, because this is
a robot that's literally on wheels and nothing else. So

(28:21):
I read about these... Actually, they can knock on a
door just by ramming into the door. Ah. Yeah, the
other explanations I have seen have suggested that perhaps, uh,
like, really forward-thinking offices, not our
forward-thinking office, but in general, could have automatic doors installed

(28:42):
so that this isn't a problem. Yeah, there's a little motion sensor
and then it opens up automatically, and then your
robo boss rolls on in to chat with you.
We're ruining all of Joe's dreams by suggesting this
right now; he much prefers just the picture of our
boss ramming the door repeatedly, trying to get it open. At any rate.
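For what it's worth, the control side of a telepresence robot like that can be sketched very simply: the pilot's app streams small drive commands one way while video comes back the other. The message format below is invented for illustration, not any particular vendor's API.

```python
import json
import time

def drive_command(linear_m_s, angular_rad_s):
    """Package a single drive command the pilot's app would send to the base.
    Field names and units are made up for illustration."""
    return json.dumps({
        "type": "drive",
        "linear_m_s": linear_m_s,        # forward/backward speed
        "angular_rad_s": angular_rad_s,  # turn rate
        "timestamp": time.time(),        # lets the base drop stale commands
    })

def should_apply(command_json, max_age_s=0.5):
    """On the robot: ignore commands older than max_age_s, so a dropped
    connection leaves the base stopped instead of driving blindly."""
    cmd = json.loads(command_json)
    return (time.time() - cmd["timestamp"]) <= max_age_s

if __name__ == "__main__":
    msg = drive_command(0.4, 0.0)   # roll forward slowly toward the water cooler
    print(msg)
    print("apply now?", should_apply(msg))
```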

(29:06):
This telepresence approach is something that already exists. Again, it
doesn't it doesn't give you, uh, the the experience that
Chris was talking about. Obviously. Well, part of that is
the fact that our virtual reality or augmented reality systems
are not up to snuff enough to to make us
feel like we are that robot. Yeah, to embody a

(29:28):
robot, or to embody yourself within a robot? What would
be the best way to say that? I don't know,
how to become a robot? To become the robot, you
really do need some kind of VR. Like, FaceTime
is not good enough. Yeah, yeah, I mean that's... VR
is one of those things. It's funny because the earliest
some of the earliest implementations of virtual reality were uh

(29:49):
were military implementations, and they were to give people a
better view of what was around, uh, whatever vehicle they
might have been in. So pilots were using this, for example,
in order to get a good view underneath the aircraft
if they had to drop something off at a particular target,
or you know, people inside an armored vehicle. Obviously, the

(30:14):
armored vehicle is an important element of keeping personnel safe
and if you were to I don't know, put a
lot of windows in it, it becomes less safe. So
what you do is you end up mounting cameras on
the actual armored vehicle that point out in different directions,
and then you can have a head mounted display and

(30:36):
by turning your head, it actually automatically signals which camera
view you start to get and and can access them
in you know, hypothetically in three sixty degrees around. Yeah,
I've seen some really cool implementations of this where it
has that sort of seamless overlapping technology. It's kind of
it's it's not that different from stitching together photographs to

(30:57):
make a panoramic image, except it's doing it in video,
which is kind of cool. Yeah, but these are examples
of again kind of marrying that idea of uh a
mechanical presence and allowing you the ability to perceive as
if you were that thing, because it's not. It's not

(31:19):
your view of your surroundings. Your view of
the surroundings is the interior of that vehicle. It's the
vehicle's view within its surroundings. So that's
kind of you know, you extend that to a robot,
like a humanoid robot, and you could say, oh,
that would totally work if I have a head-mounted
display with head tracking, and when I turn my head,
the robot changes its perspective, whether it turns a head or

(31:42):
just a camera rotates. Exactly. Yeah, that's kind of one
argument for the fact that if we were to try
to create robotic avatars, they should probably be basically humanoid robots.
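A tiny sketch of that head-tracking idea, assuming a ring of fixed cameras; the camera layout and angles are invented, and a real system would blend or stitch neighboring feeds rather than hard-switch between them.

```python
# Hypothetical ring of cameras mounted around a vehicle or robot head,
# identified by the yaw angle (degrees) each one points toward.
CAMERA_YAWS_DEG = [0, 60, 120, 180, 240, 300]

def camera_for_head_yaw(head_yaw_deg):
    """Pick the camera whose pointing direction is closest to where the
    operator's head is facing. A real system would cross-fade or stitch
    neighboring feeds instead of snapping between them."""
    head_yaw_deg %= 360

    def angular_gap(cam_yaw):
        diff = abs(cam_yaw - head_yaw_deg) % 360
        return min(diff, 360 - diff)

    return min(CAMERA_YAWS_DEG, key=angular_gap)

if __name__ == "__main__":
    for yaw in (10, 95, 185, 350):
        print(f"head at {yaw:3d} deg -> camera facing {camera_for_head_yaw(yaw)} deg")
```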
Yeah, because you're not going to be able to, you know,
inhabit the headspace of something that's not at least sort
of shaped like a human. The further away you get

(32:03):
from the human shape, the more effort it's going to
take on the part of the human controlling it to uh,
to do so in a way that feels natural. Now,
it's not... and that's... I don't think that's an insurmountable challenge,
and we'll get into that a little bit. But I
think it's probably easier in the long run to get
human pilots used to controlling a roving, a rolling

(32:27):
robot than it is to create a bipedal one. Yeah.
Building a robot that's humanoid, that is capable of doing
things like I was thinking of picking itself up when
it fell over. Really that's a big challenge. Those Boston
Dynamics people are Yeah, those robots, those robots are like

(32:50):
four-legged. No, no, they're working on two-legged robots.
That's true. There is the one that was running on
the treadmill. Well, at any rate, let's talk about some
of those challenges. Yeah, well one of the big ones obviously,
if we're so. So, what did Chris ask about Because
we were talking about robotic avatars specifically mentioned for space exploration,
right right, and Chris had said, uh, perhaps we would

(33:14):
be able to use something like quantum communication to get
around the lag and latency issues we have. So, for example,
the Curiosity rover, when we were landing that...
we, as in when the team was landing it; I
had nothing to do with it. I was covering it,
which was cool, but I had no direct involvement. When
the team was landing it, we were observing the Curiosity

(33:36):
Rover's landing. We were doing that knowing that the events
we were watching had already happened. Right, That had taken that
fourteen minutes for information to get back to us, And
technically we're looking at what happened fourteen minutes ago. So
there was a time where the robot was either safe
on the surface of Mars or had crashed and was uh,

(33:57):
you know, completely lost to us, and we had no
way of knowing. We just had to wait for that
time to catch up so that we would find out
what had happened, which is kind of crazy when you
think about it, You're like, oh, this is an event
that had occurred one way or another, but we had
to wait until we until the information could get to
us before we learned it. Very briefly, we had Schrödinger's rover. Yeah,

(34:18):
that's true. It kind of really was Schrödinger's rover.
So that that's a great way of saying. What about
quantum communication? Now, I'm not entirely certain what Chris was
trying to say because there wasn't a full explanation of it.
Uh So I might be misinterpreting what Chris was saying,
and if so, I do apologize. However, I gotta address

(34:41):
this in that the way we use the term quantum
communication does not mean instantaneous communication, which seemed to be
what Chris was saying. But I'm not entirely certain, and
this is partially just a terminology thing, but but yeah, so,
so what does quantum communication mean? All right. Now, generally
that refers to using the principles of quantum physics, which

(35:02):
we recently talked about in our Random Number Generator episode UH,
where they're using quantum physics to create, like, truly random numbers,
but at least they appear to be truly random again
based upon all of our observations right as as far
as we can tell, it's truly random, and it's to
use those to create keys for cryptographic purposes. So you

(35:26):
can be assured that any messages you send to someone
using this methodology are completely secure across the channel as
long as that session is active, but it doesn't speed
up that transmission. That transmission itself still takes place over
more classical communication media, so you're limited by the speed
of light. Ultimately, you can't go faster than that. We're

(35:46):
pretty fast, it's but over long distances. It's not really
fast enough, and it's not instantaneous yet we can't. We
can't use quantum communication for really long distances. We're limited
by the actual limitations of quantum mechanics. In this case,
the truly optimistic believe that maybe we're talking five hundred

(36:07):
kilometers range max. UH. In real life, we're talking closer
to a hundred fifty to two hundred kilometers or so being worked
on now. And I would point out that, although
longer than a car, that is still shorter than the distance
to other solar systems. Yeah, significantly. Not interstellar range here. Um.

(36:30):
You know. One idea that this might be related to, though,
is the one of quantum entanglement, and that might be
what Chris was referring to. And again, if I'm wrong,
I do apologize, But there has been discussion of quantum entanglement,
which is a truly odd thing, you know, a concept
to us on the... it's spooky action at a distance,

(36:51):
as a certain Albert Einstein would have said,
and did say. In fact, not would have,
he did. Uh. So the concept here is
a little mind bending, right, So you've got I think,
also sometimes misunderstood. It can very easily be misunderstood. So
it's kind of weird. Yeah. Yeah. So so on the

(37:12):
quantum level, you can get these quantum particles, sub atomic
particles that are entangled with one another, so their states
correlate with one another. They don't, they don't necessarily match,
but if you know something about one, you automatically know
something about the other one. Right, exactly. So you know,
there are a lot of different quantum states we could
talk about, like the polarization of light or the spin

(37:34):
of an electron. Let's go with spin. So let's say
that you could describe spin as being either up or down.
That's just two directions that we could talk about, but well,
we'll limit it for this purpose. And uh, if you
have two entangled electrons, one is spinning and you measure
one and it's spinning up, you know that the other one,
because it's entangled, is spinning down. It's the opposite direction.

(37:56):
You know that because of the fact that they are entangled. Now, uh,
here's where it gets kind of kind of crazy. These
two particles will remain entangled no matter how far apart
they are. So you could take those two entangled particles,
take one to one side of the galaxy, the other

(38:17):
to the other side of the galaxy. They're separated by
the entire Milky Way, and if you measure one, you
know what the other one state was at the moment
that you measured yours because they were entangled. So some
people have suggested that this is faster-than-light communication,
But if you really stretch your mind, you realize that

(38:38):
locality has not necessarily been violated, because really,
you already knew that the two were entangled, right?
You already knew that, and by measuring it, all you've
done is determined what state they were in at that moment.
And if you wanted to try and do an experiment

(38:59):
where you communicated something some way using this, it all
falls apart. So let's talk about an example, because otherwise
this is going to get super super complicated, and then
I'll talk about some of the arguments about quantum entanglement
because it's not a settled issue at all. Uh. I
want to use this example though. All right, So I

(39:20):
give Lauren an electron. Don't say I never gave you anything.
I give Joe an electron that's entangled with Lauren's electron.
That's pretty cool, Yeah, always putting burdens on me. I
ship you guys to opposite ends of the galaxy, and at

(39:42):
this point I am no longer involved in the in
the communication of the two of them, however, So so
Lauren decides she's going to measure her electron see which
way it's spinning. She sees that spinning up, she knows
that Joe's electron therefore is spinning down. Now, Joe, oh,
if you were to measure yours and you saw it
was spinning down, you would know that Laurence was spinning up.

(40:04):
But you haven't communicated anything yet. Yeah, And what we
really want to say to each other at this point
is like, man, Jonathan sucks. Right, So let's say that
Lauren sends the message man Jonathan sucks along with Oh.
By the way, your electron is spinning down, it would
have to go across classical communications channels, thus traveling at
at best the speed of light across the galaxy. So

(40:26):
even though you could measure the electron and know what
state Joe's electron was in, the actual message about that
information would still be limited by the speed of light.
If you wanted to change the spin of your electron,
as if you could, like, hey, Joe, whenever your electron

(40:46):
is spinning down, it means this, and whenever it's spinning
up it means this. I'll control my electron from over here.
You just watch yours. Everything will be fine. If you
try to change the state of your electron, Lauren, the entanglement breaks,
so Joe could still be observing the electron, and
that spin could still be changing, but it would
no longer be connected to the state of Lauren's electron.
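Here's a toy simulation of that point. It's a cartoon of the spin example, not real quantum mechanics (no measurement bases, no Bell tests): the two measurement records are perfectly anti-correlated, but each side on its own is just a fair coin flip, which is why no message rides along with the correlation.

```python
import random

def entangled_pair():
    """Toy model: when measured, one electron reads 'up' and its partner reads
    'down'. Which one gets which is random. (Real entanglement involves
    superposition and measurement bases; this only captures the correlation.)"""
    lauren = random.choice(["up", "down"])
    joe = "down" if lauren == "up" else "up"
    return lauren, joe

if __name__ == "__main__":
    random.seed(0)
    trials = [entangled_pair() for _ in range(100_000)]
    always_opposite = all(l != j for l, j in trials)
    joe_up_fraction = sum(j == "up" for _, j in trials) / len(trials)
    print("always opposite:", always_opposite)              # True
    print("Joe's 'up' fraction:", round(joe_up_fraction, 3))  # ~0.5
    # Joe's results alone look like fair coin flips no matter what Lauren does,
    # so there is nothing in his data by itself that carries a message.
```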

(41:08):
It would just be random. So it's no longer in superposition;
it has been affected by a macro-scale system. And
now Joe would be thinking that Lauren's talking crazy talk,
because it would just be random messages based upon whatever
code you guys had worked out previously. So so there
is no way to send the message "man, Jonathan sucks"
through quantum entanglement, as far as we understand

(41:30):
right now. However, that being said, this is far from
a settled matter in quantum physics. Yeah, so first we've
got... So you're admitting everything you just said is wrong.
I'm admitting that everything I said could be wrong, man.
I am allowing for... First of all,
I accept the burden of proper scientific humility,

(41:55):
so it could be wrong. So, first of all, this
was an idea that Einstein hated. He did not like the
idea of particles being able to uh be entangled with
one another, and that this could potentially lead to something
like faster-than-light communication. He called it spooky action
at a distance. Nature doesn't owe us liking it. That's true.
That's true, And there have been people who pointed out

(42:18):
that there are loopholes within quantum mechanics that could potentially
allow for a more classical explanation of the behaviors of
those sub atomic particles that does not involve entanglement. In
other words, we have the illusion of entanglement, but in
reality something else is happening. So for example, there there's

(42:41):
a story about how you could use an instrument to
measure uh the various uh states of quantum particles and
thus see the entanglement. But there's been
a point, as people have pointed out, that perhaps
the instrument of measurement itself has communicated the state of
one to the other using classical means. So the

(43:05):
speed of light is your limiting factor. So in other words,
we're talking about non sentient things essentially communicating with one another.
That's what happens at the quantum level, guys. So in
other words, the whole thing ends up being what some
folks would call a conspiracy. That one subatomic... Yeah,
so that one subatomic particle, once you've determined what spin

(43:26):
it is, the other one is told quote unquote that
through the instrumentation and ends up spinning the other way.
Now there's also the talk about how perhaps there could
be some set of events that creates this illusion of
the two particles being entangled with one another, and that

(43:47):
in fact, you could almost think of it in
terms of fate, that the steps you
take to make those measurements in fact determine the effect
you get. And so you see the effect of entanglement,
but the effect really isn't there. You have created it yourself.
And this is kind of a concept called setting independence,

(44:09):
Like how do you determine that your actions did not
lead to this? So there's a proposed um experiment. I
read about this in an article that I read
back in November 2014. That's kind of crazy. And here's
the here's the proposed test. You've got a particle detector
and it has different settings that you can use to

(44:29):
measure sub atomic particles, right, and if you were to
choose which setting you wanted, like quote unquote consciously choose,
you could be setting into motion the series of events
that determined that both of these particles appear to be entangled. So,
in other words, you are the cause; it's not truly entangled.

(44:50):
You have caused this to happen. So they wanted
to say, let's take all of this choice out of it.
What we're going to do is we're going to look
for the oldest light we can detect in the universe. So,
in other words, the furthest light source we can possibly
find as soon as it hits Earth for the first time.
So I don't know if they're looking at cosmic background

(45:14):
radiation or not, but what they are doing is they're
looking for, uh, bright light sources that are the
furthest away from us as possible. In other words, we've
just detected the light for the first time, we've never
detected it before, and they're looking for a couple of
different sources and uh preferably those sources have to be
far enough from each other so that their light could

(45:37):
never have touched one another before that moment, so that
means this light could not have interacted with anything before
that specific moment. And then taking basically the millisecond of
when it hits as like, if it's an even number,
then it's a zero. If it's an odd number, it's a one,

(45:58):
or something along those lines. You then determine what setting
you use on your particle detector. And the argument is that
if in fact this means that events have come together
to cause the illusion of entanglement to happen, they must
date all the way back to the Big Bang. So
either... either the particles are truly entangled, or everything

(46:22):
has been predestined from the Big Bang, or they're not
truly entangled... by that big turtle, possibly. So that's kind
of crazy. But um, you know, at any rate, even
that's one of the weirdest things we've ever talked about.
I think, Yeah, it's pretty weird. So there are also

(46:42):
some physicists who are looking into the possibility of maybe
there is some way of working around this quantum entanglement
for communications purposes. But from what I understand, the prevailing
thought in in the discipline is that it is not
likely to be possible, that it's it's really just an
interesting phenomenon and uh is perhaps not practical anyway. Yeah. Well, also,

(47:08):
I mean you have to take relativistic physics into account.
Where didn't Einstein or some other physicist experiment sort of
in their thoughts with the idea of the tachyon telephone?
Tachyons are the particles that travel faster than light,
and if you could send a signal to someone through
tachyons, apparently it would arrive before it was sent. Now,

(47:31):
Tachyons, of course, are hypothetical. We don't have evidence
that they actually exist; it's convenient for math right now.
So another challenge besides the communication one. I mean, obviously
that was huge, right because if we are to send
robots out into the galaxy to explore, then if we
are limited by the speed of light for communication purposes,

(47:54):
we're never going to get that real time experience. Well,
I want to propose a solution that I think may
have talked about a long time ago when we were
talking about space exploration earlier on in the existence or
the history of this podcast, which is sort of robotic
avatars enabled by teams of astronauts who don't exactly go

(48:14):
straight into the monster's mouth. So, so you've got to
explore the surface of some moon or or asteroid or
something like that, and you want to do it with
a robotic avatar, something that has the judgment and uh
and foresight and control of a human while having all

(48:35):
those wonderful advantages that robots provide you could, for example,
have a spacecraft orbiting this object, and then from that
spacecraft you could have a human who dons you know,
control gloves or an exoskeleton of some kind and VR
headset and then becomes that robotic probe that's down

(48:56):
on the surface and is close enough to communicate without
too much latency. And Chris actually brought that to light
as well, saying that that could be a possibility for
something like exploring Mars or even setting up a colony
for Mars in the future where you know, we talked
about the Mars one proposed plan, which we need to
do an update on, I think at some point, just

(49:17):
to talk about some of the information that's come to
light since the plan started, wonderfully sketchy information. Yeah, Yeah,
there's there's a lot to talk about. I think maybe
we do need to revisit that at some point, but
at any rate, Uh, there there's talk of there was
talk of using robots to build the habitats that the
Martian colonists would be living in. Yeah, and and this

(49:39):
this approach would make a lot of sense to being
able to have that real time control or close to
real time control of robots by using uh an orbiting
space station of some sort, or even just a spacecraft
with people on it that would then be directly controlling
the robots on the surface. That delay would be much shorter. Yeah,

(50:00):
it's a lot safer, too, that way. You don't have
human construction workers down there breathing in all of those
delicious perchlorates that would be in the good Martian soil. Yeah, that's
not great for your lungs, breathing all that in.
So we talked about licking Mars dust earlier, Yes, we did. Yeah,
that was one of the first things you said, Joe. See,

(50:22):
I think the communication challenge is probably the biggest one
that we have, especially once you start talking about
going beyond our solar system. I don't know that we're
going to solve that without some other breakthrough discovery that
allows for some faster-than-light communication. But there are
some other challenges to We need to make sure that
whatever robots we create are safe for whatever their purposes. So,

(50:45):
for the example of robots here on Earth, if we
want to have a robot like the one that was
reported on by io9,
we need to make sure that it's going to be
safe to be around, you know, as with other humans, right,
you don't want a robot that would be capable of
harming someone through uh and you know, whatever action it
may take. You know, like like if you ever go

(51:07):
to a manufacturing plant with one of those giant industrial robots,
they have these enormous areas set up around them to
prevent people from getting too close because it's deadly and
bumpers and stuff like that. Sure, sure, which is less
of a problem if you're right on Mars because no
one else is walking. Yeah. Yeah, what are you gonna do?
Shove over a Martian? I mean, it's a robot. I hope not

(51:30):
that Martian. We're going to start a world war because
Q thirty eight space modulator or whatever it was. In reality,
I would only make peace loving robots. You're the one
who had all the shoving ones. I'm just saying mine. Okay,
it's just hypothetical robots. Hypothetical robots that would probably belong
to Jonathan. All right, let's be fair. All right. So anyway, um,

(51:52):
you know, you all obviously would need to build a
robot that was going to be uh, you know, well
designed for whatever environ it it was going to go
into this kind of goes back to, you know, having
a robot with wheels as opposed to legs because it's
more stable. Yeah or uh. And it may not have
the same number of limbs that we have, So it

(52:13):
may be that it has a single articulated arm, or
it could have a whole bunch of articulated arms. The
user interface would have to be designed to allow for that, right,
depending upon how much autonomy you gave the robot versus
remote control. If you want true, the true sense of
being present in that robot, then you want as much
of that control as possible. Otherwise it feels like you're

(52:34):
just a passive audience member um watching a movie or something,
as opposed to the person who's actually making things happen um.
And we'd have to make an interface that makes sense
based upon what the robot is capable of. The robot
is able to gather a lot more information than our
human senses can detect. We have to have that information
presented in a way that makes sense to us, the

(52:56):
human operator. Right, because we can't see in every spectrum,
or we can't hear certain noises. Do we have that
converted into what we can hear? How do we indicate
that it would normally be beyond our range of sensing?
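One simple way to think about that conversion step is plain range remapping: take a reading the operator can't sense directly and map it onto something they can, like an audible pitch. The sensor and output ranges below are made up for illustration.

```python
def remap(value, in_low, in_high, out_low, out_high):
    """Linearly map value from [in_low, in_high] into [out_low, out_high],
    clamping at the ends."""
    value = max(in_low, min(in_high, value))
    fraction = (value - in_low) / (in_high - in_low)
    return out_low + fraction * (out_high - out_low)

def radiation_to_tone_hz(microsieverts_per_hour):
    """Hypothetical example: turn a radiation reading the operator can't feel
    into an audible tone between 200 Hz and 2000 Hz."""
    return remap(microsieverts_per_hour, 0.0, 100.0, 200.0, 2000.0)

if __name__ == "__main__":
    for reading in (0.1, 25.0, 100.0):
        print(f"{reading:6.1f} uSv/h -> {radiation_to_tone_hz(reading):6.0f} Hz")
```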
These are little questions that we have to answer in
order for this to make sense. And then, of course
we just have to train people how to use these

(53:16):
robots whenever we do develop them, and that's probably the
easiest, because humans are pretty plastic with their brains, right, yeah, yeah,
And you know, just just looking at the different number
of video games that any given human can play, I'd
say that we're pretty capable of moving through an environment
with an avatar that does not move exactly how... Yeah,

(53:37):
we do. I agree. Yeah, that's a good example, although
it does mean that whenever you switch from one type
of robot to another, you've got to have that, you know,
five to ten minutes of, oh right, B in
this one means that I end up shocking a person
as opposed to handing them the cup of hot cocoa
I made them. That would be important to Yeah, and
the left shoulder trigger in this one isn't poke the button,

(53:59):
it's fire a rocket launcher. That's good to remember.
Good to remember. I hope whenever they design robotic avatars
they don't have any controls that rely on joystick clicks.
I mean, how does that work? How do you sprint?
Click the left thumbstick. Oh well, that wraps

(54:19):
up this discussion. I want to thank Chris for that
amazing... It was really a phenomenal, phenomenal message, and I
wish we could have read the whole thing, but
we would have had to have a second episode. Uh,
but I do welcome all those kinds of messages to
come on in because it's fantastic to hear from you.
We love knowing what you guys want to hear more about,

(54:41):
and uh, we welcome that kind of message. So you
can send us an email. That email address is FWThinking
at HowStuffWorks dot com, or you can drop
us a line on Facebook, Twitter, or Google Plus. On
Twitter and Google Plus, we are FWThinking. Over on Facebook,
just search FW Thinking in the search bar and we'll
pop right up. Leave us a message, and we'll talk

(55:03):
to you again really soon. For more on this topic
and the future of technology, visit FW Thinking dot com.
Brought to you by Toyota. Let's go places.
