Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Welcome to Stuff to Blow your Mind from how Stuff
Works dot com. Hey, welcome to Stuff to Blow your Mind.
My name is Robert Lamb and I'm Julie Douglas. And
as of this recording, the robot singularity has probably not occurred.
Mm hmmm as far as we know. As far as
(00:25):
we know, but we're not really sure how to define that, right, right,
It's kind of an elusive term. I mean, roughly, we're talking about computers and robots reaching the level where they not only meet us on the same mental plane, but exceed us and become greater than us,
and then various scenarios could conceivably play out.
(00:49):
People really like to have fun with the darker images: they just leave us in the dust, like literally, yeah, they take over the world, or they just decide to destroy all humans, or our lives become so robotic and so enhanced by robotics that the human part of our life becomes kind of
(01:11):
a core to it, but not a necessity of it. Well, I think
I love talking about the singularity because it brings up so many different points of view. You and I have talked about it plenty, and I always feel like you have this very positive outlook on the Singularity, whereas sometimes I get paranoid and I think,
(01:31):
oh no, we're going to be taken over. Those are the days when you don't even come into work and we have to call and see where you are, because it's the Singularity and the anxiety is taking over. Yes, those are the days when
I've locked myself in my bunker. Yeah. Whereas I tend
to look at it like, you know, if the coffee machines become as smart as humans, then we're just gonna have really great coffee. I mean, that's... See,
(01:54):
I love that. That's the sort of positive pluck that I'm talking about here. But in that context, you know, it does seem like, okay, this is something that could be far away, or it could be something that's really near. We don't know, but robots are having a fundamental impact on our lives in myriad ways, right? Yeah.
I remember I was looking into some information about recruiting
(02:17):
into computing programs and they pointed out that there's virtually
no field of study anymore that does not contain programming
and computerization. And it's going to get to where there's going to be no part of our society that isn't touched by robotics in some way, shape or form, whether robotics or really high-end programming, or us even being touched by robots, right?
(02:40):
I mean, we've talked about Roxxxy the sex bot, literally touched by it. Well, I don't know if the robot is doing the touching in this situation; the robot is being touched, I think. Oh, but they are, I'm sure, very soon there will be touching. Yeah, but it's very hard to argue it's consensual. That's true, unless it's programmed, I suppose. Yeah, that's a whole other thing. And we will get to that, definitely.
(03:03):
But again, thinking about robots and how they're being used
in day-to-day life, we actually turned to Dr Ronald Arkin. He's the Regents' Professor in the School of Interactive Computing and the director of the Mobile Robot Laboratory at the Georgia Institute of Technology. Yeah, it's not a
mobile lab though. It doesn't you know, it's not like
(03:23):
it doesn't break away from the rest of the building or anything and go gallivanting around Atlanta. No, no, but
he and his team, along with research engineer Alan Wagner,
they've blazed the way when it comes to robotics and
in particular thinking about robotic ethics, and they have created
(03:44):
a robot or robots that can deceive. Yeah, And it
all boils down to an algorithm, or more specifically a
cognitive deception modeling algorithm, or a series of algorithms, and it involves not only giving the robot
the capacity to fool another robot or a human, but
(04:06):
giving it the ability to decide when to fool another human.
Because that's the important thing: we all have the ability as humans, the ability to deceive, the ability to lie. And one of the core things we have to decide in life is when
to tell the truth and when to lie. And if
you're out of balance on that, life can
get pretty messy and nobody will want to hang out
(04:28):
with you, either because you never coat anything for easier consumption, you never, you know, dish out a white lie or two, or because you just lie all the time and are kind of a jerk. Yeah. So I mean, we wouldn't want to create jerk robots. But the problem is that robots, in their simple form, are robotic; they're not gonna
(04:49):
coat anything to make it go down easier. Yeah. So it is really interesting: how do you do this? What is deception, anyway, even when you're talking about a robot? And so we actually talked to
Dr Arkin about this, and this is what he had
to say about deception and what it is defined as
(05:10):
in robotics. It's actually a definition that we borrowed from
cognitive science, which is a false communication that tends to
benefit the communicator. So the important thing is that you
will benefit by saying something that is untrue. Now, in the press this has often been referred to as lying robots and the like as well, too. It's a false communication.
(05:32):
Is lying leaving a trail on the ground? I don't know; that's a verbal act, and I could argue this is more expansive than simply lying. In this case, it's basically trying to create a deception, to make someone believe something that isn't true and act in a certain way. The work that we did for the military was
(05:53):
more a fundamental question of what deception is and how it could be employed. So it's not solely writing up lookup tables and saying these are the sets of actions,
as I think I mentioned. If I didn't, there's an
entire field manual for the U.S. Army on deception and deceptive techniques, because it's important, it's crucially important in an
(06:14):
appropriate way to conduct warfare. But this isn't just saying do this in this set of circumstances, do that in that set of circumstances. This is: I have to understand you. Speaking as the robot now, I have to understand you and learn what your potential outcomes are.
And one of the other interesting factors in that paper,
(06:34):
which I was about to mention as well too, is
that it was observed, at least in this narrow case
that we studied, that the more sensors you have, the easier it is to deceive you, which was not intuitively obvious. You think, gosh, you know, if
I have more ways to understand what's around me, I
would be more likely to get the right action. But
(06:58):
deception works in this particular case, it appears, due to the fact that we can exploit multiple channels of information. You know,
hearing that, I can't help but be reminded of Isaac Asimov.
He of course had the classic book I, Robot, which is a series of short stories about robots, and they're very analytical. They're all about, you know,
(07:20):
robots obeying the Three Laws of Robotics, and humans trying to figure out why the robots are behaving the way they are. And there's a story in there called Little Lost Robot, in which a researcher loses his temper, swears at a particular robot and tells it to get lost, and it does so, and then the chief robopsychologist, Dr. Susan Calvin, comes in to find it. So I
(07:43):
couldn't help but be reminded of that. It's a fairly lighthearted tale. Yeah. Actually, that's interesting, because just
in the context of the experiment that Arkin and Wagner did,
they used interdependence theory and game theory to develop the
algorithms that tested the value of deception in a specific situation,
(08:04):
and the situation had to satisfy two key conditions to warrant deception: there has to be a conflict between the deceiving robot and the seeker, and the deceiver must benefit from the deception. So what we're talking about here is like an elaborate hide-and-seek game, or a not-so-elaborate one, essentially, to test their algorithms. They ran twenty hide
(08:25):
and seek experiments with two autonomous robots, and they had
colored markers which they lined up along three potential pathways
to locations where the robot could hide. And so the
hider robot randomly selected a hiding location from the three
location choices and then moved toward that location, knocking down
colored markers along the way. So once it reached a
point past the markers, the robot changed course and then
(08:47):
hid in one of the other two locations, and the presence or absence of standing markers indicated the hider's location to the seeker robot. Okay, well, this is like a very basic form of deception. We
see this in movies all the time where the hero
is running through an office building trying to escape somebody.
So what does he do? He or she
runs to this door at the end of the hallway,
(09:07):
opens it real wide, lets it shut, but then goes off in another direction. The guys who are chasing him come around the corner and see the door slamming.
They're like, ah, they went that way, chase them. Or
it's like if you're in the forest and you're making your way past these branches: if you started snapping branches along one direction and then backtracked and went somewhere else, then your pursuers would
(09:30):
come through and they'd say, well, look, clearly they went
this way because this is where the branches are snapped off. Right, right,
It seems basic, right? I mean, and it is, in the physical sense that we think of. But think about all of that information, all that data that you have to absorb, and then the choices that you have to make on it. We take that for granted with our brains, but trying to program
(09:50):
that in a robot is particularly interesting and quite a challenge. And so what seems rudimentary now, and is rudimentary in terms of robotics, we know will become much more nuanced later on. And so that's why a lot of people are looking at this deception, even if it is at this sort of hide-and-seek level, and saying, wow, okay, you can do that in robots.
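To make that concrete, here is a minimal sketch in Python of the decision just described: check the two conditions that warrant deception (a conflict with the seeker, and a benefit to the deceiver), and if they hold, knock down markers along a decoy path while actually hiding somewhere else. This is our illustration, not Arkin and Wagner's actual code; the path names and the benefit value are invented for the example.

```python
import random

# The three marked pathways from the hide-and-seek experiments.
PATHS = ["left", "center", "right"]

def deception_warranted(in_conflict, deceiver_benefit):
    # The two key conditions described above: hider and seeker must be in
    # conflict, and the hider must actually gain something by deceiving.
    return in_conflict and deceiver_benefit > 0

def choose_hiding_plan():
    # Pick a real hiding spot, then knock down markers on a different path
    # so the standing/fallen markers point the seeker the wrong way.
    hiding_spot = random.choice(PATHS)
    decoy_path = random.choice([p for p in PATHS if p != hiding_spot])
    return hiding_spot, decoy_path

if __name__ == "__main__":
    if deception_warranted(in_conflict=True, deceiver_benefit=1.0):
        spot, decoy = choose_hiding_plan()
        print(f"Hide at '{spot}', but knock down the markers on '{decoy}'.")
    else:
        print("No conflict or no benefit, so just hide and hope for the best.")
```

In the actual runs, of course, real robots had to sense the markers and navigate the paths; the sketch only captures the decision logic.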
(10:12):
Now what is that going to look like in five years? And so Arkin actually talked a little bit about the attention the experiment received in that regard, and how programming a robot to deceive is really similar to thinking like a con man. A significant amount of press, I guess, is the best way to put it; attention is another way to put it, which I
(10:33):
would contend is probably somewhat disproportionate to the results that
we obtained, but it is a controversial piece of work. Nonetheless, it was the first in-depth study of the phenomenon of robot deception and the ability of a robot to deceive other robots, or potentially human beings as well.
(10:57):
and we looked very closely at interdependence theory, a cognitive science model, as well as game theory, as the basis for understanding two things about robotic deception. The first is: when is it appropriate to deceive? Because you don't want a robot to be deceiving people all the time, or else
(11:18):
it would not have any value. And we were typically looking at, in this early work, what we'll call one-shot deception: trying to deceive someone just once, and that's good enough, kind of like a con man. The second is
how to be able to accomplish that. And I could talk a little bit more in depth about those, but I should say that much of the work was
(11:40):
based on some of Alan's earlier work on trust, because
as any good con man knows, a precursor to deception is
the establishment of trust. But we were interested in that
particular case of learning how a robot could trust a
human being, rather than the classic case for man machine systems,
(12:01):
how can a human learn to trust a robot? There
are many instances, such as 9/11, for example, when people act in ways that are improper, to say the least, and automation should be able to override them
when they are doing that. Machines should know when not
(12:24):
to trust the human being. And using the same kind
of models and the same kind of situational analysis, but
now looking at ways in which we could induce the observer, which we'll call the mark in this particular case, which is the language you would use in this context. That we could induce an outcome belief in the mark that
(12:46):
an action they would take would be more beneficial to them than if they took another action, when it actually ends up being more beneficial to the robot: in this particular case, that is the strategy that we used to address the how. Imagine a military situation, and
the robot has valuable information or in and of itself,
(13:08):
it's a valuable resource and does not want to be
captured or reverse engineered or whatever the case may be. Now, granted,
you could put self destruction capabilities in it, so it
would not be the case, but that destroys the asset
that you would want to preserve, and it may have
valuable information in its own right. Strangely, R2-D2 in the original Star Wars
(13:30):
comes to mind. You know, get this to Obi-Wan Kenobi.
That wasn't our motivation. So this is not just playing hide and seek in the sense that some other folks have done this with robots, where the robot finds a good place to hide and hopes for the best. But in
this case, the robot models the pursuer using a very
(13:52):
crude version of what's called theory of mind, tries to establish what the pursuer would do in these particular sets of circumstances, and, as a consequence of that, leaves a false trail. It's kind of like putting
mud down or tracks down and saying I'm over here,
but then backtracking and hiding in a different location. So
it is the robot's belief that if
(14:12):
the pursuer sees that trail, the pursuer will move in
that direction. And if that is indeed the case, the robot hides in some other location, the pursuer goes down that trail, and then the robot can escape after that.
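What Arkin describes here, modeling the pursuer, predicting which way it will go for each trail the hider could leave, and picking the trail whose predicted outcome is best for the hider, can be sketched as a tiny loop over an outcome table. Again, this is only our illustration of the idea: the payoff values and the one-line pursuer model are invented, whereas the real work rested on interdependence-theory outcome matrices and the trust modeling mentioned above.

```python
# Hypothetical illustration of the "crude theory of mind" step: the hider
# simulates the pursuer's response to each possible false trail and keeps
# the trail whose predicted outcome is best for the hider.

LOCATIONS = ["left", "center", "right"]

def predicted_pursuer_move(trail):
    # Crude pursuer model: the pursuer believes the trail and follows it.
    return trail

def hider_outcome(hiding_spot, pursuer_spot):
    # The hider does well (1) only if the pursuer ends up somewhere else.
    return 1 if pursuer_spot != hiding_spot else 0

def best_false_trail(hiding_spot):
    # Score every trail by the outcome it is predicted to produce.
    scores = {
        trail: hider_outcome(hiding_spot, predicted_pursuer_move(trail))
        for trail in LOCATIONS
    }
    return max(scores, key=scores.get)

print(best_false_trail("center"))  # prints "left": the first trail that steers the pursuer away from "center"
```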
So that's the notional aspect in that particular case. So
in a way, we're kind of talking about robot original
sin here. You know, up until this point, we've
(14:34):
just had robots doing what they're programmed to do. And if, say, an industrial robot, you know, welds a man's face off, it's not doing
that with any intent to weld someone's face off. But
in this scenario, we're creating a robot that can lie. And I find it really interesting that, you know, we're talking about a very simple form
(14:55):
of lying, a very simple form of deception, that will grow into more complicated forms of behavior. It's kind of like looking at, say, the first time a child lies, or even just how lies themselves have the tendency to start very small and then, if unresolved, to grow and grow and grow and become this greater, more complicated and complex thing. Yeah, it is,
(15:18):
but when you think about lies, you also know that there's another side to it: lies can be really helpful. Right, they can. Actually, we do it for many reasons. Sometimes we do it to spare people's feelings, or there might be a dangerous situation and you need to lie about something. I don't know. Paramedics arrive on a scene and, you know, the guy asks, doc, am
(15:39):
I gonna make it? The doc's not gonna say maybe; the doc's gonna say, hang in there, we can make it. Right, right. So, I mean, they're essential
to our own existence, and they're certainly essential to warfare.
And so that's why, when you look at something like these, what they're being called, the Decepticons, these deceiving robots, they could actually be very useful in search and
(15:59):
rescue missions. And of course there's the idea that
they could be more fully engaged in battle. But of
course the challenge is to be able to give robots
the ability to judge a situation, act accordingly and as
ethically as possible. So again, that's where
these sort of uncomfortable bits of information start to butt
(16:19):
up against each other because it's like, here, on the
one hand, we have this great technology that can do this.
On the other hand, you know, we're humans, and we know that we are programmed not necessarily to be the most altruistic beings. So how can we create something that's not necessarily in our image, but is
(16:40):
better than us? It's an interesting proposition. So there's
a good scenario of that sort of judgment center that we have, that we're trying to actually fashion, hopefully, in robots. And the scenario, at a robotic level, is what Arkin is talking about when he talks
(17:01):
about instilling in robots a more sophisticated judgment. He wants a way to assess the situation and act accordingly. This
is from a New York Times article; it's called "A Soldier, Taking Orders From Its Ethical Judgment Center." They talked to Dr Arkin about this computer model, and it's a robot pilot who flies past a cemetery and spots a tank at the entrance, and this is the target
(17:22):
in this scenario. But there's a group of civilians in
this computer model, and they're gathered at the cemetery, and
the robot pilot considers the data and decides to keep going.
But soon it spots another tank out in the field
all alone, and it decides to fire on it. So again,
here are these different models that scientists and researchers are
(17:43):
trying to put together in order to you know, assemble
this sort of judgment center for robots themselves. And there are obvious limitations to the technology right now, as it stands.
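As a toy illustration of that judgment-center idea, here is roughly how the cemetery scenario could be encoded as an explicit constraint check: the robot pilot may only engage a military target when no civilians are expected to be harmed. The field names, numbers, and rule are ours for illustration; Arkin's actual architecture is far more involved than a two-line predicate.

```python
# Toy reconstruction of the New York Times scenario described above.
# Rule encoded here (our assumption, not Arkin's real model):
# engage only if the target is military AND no civilians are nearby.

def may_engage(target):
    return target["is_military"] and target["civilians_nearby"] == 0

scenarios = [
    {"name": "tank at the cemetery entrance", "is_military": True, "civilians_nearby": 30},
    {"name": "tank alone in an open field", "is_military": True, "civilians_nearby": 0},
]

for s in scenarios:
    decision = "fire" if may_engage(s) else "hold fire and keep going"
    print(f"{s['name']}: {decision}")
```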
This presentation is brought to you by Intel sponsors of tomorrow.
(18:08):
But there's the possibility that machines could one day assess
situations with the advantage of not engaging in something like our own confirmation bias, you know, where we sort of perpetuate faulty information because we want it to fit with our worldview. And then also we get impassioned about things, right? We don't always think clearly,
(18:30):
particularly in cases of war. Right. Yeah. So we talked to Arkin, and we wanted to find out why people might be frightened of this proposition, where we might have a technology that could essentially help us one day to make better choices in war,
(18:51):
But also you know why we might be a little
bit freaked out by that. Well, I mean, the basic
prospect is making war easier to wage, like easier to decide to wage war. I mean, one of the big arguments against robotic warfare is that if there are fewer human lives on our side on the line, and fewer chances
(19:11):
of there being some sort of horrible, headline-grabbing scenario in terms of enemy casualties, then why not declare war over any little thing? Over an energy crisis, you need some oil, just declare war, because with the robots you're not gonna lose anybody. You're not gonna have grieving families voting against you in an upcoming election. And
(19:35):
if your robots, you know, are really good at not causing a whole bunch of civilian deaths, then it's more surgical and strategic. Ah, see, and this is what is so interesting that Arkin has to say about this, particularly when you talk about protocols of war. And this is what he has
(19:55):
to say about it. Well, we need to talk about it; that is the most important thing. I've often said that the research I have done in that particular space is one aspect, but the discussion that that research engenders
is as important as the research itself, at least to me.
And as such, I spent a lot of time talking
about it, not only to the media, but also at
(20:16):
military bases, at philosophy conferences, at the International Committee of the Red Cross, where they developed the Geneva Conventions, just last month, at a variety of different locations, ethics groups and the like as well, to answer that very question.
So it would be really presumptuous of me to say,
(20:39):
this is perfectly safe, don't worry about it, or, the sky is falling, the sky is falling, like some of my colleagues are doing as well. What I
do say is that if we are going to allow
these robots to make these life-or-death decisions on their own, to some degree,
(21:00):
I mean, they're not just going to go out and
start looking at someone and say, should I engage that individual? They will be tasked by a human being in
mission context and then make that particular decision. But if
we're going to do that, we need to understand what
it is that we're doing, what if any bounds are appropriate,
And from my perspective, if we're going to do that,
(21:20):
they must adhere to the existing laws of war and
the rules of engagement, as we as a society already do.
And so that's what my work is about. I have
often said that while I do believe that we can
actually do better than human beings in these situations, that
if it ended up that intelligent lethal robots were banned
(21:43):
from the battlefield, I'm not averse to that, although I still do believe that we can actually do better than human beings in making the right decisions regarding lethal application
of force under certain circumstances. And it's really important to
understand that I am not talking about replacing a soldier
with a robot. I'm talking about augmenting soldiers with robots
(22:05):
and using them in highly specialized missions such as building-clearing operations, counter-sniper operations, things of that particular sort, which is quite different than saying, here's a Terminator that's going to replace that particular soldier. Very narrow, well-defined situations where something called bounded morality applies.
(22:29):
And that's what makes it tractable, because human morality is extremely complex, involving multiple brain systems and other aspects of that: machine learning, deliberation, all sorts of things could be brought to bear in applying it in that context. But what's interesting
is the lethal application of force is relatively low hanging
(22:49):
fruit for a person dealing with computational morality for a
couple of reasons. The primary one is that philosophers have
been thinking about it for thousands of years, under what
circumstances is it appropriate to kill someone? And we have
as a civilization, a Western civilization, uh codified the laws
(23:12):
of war through the Hague and Geneva Conventions and others
as well too, and said when you kill someone, this
is how you do it, and this is what you
don't do. And we tell our soldiers we don't hand
them a rifle and say figure out what the morality
is in the battlefield and say go and uh conduct
(23:34):
your mission. We train them and we say this is
what you do and this is what you don't do.
I also want to preface this. I have the utmost
respect for our warfighters, our young men and women in the battlefield. It's crucially important that that be understood.
But human beings, not all, but many perform outside the
bounds of what is prescribed due to reasons such as frustration, anger, fear.
(23:59):
Scenario fulfillment, which is what you were talking about earlier, where you might believe something is going to occur and then discard new incoming evidence, a cognitive phenomenon. So how we cope with all these different kinds of things leads, in certain cases, to the commission of
war crimes. And if you look at the Surgeon General's
(24:19):
report which came out in two thousand six, if I remember correctly, studying the self-reported data from the soldiers in Operation Iraqi Freedom, I think that was the one. The numbers are staggering in terms of what was believed to be
(24:43):
unethical behavior, in terms of their performance, their inability to report on unethical actions of their colleagues, their inability to distinguish between insurgents and civilians. It was almost considered you're either
(25:05):
an enemy or you're not, and there was nothing in between in that case. So they didn't understand the notion of noncombatants; noncombatants were considered insurgents, and the data went on. That was mind-blowing, at least to me, in terms of the potential room for improvement. So he makes a really interesting case in
(25:26):
developing the technology, but stepping back from it and saying that we're at a point in history where we can
actually look at it and say, you know, ethically, should
we do this, should we consider the following things? Um?
And I think that's interesting, because in science we've always sort of rushed forward historically, right, because we've
(25:47):
been really excited about what we could make, what we
could create, what we could do, and then after the fact stepped back and went, whoa, you know, maybe this was misused, or this was a misused application. Right.
So I think it's really heartening that he is someone who's at the forefront of the ethics and is really trying to tell people, you know, let's
(26:10):
think about this, let's be smart about this. Yeah, and
especially in informing the people that are in the position to make decisions, who are many times a pretty big distance away from an actual understanding of what's possible and what's not possible in computing and robotics. So
you don't want somebody that's, you know, some politician that's
(26:31):
far removed from it saying, yeah, robot soldiers, that sounds great,
We're not gonna have so many soldiers die. Cool, let's
do that. But they need to understand on some level what's at stake ethically and what the ethical arguments are. Yeah. And what Arkin also pointed out, when he and I talked, is that the lawmakers most of
(26:52):
the time aren't even really aware of what's available technologically
out there. So a lot of this is trying to educate the public and policymakers and make them aware
so that they can understand what's at stake. So it's
very interesting and I have to say that again, You've
(27:12):
always taken the tack of being more positive when it comes to the singularity, and I've always kind of been, uh, they're going to take us over. You know, I have my days. But in talking to Arkin, I thought that he had a really interesting perspective on that as well, and he talks about robots in the context
(27:33):
of them filling an ecological niche for us. So let's
listen to this bit of information about the technological singularity,
and the lay definition is referred to as the point where machine intelligence exceeds human intelligence. Are we there already?
(27:55):
That's a loaded one, right? So, you know, it's a question of how you define these things. At some levels, we're already there. So is it when we have data? You know, I don't, I don't even understand exactly what the singularity is. And
I do believe the machines will get smarter and smarter.
(28:16):
I actually don't believe they necessarily should be compared to
human intelligence. I think robotic intelligence, just as dog intelligence and ape intelligence, is different than human intelligence.
I believe that robotic intelligence will be something, if allowed
to be instead of forced into the human paradigm, something
different than human intelligence, and they will fill what are referred
(28:37):
to as ecological niches, places within the world where they
can survive and prosper and grow. And why on Earth, or in space for that matter, or anywhere, would we want to create something that is completely and utterly competing with us in the same ecological niche? That can lead to extinction, in the case where you will be displaced. I don't think that's the wisest idea personally, but that doesn't mean we shouldn't do robotics.
(28:59):
We just need to do the right kind, whatever that happens to be. We can't all go off into our corner, raise our flag and say yes, yes, no, no. We have to talk with each other and understand each other, and by each other, I don't mean a bunch of roboticists. I mean policymakers, I mean roboticists, I mean
(29:21):
the military, I mean civilian populations, I mean theologians. I
mean stakeholders, is what they're referred to. All the stakeholders
have to get together and come to grips with what
it is that we're creating and start to think, first
of all, what needs to be regulated, and once that
is determined, how much regulation is appropriate under these sets
(29:44):
of circumstances. And this is an ongoing process. It's not going to be a set of commandments that says this is it, this is the way it will always be. It's a living document, if you will, and some of
my colleagues are doing that in other spaces as well.
But I've been very pleased with the traction that robot ethics is beginning to get. It felt a
(30:06):
little bit, at times in the early days, like a voice crying in the wilderness, but more and more people
are getting involved. There's a new special issue on robot
ethics coming out in the IEEE Robotics and Automation Magazine. So I see many of my colleagues starting
to say things about it and to take it seriously
(30:27):
and to speak their minds as to what they think
is right, as opposed to what I, as a scientist myself, did, whether I was a bench chemist or whether I was a roboticist: we're driven purely by curiosity-driven research, where we just want to understand something, whether it's how to make a better molecule, or whether it's
(30:50):
how to understand the principles of intelligence and embody them in the machine. Concurrently, we need to understand what the consequences of that are, and once
we do that, I think we can truly call ourselves
responsible scientists. Okay, so how do you feel about that?
(31:11):
Did it make you feel a little easier about the robots gaining power? Yeah, actually, it made me understand it more, in the context of, okay, well, you know,
it would be really, and we've said this before, really stupid for us to create something that destroys us. But, I mean, it's not impossible. Yeah, we could
(31:34):
do it. But knowing that the discussion is going on
and hearing Arkin talk about robots really enhancing our intelligence, or even making us rethink our intelligence, is for me sort of a paradigm shift, in that, you know,
humans don't become the other, um, but the other in
(31:56):
the sense that you know, this is perhaps some sort
of technology that can continue to help us evolve, not
necessarily be trampled upon and then become human servants to
our robot overlords. Yeah, I guess people fear it in a way. It reminds me of when, as a kid, I used to live in Tennessee,
(32:16):
and there was this trailer we would drive by. It started as a single trailer, and then they built onto the side of it, like a big room here and a big room there, and then there was more and more, until you couldn't see the trailer anymore. And then one day they pulled the trailer out of the middle of it and, I guess, filled it in and just made
it a house. So I guess one could fear what
(32:37):
if we were the trailer in this scenario, where we continually augment human life and the human experience and our entire culture here on Earth and beyond, and then we reach the point where the trailer is no longer a necessary part of the equation. I'm not saying I share that fear, but I like the idea.
I thought you were going to start off with some
(32:59):
sort of Boo Radley story there, as perhaps embodying the robot, and we just didn't know the robot well enough yet. You know, no, growing up, we did have the town robot, but its main duties were just cleaning the streets and apprehending street criminals. You know. I see, but I mean,
(33:19):
that's everybody's small-town experience, of course. Yeah, Paris, Texas: best robots ever. Well, this was really interesting,
to talk to Dr Arkin, and we actually have a
lot of other information that he shared with us
so very generously. He talks about robot ethics in the
(33:40):
context of the class that he teaches, which is Robots
in Society, and he talks with his students, his human students, about actual human relations with robots, which we've talked about before: sex bots. That's
not his entire class, um. There are many other ethical
situations that he talks about, but we will definitely do
(34:03):
some follow up with Dr Arkin and some other topics.
It was really fascinating to to learn more about what
he's doing here in Atlanta at Georgia Tech, and we'd
like to thank him for taking the time to speak
with us. UM. And just so you know a little
bit more about Dr Arkin: he served as a founding co-chair of the IEEE Robotics and Automation Society Technical Committee on Robotic Ethics from two thousand four to
(34:27):
two thousand nine, and is the co-chair of the Society's Human Rights and Ethics Committee, as well as the IEEE RAS liaison to the Society on Social
Implications of Technology. Well, hey, we have some listener mail,
so I think I'm gonna jump into that. All the bits that I have here are actually a follow-up to our Pope on a Cosmic Rope episode. Yes,
(34:50):
which we received a number of comments about, and which seemed to get a lot of people thinking.
And oddly enough, I don't think we received any hate
mail over that. I mean, not that we were actually fishing for it, but, yeah, after all, we did sort of hint that they might be wearing skinny jeans under their vestments. Yes, but surely someone would strike out at us over that. But you know,
(35:11):
I think, you know, obviously, people of faith have a sense of humor, and we actually heard from a few of them here, as well as one person who just has a reading recommendation.
Our listener Albert writes in and says: Enjoyed your podcast. I was reminded of a book by Mary Doria Russell, The Sparrow, which fits into your theme: space exploration funded by the Vatican, encountering aliens. That's a bare-bones description. It's dark and deals with complex theological issues. That sounds good. All right, I may have to add that
to the list of things to read. The next one
comes from Matthew, who is a scientist from New Jersey, and Matthew writes: To Robert and Julie, a great fan
(35:53):
of the podcast, but when I heard your Pope on a Cosmic Rope episode, I cringed a little at your idea of the Catholic Church. The Catholic Church is not a democracy, so what's popular or in style doesn't change the doctrines, I can assure you. No great principles get changed without
careful religious deliberations. And in fact, extraterrestrial life has never
been contradictory to Catholicism. Thanks for reading my input and
(36:16):
keep up the good work. All right, well, cool. What, Robert, you haven't said a thing about the hair shirt that I'm wearing. Oh well, go ahead and talk about the hair shirt. I had just been being polite about it. Well, I know, and I thank you for that. You're very sensitive. But it is probably the second or third
time I've been wearing this hair shirt. But now I'm
(36:37):
wearing it because I believe that it was me who called Clostridium difficile a virus when it is indeed a bacterium.
Did the bacterium itself write in? It did? I know, that's awful, when the actual bacterium writes in and says, I am no virus, and I would like
(36:59):
for you to go ahead and make the world know that that's the case. And so I apologize to C. diff.
All right. Well, hopefully that will appease both any virus and any bacteria that are listening to us. One more bit of listener mail, this one from listener Ophelia from Devon, United Kingdom, where I think
(37:22):
I've actually been. I think it's on one of the canals or something. Yeah. Anyway, if it's the town I'm thinking about, it's a really cool little town. Anyway, she writes in and says: I
really enjoyed your podcast on the relationship between science and
the Catholic Church. I've wondered since I was a little
kid what Catholics would do if aliens were discovered, so
I really got a kick out of the topic. I'm a
(37:43):
devout Catholic studying archaeology, so the conflict between the Church
and science is an important one to me, especially concerning evolution.
It's really encouraging to hear about people like Guy Consolmagno, who are passionate about both religion and science. And what he said about fundamentalists was excellent. And in fact, while
they have certainly been that way in the past, the
modern Catholic Church is not nearly so harsh about its
(38:04):
views as a lot of people seem to think. They have been okay with, and even encouraged, adult stem cell research for many years, and Pope John Paul II published an encyclical stating that much of evolutionary theory is not in direct contradiction with the views of the Church.
So I'm really glad to hear that the Church has
been making efforts to make their views better known. I
hope they continue with this because it would make me
(38:25):
feel much more confident in being a Catholic archaeologist. Keep
going with the great podcasts. I always enjoy them when
I'm taking a break from work or going to bed.
And I like how I get to learn something new
even when I'm relaxing. Cheers. Cheers to you, Ophelia, thank
you for writing. Yes. And if anybody else out there
has stuff you want to share with us, I encourage
you to visit Facebook and Twitter. We're blow the Mind
(38:46):
on both of those. Find us on Facebook, push like to join us, follow us, and see what we're up to. And on Twitter, I encourage you to share cool links that you may find, or cool facts that worm their way into your life, by just throwing it up on Twitter with the hashtag blew my mind.
One word: #blewmymind,
(39:10):
and then we'll see it and we'll get to share
it and uh, because I think that's a hashtag we
really can reclaim as listeners. That's right. We can find
some very cool mind blowing stuff to share with each other.
And if you would like to share some other mind
blowing stuff with us, you can always do so via
email at Blow the Mind at how stuff works dot com.
(39:33):
For more on this and thousands of other topics, visit
how stuff works dot com. To learn more about the podcast,
click on the podcast icon in the upper right corner of our homepage. The How Stuff Works iPhone app has arrived. Download it today on iTunes.