Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Hey, welcome to Stuff to Blow Your Mind. My name
is Robert Lamb and I'm Joe McCormick, and it's Saturday.
Time for an episode from the Vault. This is the
episode Punish the Machine, Part Two. It's all about
machine moral agency, legal agency and culpability, or what we
make of this emergent world when machines are acting
in an increasingly autonomous way. Let us delay no further.
(00:32):
Welcome to Stuff to Blow Your Mind, a production of
iHeartRadio. Hey, welcome to Stuff to Blow Your Mind.
My name is Robert Lamb and I'm Joe McCormick, and
we're back for part two of our talk about Punishing
the Robot. We are back here to, uh,
tell the robot he's been very bad now. In
(00:55):
the last episode, we talked about the idea of legal
agency and culpability for robots and other intelligent machines, and
for a quick refresher on some of the stuff
we went over. We talked about the idea that as
robots and AI become more sophisticated and thus in some
ways or in some cases more independent and unpredictable, and
(01:16):
as they integrate more and more into the wild of
human society, there are just inevitably going to be situations
where AI and robots do wrong and cause harm to people. Now,
of course, when a human does wrong and causes harm
to another human, we have a legal system through which
the victim can seek various kinds of remedies. And we
talked in the last episode about the idea of remedies
(01:39):
that the simple version of that is the remedy is
what do I get when I win in court? So
that can be things like monetary rewards. You know, I
ran into your car with my car, so I pay
you money, Or it can be punishment, or it can
be court orders like commanding or restricting the behavior of
the perpetrator. And so we discussed the idea that as
(01:59):
robots become more unpredictable and more like human agents, more
sort of independent, and more integrated into society, it might
make sense to have some kind of system of legal
remedies for when robots cause harm or commit crimes. But also,
as we talked about last time, this is much easier
(02:20):
said than done. It's going to present tons of new
problems because our legal system is in many ways not
equipped to deal with defendants and situations of this kind,
and this may cause us to ask questions about how
we already think about culpability and blame and punishment
in the legal system. And so in the last episode
(02:42):
we talked about one big legal paper that we're going
to continue to explore in this one. It's by Mark A.
Lemley and Bryan Casey in the University of Chicago Law
Review from twenty nineteen, called Remedies for Robots. So I'll
be referring back to that one a good bit throughout
this episode, too. Now, I think when we left off
last time, we had mainly been talking about sort of
(03:02):
trying to categorize the different sorts of harm that could
be done by robots or AI intelligent machines. And so
we talked about some things like unavoidable harms and deliberate
least-cost harms. These are sort of going to
be unavoidable parts of having something like autonomous vehicles, right,
if you have cars driving around on the road, Like,
even if they're really really good at minimizing harm, there's
(03:25):
still going to be some cases where there's just no
way harm could be avoided, because they're cars. Another would
be defect driven harms. That's pretty straightforward. That's just where
the machine malfunctions or breaks in some way. Another would
be misuse harms. That's where the machine is used in
a way that is harmful, and in those cases it
can be usually pretty clear who's at fault. It's the
(03:46):
person who misused the machine. But then there are a
couple of other categories that where things get really tricky,
which are unforeseen harms and systemic harms. And in the
case of unforeseen harms, one example we talked about in
the last episode was the drone that invented a wormhole. So,
you know, people were trying to train a drone to
(04:07):
move towards like an autonomous flying vehicle, to move towards
the center of a circular area. But the drone started
doing a thing where when it got sufficiently far away
from the center of the circle, it would just fly
out of the circle altogether. And so it seems kind
of weird at first, like, Okay, why would it be
doing that, But then what the researchers realized was that
(04:27):
whenever it did that, they would turn it off, and
then they would move it back into the circle to
start it over again. So from the machine learning point
of view of the drone itself, it had discovered, like,
a time-space warp, you know. So
it was doing this thing that made no sense
from a human perspective, but actually it was following
its programming exactly. Now, for an example, uh, sort of
(04:50):
a thought experiment of how this could become lethal. There's
an example that is stuck in my head. I can't
recall where I heard this, who presented this idea, and
I kind of had it in my head that it
came from Max Tegmark. But I did some searching
around in my notes and some searching around in, uh,
one of his books, and I couldn't find it. Perhaps
you can help refresh me. Maybe you remember
(05:12):
this joke, but the idea of the AI that
is deciding how much oxygen needs to be in
a train station at any given time. Oh, this sounds familiar.
I don't know the answer, but a lot of these
thought experiments tend to trace back to Nick Bostrom, so
I wouldn't be surprised if it's in there. But go ahead.
Right, okay, as I remember it, the way it works
is you have this AI that's in charge
(05:34):
of making sure there's enough oxygen in the train
station for when humans are there, and it seems to
have learned this fine, and when humans are there to
get on the train, everything goes well, everybody's breathing fine.
And then one day, uh, the train arrives
a little late or it leaves a little late,
or whatever it is, and there's not
(05:54):
enough oxygen and people die. And then it turns out
that the system was not basing its decision on
when people were there, but it was basing it on
a clock in the train station, like, whatever it was. Um.
And I may be mangling this horribly, but you know,
another way of illustrating the point that machine learning could
end up, you know, latching onto shortcuts or heuristic devices
(06:19):
that would just seem completely insane to a quote unquote
logical human mind, but might make sense within the framework
of the AI. Right, the shortcuts worked in the training cases,
and because it doesn't have common sense,
it doesn't understand why they wouldn't work in another case.
There was actually a real-world case that we talked
(06:39):
about in Part one where there was an attempt to
do some machine learning on what risk factors would
make a pneumonia case admitted to the hospital have a
higher or lower chance of survival. And one thing that
a machine learning algorithm determined was that asthma meant that
you were better off when you got pneumonia if
(07:02):
you had asthma. But that isn't true. Actually, the reason for that is that
if you have asthma, you're a higher-risk case for pneumonia,
so you got more intensive treatment in the hospital and
thus had better outcomes on the data set that the
algorithm was trained on. But the algorithm came up with
this completely backwards, uh, failure to understand the difference between
(07:23):
correlation and causation there. It made it look like asthma
was a superpower. Now, of course, if you take
that kind of shortsighted algorithm and you make it god,
then it will say, oh, I've just got to give
everybody asthma, so we will have a better chance
of surviving. The point is it can be hard to
imagine in advance all the cases like this that would
(07:43):
arise when you've got a world full of robots and
AIs running around in it that are trained
on machine learning. Basically, there are just any number of lesser,
sort of soft Skynets that you couldn't possibly predict,
you know, like the Skynet scenario being sort of
like: robots decide that they want to end all war,
and, um, you know, humans cause all war, therefore end all humans,
(08:05):
that sort of thing. But there's so many different, like,
lesser versions of it. It could also be destructive or
annoying or just get in the way of effectively using
AI for whatever we turn to it for. Uh, yeah, yeah.
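To make the pneumonia example above concrete, here is a minimal illustrative sketch, not taken from the episode or from any real study, of how purely correlational outcome data can make asthma look protective when the real driver is more aggressive treatment. Every number in it is invented.

```python
# Toy illustration of the asthma/pneumonia shortcut discussed above.
# All counts are invented. (has_asthma, got_intensive_care) -> outcomes.
toy_counts = {
    (False, False): {"n": 800, "deaths": 80},   # 10% mortality
    (False, True):  {"n": 200, "deaths": 8},    #  4% mortality
    (True,  False): {"n": 50,  "deaths": 9},    # 18% mortality
    (True,  True):  {"n": 250, "deaths": 12},   # ~5% mortality
}

def mortality(rows):
    return sum(r["deaths"] for r in rows) / sum(r["n"] for r in rows)

asthma    = [v for (has_asthma, _), v in toy_counts.items() if has_asthma]
no_asthma = [v for (has_asthma, _), v in toy_counts.items() if not has_asthma]

# Looking only at raw outcomes, asthma appears protective...
print(f"asthma mortality:    {mortality(asthma):.1%}")     # ~7.0%
print(f"no-asthma mortality: {mortality(no_asthma):.1%}")  # ~8.8%

# ...but within each level of treatment, asthma is worse. The apparent
# benefit exists only because asthma patients were routed to intensive care.
for care in (False, True):
    a, b = toy_counts[(True, care)], toy_counts[(False, care)]
    print(f"intensive_care={care}: asthma {a['deaths']/a['n']:.1%} "
          f"vs no asthma {b['deaths']/b['n']:.1%}")
```

A learner fed only the marginal rates in a data set like this would pick up the same backwards shortcut the hosts describe.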
To come to an example that is definitely used by
Nick Bostrom, the paper clip maximizer. You know, a robot
that is designed to make as many paper clips as
(08:27):
it can, and it just looks at your body
and says, hey, that's full of matter. Those could be
paper clips. Yeah, yeah, that would be
quite an apocalypse. Now, before we get back
into the main subject and talking about this Lemley and
Casey paper, with robots as offenders, there was one
thing that was interesting I came across. It was just
(08:48):
a brief footnote in their paper, but about the question
of what about if the robot is the plaintiff in
a case? Uh, they said it is possible to
imagine a robot as a plaintiff in a court case,
because of course robots, you know, can be injured by humans.
And they cite a bunch of examples of news stories
of humans just intentionally, like, torturing and being cruel to
(09:12):
robots like that. They cite one news article from twenty eighteen
about people just aggressively kicking food delivery robots, and then
they share another story. I actually remember this one from
the news, about a Silicon Valley security robot that
was just violently attacked by a drunk man in a
parking garage. I don't remember this one, but I can
(09:34):
imagine how it went down. Yeah, exactly. So they say
that in a case like this, this is actually pretty
straightforward as a property crime. I mean, unless we start
getting into a scenario where we're really seeing robots as
like human beings with their own like consciousness and interests
and all that, the attacks against robots are really probably
(09:55):
just property crimes against the owner of the robot. It's like,
you know, attacking somebody's computer or their car or something, potentially.
But we'll get into some stuff a little later that
I think shows some other directions that could go in
as well, you know, um, you know, especially consider the
awesome possibility of robots owning themselves. Yeah, and and that's
obviously a very different world. I mean, where you get
(10:18):
into the idea like does a robot actually have rights? Um,
which is sort of beyond the horizon of
what's explored in this paper itself. This paper is
more focused on like the kinds of robots that you
can practically imagine within the next few decades. And
in those cases, it seems like all of the really
thorny stuff would probably be in robots as offenders rather
(10:40):
than robots as victims of crimes. Right. But to your point,
like, the initial crimes against robots that we can
imagine would be stuff like drunk
people pushing them over, things like that. Yeah, or just,
like, a human driver in a human-operated vehicle hitting
an autonomous vehicle, you know. Right. Now, as I mentioned
in the last episode, this is a very big paper
(11:02):
and we're not gonna have time to get into every
avenue they go down in it. But I just wanted
to go through, uh, and mention some ideas that
stuck out to me as interesting that they discuss. And
one thing that really fascinated me about this was that
the idea of robots as possible agents in a
legal context brings to the fore a philosophical argument
(11:27):
that has existed in the realm of substantive law for
a while. Uh, and I'll try not to be too
dry about this, but I think it actually does get
to some really interesting philosophical territory. Uh, and this is
the distinction between what Lemley and Casey call the normative
versus economic interpretations of substantive law. Again, a complicated philosophical and
(11:47):
legal distinction. I'll try to do my best to sum
it up simply. So, the normative perspective on substantive law
says that the law is a prohibition against doing something bad.
So if something is against the law, that means
you shouldn't do it, and we would stop the offender
from doing the thing that's against the law if we could.
(12:09):
But since we usually can't stop them from doing it,
often because it already happened, the remedy that exists, you
know, that may be paying damages to the victim or something
like that, is an attempt to right
the wrong, in other words, to do the next best
thing to undoing the harm in the first place. So
(12:29):
basically getting into the idea of negative reinforcement. Somebody or
something did something bad, we couldn't stop them
from doing something bad, but we can try and
give them a stimulus that would make them not do it again,
be that economic or otherwise. Well, yes, but I think
what you're saying, uh, could actually apply to both of
(12:49):
these conditions I'm going to talk about. So I think
maybe the distinction comes in about whether there
is such a thing as an inherent prohibition. So the
thing that's operative in the normative view is that the
thing that's against the law is a thing that should
not be done, and thus the remedy is an attempt
(13:10):
to try to fix the fact that it was done
in the first place. The economic view is the
alternative here, and the way they sum that up is
there is no such thing as forbidden conduct. Rather, a
substantive law tells you what the cost of the conduct is.
Does that distinction make any more sense? Yes? Yes, So
(13:31):
it's basically the first version is doing crimes is bad. Um,
the second one is doing crimes is expensive. So
the first is crimes should not be done, and the
second one is crimes can be done if you can
afford it. Yes, exactly. So in Lemley and Casey's words, quote,
damages on this view, the economic view are simply a
(13:52):
cost of doing business. One we want defendants to internalize,
but not necessarily to avoid the conduct altogether. And now
you might look at this and think, oh, okay, well,
so the economic view is just like a psychopathic way
of looking at things. And in a certain sense you
could look at that as like, if you're calculating what's
the economic cost of murder, then yeah, okay, that does
(14:15):
just like that's evil, that's like psychopathic. But they're actually
all kinds of cases we're thinking about. The economic view
makes more sense of the way we actually behave And
they use the example of stopping at a traffic light.
So to read from Limely and Casey here quote Under
the normative view, a red light stands as a prohibition
(14:35):
against traveling through an intersection, with the remedy being a
ticket or a fine against those who are caught breaking
the prohibition. We would stop you from running the red
light if we could. But because policing every intersection in
the country would be impossible, we instead punish those we
do catch in hopes of deterring others. So in this
first case, you running a red light is bad, you
(14:58):
should not do it, and the cost of doing it,
you know, the punishment you face for doing it is
an attempt to right that wrong. But then they say,
under the economic view, however, an absolute prohibition against running
red lights was never the intention. Rather, the red light
merely signals a consequence for those who do, in fact
choose to travel through the intersection. As in the first instance,
(15:20):
the remedy available is a fine or a ticket. But
under this view, the choice of whether or not to
violate the law depends on the willingness of the lawbreaker
to accept the penalty. So in the case of a
red light, well, that might make more sense if
you're, like, sitting at a red light and you look
around and there are no other cars anywhere near you,
and you've got a clear view of the entire
(15:42):
intersection and the red light's not changing, and you think
maybe it's broken, and you're just like, okay, I'm
just going to drive through. Well, if you reach
that point where you're like, I think it's broken,
I feel like that's a slightly different case. But if
you're just like, nobody's watching, I'm gonna do it, um,
and the light isn't taking an absurd amount
of time or longer than you're accustomed to. Yeah,
(16:05):
I don't know how the belief that the light
is broken would factor into that, but yeah.
I mean, one thing that I think is clear is
that in many cases there are people, especially
I think companies and corporations that operate on the economic view,
and it is something that I think people generally look
(16:25):
at and say, okay, that's kind of grimy. Like,
a company that says, okay, there is a
fine for not obeying this environmental regulation, and we're going
to make more money by violating the regulation than we
would pay in the fine anyway, so we're just gonna
pay it. Yeah, you hear about that with factories, for instance,
where there'll be some situation where
the fine is not significant enough to really be a
(16:48):
deterrent. For them, breaking that
mandate and being called on it occasionally is just the cost
of doing business. Right. Uh. So, there's a funny way
to describe this point of view that the authors bring
up here, that they call the bad man theory. And
this comes from Justice Oliver Wendell Holmes, who was a
US Supreme Court justice. Uh, and he's talking about the
(17:10):
economic view of substantive law. Uh, and Holmes wrote, quote,
if you want to know the law and nothing else,
you must look at it as a bad man who
cares only for the material consequences which such knowledge enables
him to predict, not as a good one who finds
his reasons for conduct, whether inside the law or outside
of it, in the vaguer sanctions of conscience. Uh. And,
(17:33):
so they write, the measure of the substantive law, in
other words, is not to be mixed up with moral qualms,
but is simply coextensive with its remedy. No more and
no less. It just is what the remedy is. It's
the cost of doing business. Now, of course, there are
plenty of legal scholars and philosophers who would dispute how
Holmes thinks of this. But the interesting question is how
does this apply to robots. If you're programming a robot
(17:57):
to behave well, you actually don't get to just sort
of like jump over this distinction the way humans do
when they think about their own moral conduct. Right, Like,
when you're trying to think what's a
good way to be a good person, you're not sitting
around thinking about, well, am I going by the normative
view of morality or the economic view of morality? You know, um,
(18:19):
you just sort of act a certain way, whatever
seems to you the right way to do. But if
you're trying to program a robot to behave well, you
have to make a choice whether to embrace the normative
view or the economic view. Does a robot view a
red light, say, as a firm prohibition against forward movement,
like it's just a bad thing, you shouldn't
drive through a red light? Or does it just
(18:41):
view it as a substantial discouragement against forward motion that
has a certain cost, and if you were to overcome
that cost, then you drive on through. Yeah, this is
a great, great question because I feel like with humans
we're probably mixing and matching all the time, yes,
you know, perhaps with the same law-breaking behavior. You know,
we may do both on one thing, and we
(19:03):
do one on another thing, and then the other one
on still a third thing. But with the robot,
it seems like you're gonna deal more or less with
kind of an absolute direction: either, um,
the law is to be obeyed,
or the law is to be taken into your cost analysis. Well, yeah,
so they talked about how the normative view is actually
(19:23):
very much like, uh, Isaac Asimov's Laws of Robotics, inviolable rules,
and the Asimov stories do a very good job
of demonstrating why inviolable rules are really difficult to implement
in the real world. Like, you know, Asimov
explored this brilliantly. And along these lines, the authors here
(19:45):
argue that there are major reasons to think it
will just not make any practical sense to program robots
with a normative view of legal remedies. That probably, when
people make AIs and robots that have to take
these kinds of things into account, they're almost definitely
going to program them according to the economic view. Right.
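As a rough illustration of the distinction being discussed, here is a minimal sketch, with entirely made-up dollar figures, contrasting a normative controller (an absolute prohibition) with an economic one (a cost to be weighed); it is not drawn from Lemley and Casey's paper.

```python
# Hypothetical figures for illustration only.
FINE_FOR_RUNNING_LIGHT = 500.0  # assumed cost of the legal remedy

def normative_controller(light_is_red: bool) -> bool:
    """Normative view: never proceed through a red light, period."""
    return not light_is_red

def economic_controller(light_is_red: bool, expected_harm_of_waiting: float) -> bool:
    """Economic view: proceed if the expected harm of waiting (say, being
    hit by an oncoming truck) outweighs the expected cost of the violation."""
    if not light_is_red:
        return True
    return expected_harm_of_waiting > FINE_FOR_RUNNING_LIGHT

# The two agree in the ordinary case (red light, nothing bearing down on you)...
assert normative_controller(True) is False
assert economic_controller(True, expected_harm_of_waiting=0.0) is False

# ...but only the economic controller will ever break the rule.
assert economic_controller(True, expected_harm_of_waiting=100_000.0) is True
```

The quote that follows is exactly about why the first controller, taken literally, can conflict with more compelling goals.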
They say that, quote, encoding the rule don't run a
(20:07):
red light as an absolute prohibition, for example, might sometimes
conflict with the more compelling goal of not letting your
driver die by being hit by an oncoming truck. So
the robots are probably going to have to be
economically motivated to an extent like this, um. But then
they talk about how you know, this gets very complicated
(20:29):
because robots will calculate the risks of reward and punishment
with different biases than humans, or maybe even without the
biases that humans have that the legal system relies on
in order to keep us obedient. Humans are highly motivated,
usually by certain types of punishments that like, you know,
humans like really don't want to spend a month in jail,
(20:52):
you know, most of the time. And you can't just
rely on a robot to be incredibly motivated by something
like this, first of all, because like it wouldn't even
make sense to send the robot itself to jail. So
you need some kind of organized system for making a
robot understand the cost of bad behavior in a systematized
way that made sense to the robot as
(21:14):
a demotivating incentive. Yeah, Like shame comes to mind as
another aspect of all this. Like, how do you
shame a robot? You have to program a robot to
feel shame at being, uh, you know, made to give
a public apology or something. Yeah. Uh, so they
argue that it really only makes sense for robots
to look at legal remedies in an economic way, and
(21:34):
then they write, quote, it thus appears that Justice Holmes's
archetypical bad man will finally be brought to corporeal form,
though ironically not as a man at all. And if
Justice Holmes's metaphorical subject is truly morally impoverished and analytically deficient,
as some accused, it will have significant ramifications for robots.
But yeah, thinking about these incentives, it gets more and
(21:57):
more difficult, like, the more you try to imagine the
particulars. Humans have their own motivations, you know, pre-existing motivations
that can just be assumed. In most cases, humans don't
want to pay out money, humans don't want to go
to jail. How would these costs be instantiated as motivating
for robots? You would have to
basically force some humans, I guess meaning the programmers or
(22:21):
creators of the robots, to instill those costs as motivating
on the robot. But that's not always going to be
easy to do because, Okay, imagine a robot does violate
one of these norms and it causes harm to somebody,
and as a result, the court says, okay, uh,
you know, someone has been harmed by this negligent or
failed autonomous vehicle, and now there must be a payout.
(22:46):
Who actually pays? Where is the pain of the punishment located?
A bunch of complications to this problem arise, like it
gets way more complicated than just the programmer or the owner,
especially because in this age of artificial intelligence, there is
a kind of distributed responsibility across
many parties. The authors write, quote robots are composed of
(23:08):
many complex components, learning from their interactions with thousands, millions,
or even billions of data points. And they are often designed, operated,
leased, or owned by different companies. Which party is to
internalize these costs. The one that designed the robot or
AI in the first place, and that might even be
multiple companies, The one that collected and curated the data
set used to train its algorithm in unpredictable ways, the
(23:32):
users who bought the robot and deployed it in the field.
And then it gets even more complicated than that, because
the authors start going into tons of ways that we
can predict now that it's unlikely that these costs will
be internalized in commercially produced robots in ways that are
socially optimal. Because if you're asking a corporation
(23:55):
that makes robots to take into account some type of
economic disincentive against the robot behaving badly, other economic
incentives are going to be competing with those disincentives, right.
Uh, so the authors write: For instance, if I make
it clear that my car will kill its driver rather
than run over a pedestrian, if the issue arises, people
(24:17):
might not buy my car. The economic costs of lost
sales may swamp the costs of liability from a contrary choice.
In the other direction, car companies could run into PR
problems if their cars run over kids. Put simply, it
is aggregate profits, not just profits related to legal sanctions,
that will drive robot decision making. And then there are
(24:38):
still a million other things to consider. I mean, one
thing they talk about is the idea that even within
corporations that produce robots and AI, uh, the parts of
those corporations don't all understand what the other parts are doing.
You know, they say, workers within these corporations are likely
to be siloed in ways that interfere with effective cost internalization. Uh.
(25:00):
Quote machine learning is a specialized programming skill, and programmers
aren't economists. Uh. And then they talked about why in
many cases, it's going to be really difficult to answer
the question of why an AI did what it did.
So can you even determine that the AI, say,
was acting in a way that wasn't reasonable? Like, how
could you ever fundamentally examine the state of mind of
(25:23):
the AI well enough to prove that the decision
it made wasn't the most reasonable one from its own perspective.
But then another thing they raised, I think is a
really interesting point, And this gets into one of the
things we talked about in the last episode where thinking
about culpability, uh, for AI and robots
(25:44):
is going to force us to re-examine our ideas
of culpability and blame when it comes to human
decision making. Because they talk about this idea that, quote,
the sheer rationality of robot decision making may itself provoke
the ire of humans. Now, how would that be? It
seems like we would say, okay, well, you know we
(26:05):
want robots to be as rational as possible. We don't
want them to be irrational. But it is often only
by carelessly putting costs and risks out of mind that
we are able to go about our lives. For example,
people drive cars, and no matter how safe of a
driver you are, driving a car comes with the unavoidable
(26:28):
risk that you will harm someone. Uh, they write, quote:
Any economist will tell you that the optimal number of
deaths from many socially beneficial activities is more than zero.
Were it otherwise, our cars would never go more than
five miles per hour. Indeed, we would rarely leave our
homes at all. Even today, we deal with those costs
(26:48):
in remedies law unevenly. The effective statistical price of a
human life in court decisions is all over the map.
The calculation is generally done ad hoc and after the
fact, in a way that allows us to avoid explicitly discussing politically
fraught concepts that can lead to accusations of trading lives
for cash. And it may work acceptably for humans because
(27:10):
we have instinctive reactions against injuring others that make deterrence
less important. But in many instances robots will need to
quantify the value we put on a life if they
are to modify their behavior at all. Accordingly, the companies
that make robots will have to figure out how much
they value human life, and they will have to write
(27:32):
it down in the algorithm for all to see, at
least after extensive discovery, uh, referring to like you know
what the courts will find out by looking into how
these algorithms are created. And I think this is a
fantastic point, Like, in order for a robot to make
ethical decisions about living in the real world, it's going
to have to do things like put a price tag
(27:54):
on you know, what kind of risk to human life
is acceptable in order for it to do anything. And
we don't. That seems monstrous to us. It does not
seem reasonable for any percent chance of harming a human,
of killing somebody, to be an acceptable risk of your
day-to-day activities. And yet it actually already is that,
(28:16):
you know, it always is that way whenever we do anything,
but we just like have to put it out of mind,
like we can't think about it. Yeah, I mean, like,
what's the alternative, right? Programming monstrous self-delusion into
the self-driving car, where it says, I will not
get into a wreck on my next route
because that cannot happen to me. It has
(28:39):
never happened to me before, it will never happen. You know,
these sorts of, you know, ridiculous, not even statements that
we make in our mind, just kind of like
assumptions, like that's the kind of thing that happens
to other drivers, and it's not going to happen to me,
even though we've all seen the, you know, the
statistics before. Yeah, exactly. I mean, I think this is
a really good point. And uh so in this case,
(29:01):
the robot wouldn't even necessarily be doing something evil. In fact,
you could argue there could be cases where the robot
is behaving in a way that is far safer, far
less risky than the average human doing the same thing.
But the very fact of its clearly coded rationality reveals
something that is already true about human societies, which we
(29:23):
can't really bear to look at or think about. So
another thing that the authors explore that I think
is really interesting is the idea of directly punishing the robot itself, and
(29:46):
how that possibility might make us rethink the idea of
punishing humans. Uh, Now, of course, it's just the case
that whether or not it actually serves as any kind
of deterrent, whether or not it actually rationally reduces harm.
It may just be unavoidable that humans sometimes feel they
want to inflict direct harm on a perpetrator as punishment
(30:10):
for the crime they're alleged to have committed, and that
may well translate to robots themselves. I mean, you can
imagine we we've all I think, raged against an inanimate
object before. We wanted to kick a printer or something
like that. Uh. And we talked in the last episode
about some of that psychological research about how people mindlessly
apply social rules to robots. The authors here write: Certainly
(30:32):
people punch or smash inanimate objects all the time. Juries
might similarly want to punish a robot not to create
optimal cost internalization, but because it makes the jury and
the victim feel better. The authors write later towards their
conclusion about the idea of directly punishing robots that quote,
this seems socially wasteful. Punishing robots not to make them
(30:54):
behave better, but just to punish them is kind of
like kicking a puppy that can't understand why it's being hurt.
The same might be true of punishing people to make
us feel better, but with robots, the punishment is stripped
of any pretense that it is sending a message to
make the robot understand the wrongness of its actions. Now,
I'm pretty sympathetic personally to the point of view that
(31:16):
a lot of punishment that happens in the world is
not actually, uh, a rational way to
reduce harm, but just kind of, uh, you know,
if it serves any purpose, it is the purpose of
the emotional satisfaction of people who feel they've been wronged,
or people who want to demonstrate moral opprobrium on
(31:37):
the offender. But I understand that, you know, in some cases,
you could imagine that punishing somebody serves as an object
example that deters behavior in the future, and to the
extent that that is ever the case. If it is
the case, could punishing a robot serve that role, could
actually inflicting, say, like punching a robot or somehow
(31:58):
otherwise punishing a robot serve as a kind of object
example that deters behavior in humans, say the humans
who will program the robots of the future? It's a strange
kind of symbolism to imagine. Yeah, I mean, when you
start thinking about, you know, the ways to punish robots,
I mean you think of some of the more ridiculous
(32:19):
examples that have been brought up in sci fi and
sci fi comedy, like robot hells and so forth. Um,
or just the idea of even destroying or deleting
a robot that is faulty or misbehaving. Um. But maybe,
you know, maybe it ends up being something more like,
I think of game systems right where say, if you
(32:41):
accumulate too many of, uh, say, madness points, your, I
don't know, your movement is cut in half, that sort
of thing, and then that has a ramification on how
you play the game and to what extent you can
play the game well. And therefore, like playing into the
economic model, you know, it could have sort
of our officially constructed but very real consequences on how
(33:03):
well a system could behave, you know, But then again
you can imagine ways that an AI might find ways
to circumvent that and say, well, if I play
the game a certain way where I don't need to
move at normal speed, I can just move at half
speed but have the benefit of getting to break these rules.
Then who knows, you know. It just, I feel like,
(33:24):
it seems an inescapable maze. Yeah, well, that's
interesting because that is edging toward another thing that the
authors actually talked about here, which is the idea of
a robot death penalty. Uh. And this is funny because
again, personally, you know, I see a lot
of flaws in applying a death penalty to humans.
(33:45):
I think that is a very flawed, uh judicial remedy.
But I can understand a death penalty for robots, Like
you know, robots don't have the same rights as human defendants.
If a robot is malfunctioning or behaving in a way
that is so dangerous as to suggest it is likely
in the future to continue to endanger human lives to
(34:08):
an unacceptable extent, then yeah, it seems to me reasonable
that you should just turn off that robot permanently. Okay,
But then again, it raises the question, well,
what about what led us to this malfunction? Is
there something in the system itself that needs to be
remedied in order to prevent that from happening again? That's
a very good point, and the authors bring up exactly
(34:30):
this concern. Yeah, so they say, well, then again,
a robot might not have human rights, where you would
be concerned about the death penalty for the robot's own good,
but you might be concerned about what you are failing
to be able to learn from allowing the robot to
continue to operate, learning that could help you refine
AI in the future. Maybe not letting it continue to
(34:51):
operate in the wild, but I don't know, keeping it
operative in some sense because like, whatever it's doing is
something we need to understand better. Sort of a robot prison
instead of a robot death penalty. Um. And of course, the
human comparison to be made is equally
frustrating because you end up with scenarios where you'll have, um,
(35:12):
a society that's very pro death penalty. But then when
it comes to doing the same sort of backwork and saying, well,
what led to this case, what were some of the
systematic problems, uh, cultural problems, societal problems. I don't know,
you know, whatever it is that led to
this case that needed to be remedied with death, should
we correct those problems too? And in some cases the
(35:33):
answer seems to be, oh, no, we're not doing that.
We'll just do the death penalty as is necessary,
even though it doesn't actually prevent us from reaching this
point over and over again. I mean, I feel
like it's one of the most common features of the
tough on crime mentality that it is resistant to the
idea of understanding what led a person to commit
a crime. I mean, you've heard I'm trying to think
(35:56):
of an example of somebody, but I mean you've heard
the person say, oh, uh, you know, you're just gonna
give some sob story about what happened when he was
a child or something like that. Yeah, yeah,
I've definitely encountered that counterargument before. Yeah,
but yeah, I mean, I think we're probably on the
same page that it really probably is very useful to
try to understand what are the common underlying conditions that
(36:17):
you can detect when people do something bad. And of course,
the same thing would be true of robots, right, And
it seems like with robots there would potentially be room
for true rehabilitation with these things. If not,
I mean, certainly you could look at it in a
software-hardware scenario where, like, okay, something's wrong
with the software? Well, delete that, put in some
(36:39):
healthy software, um, but keep the hardware. Uh, you know,
in a way, that's rehabilitation right there. It's a
sort of rehabilitation that's not possible with humans. We can't
wipe somebody's mental state and replace it with a new,
factory clean mental state. You know, we can't go back
and edit someone's memories and traumas and what have you. Uh,
(37:01):
But with machines, it seems like we would have more
ability to do something of that nature. Yeah. Though, this
is another thing that comes up, and I mean, of
course it probably would be useful to try to learn
from failed AI in order to better perfect AI and robots.
But on the other hand, in basically the idea
of trying to rehabilitate or reprogram robots that do wrong, uh,
(37:24):
the authors point out that there are probably going to be
a lot of difficulties in enforcing, say, the equivalent of
court orders against robots. So one thing that is a
common remedy in legal cases against humans is you
might get a restraining order, you know, you need to
stay fifty feet away from somebody, right, uh, fifty feet
away from the plaintiff or something like that, or you
(37:45):
need to not operate a vehicle or you know something.
There will be cases where it's probably difficult to enforce
that same kind of thing on a robot, especially on
robots whose behavior is determined by a complex interaction of
rules that are not explicitly coded by humans. So, you know,
most AI these days is not going to be a
(38:06):
series of if then statements written by humans, but it's
going to be determined by machine learning, which can to
some extent be sort of reverse engineered and somewhat
understood by humans. But the more complex it is, the
harder it is to do that. And so there might
be a lot of cases where you know, you say, okay,
this robot needs to do X, it needs to,
(38:26):
you know, stay fifty feet away from the plaintiff or something,
but the person you know, whoever is in charge of
the robot might say, I don't know how to make
it do that. Or the possibly more tragic or funnier
example would be, it discovers the equivalent of the
drone with the wormhole that we talked about in the
last episode, right, where the robot is told to
(38:46):
keep fifty feet of distance between you and the plaintiff.
The robot obeys the rule by lifting the plaintiff and throwing
them fifty feet away. So, to read another section from
Lemley and Casey here, they write: To issue an effective
injunction that causes a robot to do what we
want it to do and nothing else requires both extreme
foresight and extreme precision in drafting it. If injunctions are
(39:09):
to work at all, courts will have to spend a
lot more time thinking about exactly what they want to
happen and all the possible circumstances that could arise. If
past experience is any indication, courts are unlikely to do it
very well. That's not a knock on courts. Rather, the
problem is twofold words are notoriously bad at conveying our
intended meaning, and people are notoriously bad at predicting the future. Coders,
(39:35):
for their part, aren't known for their deep understanding of
the law, and so we should expect errors in translation
even if the injunction is flawlessly written. And if we
fall into any of these traps, the consequences of drafting
the injunction incompletely may be quite severe. So I'm imagining you
issue a court order to a robot to do something
or not do something. You're kind of in the situation
(39:57):
of, like, the monkey's paw wish, you know, right? Like, oh,
you shouldn't have phrased it that way, Now you're in
for real trouble. Or what's the better example of that?
Isn't there some movie we were just talking about recently
with, like, the bad genie who, when you phrase a
wish wrong, you know, works it out on you
in a terrible way. Um, I don't know. We were
(40:17):
talking about Leprechaun or Wishmaster or something. Does Leprechaun
grant wishes? I don't remember Leprechaun granting any wishes. What's
he do then? I think the only one I've seen
is Leprechaun in Space, so I'm a little foggy on
the logic. I don't think he grants wishes. He
just, like, rides around on skateboards and punishes people.
He just attacks people who try to get his gold
(40:39):
and stuff. Well, leprechauns in general are known for this
sort of thing, though. Are they? Yeah. Okay, if you're
not precise enough, they'll work something in there to cheat
you out of your prize. I'm trying to think, so, like,
don't come within fifty feet of the plaintiff. And so
the robot, I don't know, like, it builds a big
yardstick made out of human feet or something. Yeah, yeah,
(41:00):
it has fifty-foot-long arms, again, to lift them
into the air. Something to that effect. Or say it's, uh,
for some reason, schools are just too dangerous,
and this self driving car is not permitted to go
within, um, you know, so many blocks
of an active school, and so it calls in a
(41:22):
bomb threat on that school every day in order to
get the kids out so that it can actually go
by. I don't know, something to that effect, maybe. Well,
that reminds me of a funny observation that uh, not
that this is lawful activity, but uh, a funny observation
that the authors make towards their conclusion. They bring up
there are cases of crashes with autonomous vehicles where
(41:48):
the autonomous vehicle didn't crash into someone; the autonomous vehicle,
you could argue, caused a crash, but somebody else ran
into the autonomous vehicle because the autonomous vehicle did something
that is legal and presumably safe, but unexpected. And examples
here would be driving the speed limit in certain areas
(42:11):
or coming to a complete stop at an intersection. And
this is another way that the authors are bringing up
the idea that, uh, examining robot logic is really going
to have to cause us to re-examine the way
humans interact with the law, because there are cases where
people cause problems that lead to harm by obeying the rules.
Oh yeah, Like I think of this all the time,
(42:33):
and imagine most people do, when driving for any
long distance. Because you have the speed limit as it's posted,
you have the speed that the majority of people are driving, um,
you know, you have that sort of ten-miles-over zone.
Then you have the people who are driving exceedingly fast.
Then you have that minimum speed limit that virtually nobody
(42:54):
is driving, forty miles per hour on the interstate, but
it's posted, uh, and therefore it would be legal to drive
forty-one miles per hour if you were a robot and
weren't in a particular hurry, and perhaps that's, you know,
maximum efficiency for your travel. Uh yeah, there's so many,
so many things like that to think about. And I
think we're probably not even very good at guessing,
(43:16):
until we encounter them through robots, how many other situations
there are like this in the world, where you
can technically be within the bounds of the law, like
you're doing what by the book you're supposed to be doing,
but actually it's really dangerous to be doing it that way.
So how are you supposed to interrogate a robot's state
of mind when it comes to stuff like that?
(43:37):
But so anyway, this leads to the authors talking about
the difficulties in robot state-of-mind evaluation, and
they say, quote, robots don't seem to be good targets
for rules based on moral blame or state of mind,
but they are good at data. So we might consider
a legal standard that bases liability on how safe the
robot is compared to others of its type. This would
(44:00):
be a sort of robotic reasonableness test that could take
the form of a carrot, such as a safe harbor
for self driving cars that are significantly safer than average
or significantly safer than human drivers, or we could use
a stick holding robots liable if they lagged behind their peers,
or even shutting down the worst ten percent of robots
(44:21):
in a category every year. So I'm not sure if
I agree with this, but this was an interesting idea
to me. So, instead of, like, trying to interrogate
the underlying logic of a type of autonomous car, robot
or whatever, because it's so difficult to try to understand
the underlying logic, what if you just compare its outcomes
(44:44):
to other machines of the same genre as it, or
to humans.
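Purely as a hypothetical sketch of the "compare the robot to its peers" idea in the quote above, and not anything spelled out in the paper beyond that, here is what a crude version of the carrot-and-stick test could look like; the incident rates, the human baseline, and the thresholds are all invented.

```python
import statistics

# Hypothetical incidents per million miles for a category of self-driving models.
incident_rates = {
    "model_a": 0.8, "model_b": 1.1, "model_c": 0.6, "model_d": 2.9,
    "model_e": 1.0, "model_f": 1.3, "model_g": 0.9, "model_h": 4.2,
    "model_i": 1.2, "model_j": 1.0,
}
HUMAN_BASELINE = 1.5  # assumed human-driver rate under the same conditions

# 90th-percentile cutoff within the category: the "worst ten percent" stick.
cutoff = statistics.quantiles(incident_rates.values(), n=10)[-1]

for model, rate in sorted(incident_rates.items(), key=lambda kv: kv[1]):
    if rate <= HUMAN_BASELINE * 0.5:
        verdict = "safe harbor: significantly safer than human drivers"
    elif rate > cutoff:
        verdict = "worst decile: liability exposure or possible shutdown"
    else:
        verdict = "no action"
    print(f"{model}: {rate:.1f} incidents/M miles -> {verdict}")
```

The appeal of something like this is that it needs only outcomes, not an account of the robot's reasoning.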
I mean, you can imagine this working better
in the case of something like autonomous cars than you can
in, you know, other cases where the robot is essentially
introducing a sort of a new genre of agent into
the world. But autonomous cars are in many ways going
to be roughly equivalent in outcomes to human drivers in
(45:07):
regular cars, and so would it make more sense
to try to understand the reasoning behind each autonomous vehicle's
decision making when it gets into an accident, or uh
to compare its behavior to I don't know, some kind
of aggregate or standard of human driving or other autonomous
vehicles? Or maybe we just tell it, look,
(45:28):
most humans drive like selfish bastards, so just go do it,
do what you gotta do. Well, I mean, I would
say that there is a downside risk to not taking
this stuff seriously enough, which is, uh, something
like that, I mean, something like essentially letting robots go
hog wild, because they could well be designed that way. And not
(45:50):
saying that anybody would be, you know, maliciously going wahaha
and rubbing their hands together while they make it the case.
But you know, you could imagine a situation where there
are more and more robots entering the world where, uh,
the corporate responsibility for them is so diffuse that nobody
can locate the one person who's responsible for the robot's behavior,
(46:12):
and thus nobody ever really makes the robot, you know,
behave morally at all. So robots just sort of like
become a new class of superhuman psychopaths that are immune
from all consequences. In fact, I would say that is
a robot apocalypse scenario I've never seen before done in
a movie. It's always like when the robots are terrible
(46:32):
to us, it's always, like, organized. It's always like that,
you know, they decide humans are a cancer
or something, and so they're going to wipe us out.
What if instead the problem is just that robots,
sort of by corporate negligence and distributed responsibility for their
behavior among humans, robots just end up being ultimately
(46:53):
amoral, and we're flooded with these amoral critters running
around all over the place that are pretty smart and
really powerful. I guess you do see some
shades of this in, um, some futuristic sci-fi genres.
I'm particularly thinking of some models of the cyberpunk genre,
where the corporation model has been embraced
(47:16):
as the way of understanding the future of AIs, um,
but yeah, I think for the most
part this scenario hasn't been explored as much.
We tend to want to go for
the evil overlord or the out-of-control killbot rather
than this. Right, yeah, you want an identifiable villain,
just like they do in the courts. But yeah, sometimes, uh,
(47:39):
sometimes corporations or manufacturers can be kind of slippery in
saying, like, whose thing is this, then? So I was
thinking about all this, about the idea of you know,
particularly self-driving cars being, like, the main example
we ruminate on with this sort of thing. Um, I
(48:02):
decided to look to the book Life three point
oh by Max Tegmark, um, which is
a really great book that came out a couple of
years back. And Max Tegmark is a Swedish American physicist, cosmologist,
and machine learning researcher. If you've been listening to the
show for a while, you might remember that I briefly
interviewed him, had like a mini interview with him at
(48:22):
the World Science Festival, UH several years back. Yeah, and
I know I've referenced his book Our Mathematical Universe in
previous episodes. Yeah, so these are both
books intended for a wide audience, very readable. Uh,
Life three point oh does a fabulous job
of walking the reader through these various scenarios of, uh,
(48:43):
in many cases, of AI ascendancy and how it
could work. And he gets into this topic of,
um, legality and, um, AI and self-driving
cars. Now, he does not make any allusions to
Johnny Cab in Total Recall, but I'm going to make
allusions to Johnny Cab in Total Recall as a way
of sort of putting a manic face on self-driving cars.
(49:06):
How did I get here? The door opened, you got in.
Sound reasoning. So, um, imagine that you're in a
self-driving Johnny Cab and it wrecks. So the basic
question you might ask is, are you responsible for this
wreck as the occupant? That seems ridiculous to think, right?
you weren't driving it, You just told it where to go. Um,
(49:29):
are the owners of the Johnny Cab responsible? Now this
seems more reasonable, right, but again it runs into a
lot of the problems we were just raising there. Yeah,
but Tegmark points out that there is this other
option, that American legal scholar David Vladeck has
pointed out that perhaps it is the Johnny Cab itself
(49:49):
that should be responsible. Now, we've already been discussing
a lot of this, like, what does that mean? What
does it mean if a Johnny Cab,
a self-driving vehicle, is responsible for a wreck?
You know, how do we
even begin to make sense of that statement? Do you
take the damages out of the Johnny Cab's bank account? Well,
(50:10):
that's the thing. We kind of end up getting
into that scenario because if the Johnny Cab has responsibilities
then, Tegmark writes, why not let it own
car insurance? Not only would this allow for it to
financially handle accidents, it would also potentially serve as a
design incentive and a purchasing incentive. So the
(50:31):
idea here is the better self-driving cars with better
records will qualify for lower premiums, and the less reliable
models will have to pay higher premiums. So if the
Johnny Cab runs into enough stuff and explodes enough, then
that brand of Johnny Cab simply won't be able to
take to the streets anymore.
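Here is a minimal sketch of how the self-insuring cab's premium feedback could work, assuming invented figures rather than anything from Tegmark's book: the yearly premium scales with the model's expected claims, and a design that can't cover its premium out of fares is effectively priced off the road.

```python
# All figures are hypothetical.
BASE_PREMIUM = 2_000.0        # assumed flat yearly premium
LOADING_PER_CLAIM = 15_000.0  # assumed surcharge per expected claim per year

def yearly_premium(expected_claims_per_year: float) -> float:
    return BASE_PREMIUM + LOADING_PER_CLAIM * expected_claims_per_year

def can_stay_on_the_road(expected_claims_per_year: float,
                         yearly_fare_margin: float) -> bool:
    """A cab keeps operating only if its fares cover its own insurance."""
    return yearly_fare_margin >= yearly_premium(expected_claims_per_year)

print(can_stay_on_the_road(0.05, yearly_fare_margin=30_000.0))  # True: reliable model
print(can_stay_on_the_road(2.50, yearly_fare_margin=30_000.0))  # False: runs into too much stuff
```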
(50:51):
Oh, this is interesting, okay. So this is very much the
economic model that we were discussing earlier. So when Schwarzenegger
hops in and Johnny Cab says, where would you like
to go? And he says, drive, just drive anywhere, and
he says, I don't know where that is. And so
his incentive to not just, like, blindly plow forward
is how much would it cost if I ran into
something when I did that? Yeah, exactly. But Tegmark
(51:16):
points out that the implications of letting a self driving
car own car insurance ultimately go beyond this situation,
because how does the Johnny Cab pay for its insurance
policy that, again, it hypothetically owns in this scenario? Should
we let it own money in order to do this?
Does it have its own bank account like you alluded
to earlier, especially if it's operating as an independent contractor
(51:39):
of sorts, perhaps paying back certain percentages or fees to
a greater cab company, Like maybe that's how it would work.
And if it can own money, well, can it also
own property? Like perhaps at the very least it rents
garage space, Uh, but maybe it owns garage space for itself, um,
you know, or a maintenance facility or the tools that
(52:00):
work on it. Does it own those as well? Does
it own spare parts? Does it own the bottles of
water that go inside of itself for its customers? Does
it own the complimentary wet towels for your head that
it keeps on hand? Yeah? Um, I mean, if nothing else,
it seems like if it owned things, like, the more
things it owns, the more things that you could potentially
(52:23):
um, invoke a penalty upon through the legal system.
And if they can own money and property, and, again,
potentially themselves, then Tegmark takes it a step further.
He writes, if this is the case, quote, there's nothing
legally stopping smart computers from making money on the stock
market and using it to buy online services. Once the
(52:46):
computer starts paying humans to work for it, it can
accomplish anything that humans can do. I see. So you
might say that even if you're skeptical of an AI's
ability to have, say, the emotional and cultural intelligence to,
uh, write a popular screenplay or, you know, create
a popular movie, it just doesn't get humans well enough
(53:06):
to do that, it could at least, if it had
its own economic agency, pay humans to do that. Right,
right. And, um, elsewhere in the book, Tegmark gets
into a lot of this, especially this entertainment idea, presenting a
scenario by which machines like this could game the entertainment
industry in order to ascend to, you know, extreme
financial power. A lot of it is just, like, sort
(53:28):
of playing the algorithms right, you know, like doing corporation
stuff and then hiring humans as necessary to bring
that to fruition. You know, I mean, would this be
all that different from any of our like I don't know,
Disney or comic book studios or whatever exists today. Yeah, yeah, exactly. Um, so,
you know, we already know the sort of prowess that
(53:49):
computers have when it comes to the stock market. Tegmark,
you know, points out that, you know,
we have examples of this in the world already where
we're using AI, and he writes that it could lead
to a situation where most of the economy is owned
and controlled by machines. And this, he warns, is not
that crazy, considering that we already live in a world
where non human entities called corporations exert tremendous power and
(54:12):
hold tremendous wealth. I think there is a large
amount of overlap between the concept of corporation and the
concept of an AI. Yeah. And, uh, then there
are steps beyond this as well, if machines can
do all of these things. So if
a machine can own property, if it
can potentially own itself, if it can
(54:33):
buy things, if it can invest in the stock market,
if it can accumulate financial power, if it can do
all these things, then should it also get the right
to vote as well? You know, it's potentially paying taxes;
does it get to vote in addition to that?
And if not, why? And what becomes the caveat
that determines the right to vote in this scenario? Now,
(54:55):
if I understand you right, I think you're saying that
Tegmark is exploring these possibilities as stuff that
he thinks might not be as implausible as people would suspect,
rather than as stuff where he's like, here's my ideal world.
Right, right. He's saying, like, look, you know, this is
already where we are. We know what AI can do,
and we can easily extrapolate where it might go. These
(55:16):
are the scenarios we should potentially be prepared
for, in much the same way that nobody really,
at an intuitive level, believes that a corporation is a
person like a human being is a person. Uh,
you know, but it's at least done well enough at convincing
the courts that it is a person. So would you
not be able to expect the same coming out of
machines that were sophisticated enough? Right, and convincing the court
(55:40):
is, uh, I'm glad you brought that up, because that's another area that Tegmark gets into. So what does it mean when judges have to potentially judge AIs? Um, would these be specialized judges with technical knowledge and understanding of the complex systems involved? Uh, you know, or is
it going to be a human judge judging a machine
(56:01):
as if it were a human? Um, you know, both of these are possibilities. But then here's another idea that Tegmark discusses at length: what if we use robo judges?
And this ultimately goes beyond the idea of using robo judges to judge the robots, but potentially using them to judge humans as well. Um, because while human judges
(56:22):
have limited ability to understand the technical knowledge of cases, robo judges, Tegmark points out, would in theory have unlimited learning and memory capacity. They could also be copied, so there would be no staffing shortages. You need two judges today? We'll just copy and paste. Right, that's a simplification, but, you know, essentially, once you have one, you
(56:42):
can have many. Uh, this way justice could be cheaper, and maybe a little more just, by removing humans from the equation, or at least so the machines would argue. Right, but then the other side of the thing is, we've already discussed how human-created AI is susceptible to bias. So we could potentially, you know, we could
(57:03):
create a robot judge, but if we're not careful, it could be bugged, it could be hacked, it could be otherwise compromised, or it just might have these various biases that it is using when it's judging humans or machines. And then you'd have to have public trust in such a system as well. So we run into a lot of the same problems we run into when we're talking about trusting the machine to
(57:26):
drive us across town. Yeah. Like, so take a robot judge. Now I'm certainly not granting this, because I don't necessarily believe it's the case, but even if it were true that a robot judge would be better at judging cases than a human, and like more fair and more just, you could run into problems with public trust in those kinds of judges because, for example, they
(57:48):
make the calculations explicit, right, the same way we talked
about like placing a certain value on a human life. Uh,
it's something that we all sort of do, but we
don't like to think about it or acknowledge we do it.
We just do it at an intuitive level that's sort of hidden in the dark recesses of the mind, and don't think about it. A machine would have
(58:08):
to, like, put a number on that, and for
public transparency reasons, that number would probably need to be
publicly accessible. Yeah, another area, and this is another topic in robotics that, you know, we could easily discuss at extreme length, but there's robotic surgery to consider. You know, while we continue to make great strides in robotic surgery, and in some cases the robotic
(58:31):
surgery route is indisputably the safest route, there remains a lot of discussion regarding, um, you know, how robot surgery is progressing, where it's headed, and how malpractice potentially factors into everything. Um, now, despite the advances that we've seen, we're not quite at the medical droid level,
(58:52):
you know, like the autonomous, uh, surgical bot. But as reported by Denise Grady in the New York Times just last year, AI coupled with new imaging techniques is already showing promise as a means of diagnosing tumors as accurately as human physicians, but at far greater speed. Um, so
it's interesting to, uh, think about these advancements, but
(59:14):
at the same time realize that, particularly with AI in medicine, we're talking about AI-assisted medicine or AI-assisted surgery. So the human-AI relationship is in these cases not one of replacement but of cooperation, at least
for the near term. Yeah, yeah, yeah, I see that,
(59:38):
because I mean, there are many reasons for that, but
one of the reasons that strikes me
is it comes back to a perhaps sometimes irrational desire
to inflict punishment on a person who has done wrong,
even if it doesn't like help the person who has
been harmed in the first place. Um, there are certain just, like, intuitions we have, and I think one
(59:58):
of them is we feel more confident if there is somebody in the loop who would suffer from the consequences of failure. You know, like, it doesn't help just to hear, like, oh no, I assure you the surgical robot has, you know, strong incentives within its programming not to fail, not to botch the surgery and
(01:00:20):
take out your, you know, remove one of your vital organs. Yeah,
Like, on some level, we want that person to know their career is on the line, or their reputation is on the line. You know, I think most people would feel better going into surgery with the knowledge that if the surgeon were to do something bad to you, there would be consequences. It's not just enough to know that the
(01:00:42):
surgeon is going to try really hard not to do something bad to you. You also want the, like, second-order guarantee that if the surgeon were to screw up and take out one of your
vital organs, something bad would happen to them and they
would suffer. But with a robot, they wouldn't suffer. It's
just like, oh, whoops. I wonder if we end up
(01:01:04):
reaching a point in this discussion where, you know, we're talking about robots hiring people. Do we end up in a position where AIs hire humans not so much because they need human, um, expertise or human skills or human senses, but the ability to feel pain? Yeah, and to be culpable. Like, they need somebody that will,
(01:01:26):
like, essentially AIs hiring humans to be scapegoats in the system, or in their particular job. Uh, so they're like, yeah, we need a human in the loop, not because I need a human in the loop, I can do this by myself, but if something goes wrong, you know, there's always a certain chance that something will happen, I need a
(01:01:47):
human there that will bear the blame. Every robot essentially needs a human co-pilot, even in cases where robots far outperform the humans, just because the human co-pilot has to be there to accept responsibility for failure. Oh yeah.
In the first episode, we talked about the idea of
there being like a punchable um plate on a robot
(01:02:09):
um, for when we feel like we
need to punish it. It's like that, except instead of
a specialized plate on the robot itself, it's just a
person that the robot hired. A whipping boy. Oh, this is so horrible and so perversely plausible. I can kind of see it. It's like, in my lifetime, I can see it. Well, thanks for the nightmares, Rob. Well, no,
(01:02:34):
I think we've had plenty of potential nightmares discussed here.
But I mean we shouldn't just focus on the nightmares.
I mean, again, to be clear, um, you know, so the idea of self-driving cars, the idea of robot-assisted surgery, I mean, we're ultimately talking about the aim of creating safer practices, of saving human lives. So, uh, you know, it's not all nightmares and, um,
(01:02:56):
robot hellscapes. But we have to be realistic about the very complex, um, scenarios and tasks that we're building things around and unleashing machine intelligence upon. Yeah, I mean, I made this clear in, uh, the previous episode.
I'm not like down on things like autonomous vehicles. I mean, ultimately,
I think autonomous vehicles are probably a good thing. Um,
(01:03:20):
but I do think it's really important for people to
start paying attention to these, uh, these unbelievably complicated philosophical, moral,
and legal questions that will inevitably arise as more independent
and intelligent agents infiltrate our world. All right, well,
on that note, we're gonna go ahead and close it out.
(01:03:41):
But if you would like to listen to other episodes
of Stuff to Blow Your Mind, you know where to
find them. You can find our core episodes on Tuesdays
and Thursdays in the Stuff to Blow Your Mind podcast feed.
On Mondays we tend to do listener mail, on Wednesdays we tend to bust out a short Artifact episode, and on Fridays we do a little Weird House Cinema, where we don't really talk about science so much as we
(01:04:03):
just talk about one weird movie or another, and then
we have a little rerun on the weekend. Huge thanks
as always to our excellent audio producer Seth Nicholas Johnson.
If you would like to get in touch with us
with feedback on this episode or any other, to suggest
a topic for the future, or just to say hello,
you can email us at contact at stufftoblowyourmind dot com. Stuff to Blow Your Mind is a
(01:04:32):
production of iHeartRadio. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or
wherever you're listening to your favorite shows.