Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Welcome to Stuff to Blow Your Mind, the production of
My Heart Radio. Hey, welcome to Stuff to Blow Your Mind.
My name is Robert Lamb and I'm Joe McCormick, and
we're back for part two of our talk about punishing
the robot. We're we're back here to uh to tell
(00:23):
the robot he's been very bad. Now. In the last episode,
we talked about the idea of legal agency and culpability
for robots and other intelligent machines, and for a quick
refresher on on some of the stuff we went over,
we talked about the idea that as robots and AI
become more sophisticated and thus in some ways or in
(00:43):
some cases more independent and unpredictable, and as they integrate
more and more into the wild of human society, they
are just inevitably going to be situations where AI and
robots do wrong and cause harm to people. Now, of course,
when a human is wrong and causes harm to another human,
we have a legal system through which the victim can
(01:05):
seek various kinds of remedies, And we talked in the
last episode about the idea of remedies that the simple
version of that is the remedy is what do I
get when I win in court? So that can be
things like monetary rewards. You know, I ran into your
car with my car, so I pay you money, or
it can be punishment, or it can be court orders
like commanding or restricting the behavior of the perpetrator. And
(01:28):
so we discussed the idea that as robots become more
unpredictable and more like human agents, more sort of independent,
and more integrated into society, it might make sense to
have some kind of system of legal remedies for when
robots cause harm or commit crimes. But also, as we
talked about last time, this is much easier said than done.
(01:51):
It's going to present tons of new problems because our
legal system is in many ways not equipped to deal
with with defendants and situations of this kind. And this
may cause us to ask questions about how we already
think about culpability and blame and and punishment in the
legal system. And so in the last episode we talked
(02:13):
about one big legal paper that we're going to continue
to explore in this one. It's by Mark A. Limley
and Brian Casey in the University of Chicago Law Review
from twenty nineteen called remedies for robots, So I'll be
referring back to that one a good bit throughout this
episode too. Now. I think when we left off last time,
we had mainly been talking about sort of trying to
(02:33):
categorize the different sorts of harm that could be done
by robots or AI intelligent machines, and so we talked
about some things like unavoidable harms and deliberate least cost harms.
These are sort of going to be unavoidable parts of
having something like autonomous vehicles, right if you have cars
driving around on the road, Like, even if they're really
really good at minimizing harm, there's still going to be
(02:56):
some cases where there's just no way harm could be
avoided because their cars. Another would be defect driven harms.
That's pretty straightforward, that's just where the machine malfunctions or
breaks in some way. Another would be misuse harms. That's
where the machine is used in a way that is harmful.
And in those cases it can be usually pretty clear
who's at fault. It's the person who misused the machine.
(03:19):
But then there are a couple of other categories that
where things get really tricky, which are unforeseen harms and
systemic harms and in the case of unforeseen harms. One
example we talked about in the last episode was the
drone that invented a wormhole. So, you know, people were
trying to train a drone to move towards like an
(03:39):
autonomous flying vehicle, to move towards the center of a
circular area. But the drone started doing a thing where
when it got sufficiently far away from the center of
the circle, it would just fly out of the circle altogether.
And so it seems kind of weird at first, like, Okay,
why would it be doing that? But then what the
researchers realized was that whenever it did that, they would
(03:59):
turn it off off and then they would move it
back into the circle to start it over again. So,
from the machine learning point of view of the drone itself,
it had discovered like a like a time space warp
that you know. So so it was doing this thing
that made no sense from a human perspective, but actually
was it was following its programming exactly. Now for an example, uh,
(04:20):
sort of a thought experiment of how this could become lethal.
There's an example that is stuck in my head. I
can't recall where I heard this who presented this idea?
And I kind of had it in my head that
it came from Max tag Mark. But I did some
searching around in my notes and some searching around and
one of his books, and I couldn't find it. Perhaps
you can help refresh me. You remember maybe you remember this, Joe,
(04:43):
But the idea of the the AI that is running
deciding how much oxygen needs to be in a train station,
it didn't given time. Oh this sounds familiar. I don't
know the answer, but a lot of these thought experiments
tend to trace back to Nick Bostrom, so I wouldn't
be surprised if in there. But but go ahead, right, okay,
as I remember it. The way it works is you have, um,
you have this AI that's in charge of of making
(05:05):
sure there's enough oxygen in the train station for when
humans are there, and it seems to have learned this fine.
And when humans are there to get on the train,
everything goes goes well, everybody's breathing fine. And then one day, uh,
the train arrives a little late or it leaves a
little right late. You get which whatever it is, and
there's not enough oxygen and people die and then it
(05:28):
turns out that the train was not basing its decision
on when people were there, but it was basing it
on a clock in the train station, like what train
it was? Um, And I may be mangling this horribly,
but you know another way of illustrating the point that
machine learning could end up, you know, latching onto shortcuts
or heuristic devices. Uh. That would just seem completely insane
(05:52):
to a quote unquote logical human mind, but might make
sense within the framework of the AI. Right. They worked
in training cases, and it doesn't understand because it doesn't
have common sense, it doesn't understand why they wouldn't work
in another case. There was actually a real world case
that we talked about in Part one where there was
(06:13):
an attempt to do some machine learning on what risk
factors would would make a pneumonia case admitted to the
hospital have a higher or lower chance of survival. And
one thing that a machine learning algorithm determined was that
asthma meant that you were you were better off when
you got pneumonia if you had asthma. But actually the
(06:34):
reason for that that that isn't true. Actually, the reason
for that is that if you have asthma, you're a
higher risk case for pneumonia, so you've got more intensive
treatment in the hospital and thus had better outcomes on
the data set that the algorithm was trained on. But
the algorithm came up with this completely backwards uh failure
to understand the difference between correlation and causation. There it
(06:56):
made it look like asthma was a superpower. Now, of course,
if you take that kind of shortsighted algorithm and you
make it god, then it will say, oh, I've just
got to give everybody asthma, so so we'll have a
better chance of surviving. The point is it can be
hard to imagine in advance all the cases like this
that would arise when you've got a world full of
(07:17):
robots and and AI is running around in it that
are trained on machine learning. Basically, they're just a number
of less sort of soft sky nets that you couldn't
possibly predict, you know, like the skynet scenario being sort
of like a robots decide that one to end all war,
and um, you know, humans causal war, therefore end all humans,
that sort of thing. But there's so many different like
(07:39):
lesser versions of it. It It could also be destructive or
annoying or just get in the way of effectively using
AI for whatever we turn to it for. Uh yeah, yeah.
To come to an example that is definitely used by
Nick Bostrom, the paper clip maximizer. You know, a robot
that is designed to make as many paper clips as
it can, and it just it looks at your body
(08:00):
and says, hey, that's full of matter. Those could be
paper clips. Yeah, yeah, yeah, that would be that would
be quite an apocalypse. Now, before we get back into
the main subject and talking about this limly in Casey
paper with with robots as offenders, there was one thing
that was interesting I came across. It was just a
brief footnote in their paper, but about the question of
(08:22):
what about if the robot is the plaintiff in a case? Uh.
They said, it's it is possible to imagine a robot
as a plaintiff in a court case because of course robots,
you know, can be injured by humans. And they cited
a bunch of examples of news stories of humans just
intentionally like torturing and being cruel two robots like that.
(08:43):
They cited one news article fromen about people just aggressively
kicking food delivery robots, and then they share another story actually,
remember this one from the news from about a Silicon
Valley security robot that was just violently attacked by a
drunk man in a parking garage. I don't remember this one,
(09:04):
but I can imagine how it went down. Yeah, exactly.
So they say that in a case like this, this
is actually pretty straightforward as a property crime. I mean,
unless we start getting into a scenario where we're really
seeing robots as like human beings with their own like
consciousness and interests and all that, the attacks against robots
(09:25):
are really probably just property crimes against the owner of
the robot. It's like, you know, attacking somebody's computer or
their car or something potentially. But that we'll get into
some stuff a little later that I think shows some
other directions that could go in as well, you know, um,
you know, especially considered the awesome possibility of robots owning themselves. Yeah,
(09:45):
and and that's obviously a very different world, I mean,
where you get into the idea like does a robot
actually have rights? Um, which is not. That's sort of
beyond the horizon of what's explored in in this paper itself.
This paper is more focused on like the kinds of
ro bots that you can practically imagine within the next
few decades. And and in those cases, it seems like
(10:06):
all of the really thorny stuff would probably be in
robots as offenders rather than robots as victims of crimes. Right.
But to your point, like the the initial crimes against
robot that we can in the robots that we can
imagine would be stuff like drunk people pushing them over
things like that. Yeah, or just like a human driver
and a human powered vehicle hitting an autonomous vehicle. You know,
(10:29):
right now, As I mentioned in the last episode, this
is a very big paper and we're not gonna have
time to get into every avenue they go down in it.
But I just wanted to go through uh and and
mention some ideas that stuck out to me is interesting
that they discuss. And one thing that really fascinated me
about this was that the idea of robots as possible
(10:50):
agents in in a legal context UH brings to the
for a philosophical argument that has existed in the realm
of substance of law for a while. Uh. And I'll
try not to be too dry about this, but I
think it actually does get to some really interesting philosophical territory. Uh.
And this is the distinction between what Limbly and Casey
call the normative versus economic interpretations of substantive law. Again,
(11:16):
complicated philosophical and legal distinction. I'll try to do my
best to sum it up simply. So. The normative perspective
on substantive law says that the law is a prohibition
against doing something bad. So when something is against the law,
that means you shouldn't do it, And we would stop
the offender from doing the thing that's against the law
(11:38):
if we could. But since we usually can't stop them
from doing it, often because it already happened, the remedy
that exists, You know that maybe paying damages to the
victim or something like that is a is a an
attempt to right the wrong, in other words, to do
the next best thing to undoing the harm in the
first place. So, basically, getting into the idea of negative reinforcement,
(12:02):
somebody or something did something bad, we can't we couldn't
stop them from doing something bad, but we can try
and and give them stimulus that would make them not
do it again. Be that economic or otherwise. Well, yes,
but I think what you're saying uh could actually apply
to both of these conditions I'm going to talk about.
So I think maybe the distinction comes in about whether
(12:25):
the whether there is such a thing as an inherent prohibition.
So the thing that's uh operative in the normative view
is that the thing that's against the law is a
thing that should not be done, and thus the remedy
is an attempt to try to fix the fact that
it was done in the first place. The the economic
view is the alternative here, and the way they sum
(12:48):
that up is there is no such thing as forbidden conduct. Rather,
a substantive law tells you what the cost of the
conduct is. Does that distinction make any more sense? Yes, yes,
So it's basically the first version is doing crimes is bad. Um.
The second one is doing crimes is expensive. So it's
(13:09):
the first is crimes should not be done, and the
second one is crimes can be done if you can
afford it. Yes, exactly so. In Limley, in cases words,
quote damages on this view the economic view are simply
a cost of doing business. One we want defendants to internalize,
but not necessarily to avoid the conduct altogether. And now
(13:31):
you might look at this and think, oh, okay, well,
so the economic view is just like a psychopathic way
of looking at things, And in a certain sense, you
could look at it that as like if you're calculating
what's the economic cost of murder? Then yeah, okay, that
does just like that's evil, that's like psychopathic. But they're
actually all kinds of cases we're thinking about. The economic
(13:52):
view makes more sense of the way we actually behave
And they use the example of stopping at a traffic light. Yes,
so read from Limely in casey here quote. Under the
normative view, a red light stands as a prohibition against
traveling through an intersection, with the remedy being a ticket
or a fine against those who are caught breaking the prohibition.
(14:13):
We would stop you from running the red light if
we could, but because policing every intersection in the country
would be impossible, we instead punish those we do catch
in hopes of deterring others. So in this first case,
you running a red light is bad, you should not
do it, and the cost of doing it, you know,
the punishment you face for doing it is an attempt
(14:34):
to right that wrong. But then they say, under the
economic view, however, and absolute prohibition against running red lights
was never the intention. Rather, the red light merely signals
a consequence for those who do, in fact choose to
travel through the intersection. As in the first instance, the
remedy available is a fine or a ticket. But under
this view, the choice of whether or not to violate
(14:56):
the law depends on the willingness of the lawbreaker to
accept the penalty. So in the case of a red light,
well that that might make more sense if you're like
sitting at a red light and you look around and
there are no other cars anywhere near you, and you've
you've got a clear view of the entire intersection and
the red lights not changing, and you think maybe it's broken,
(15:17):
and you're just like, Okay, I'm I'm just going to
drive through. Well, if if you reach that point where
you're like, I think it's broken, that I feel like
that's a slightly different case. But if you're just like,
nobody's watching, I'm gonna do it, um and and and
the light isn't taking an absurd amount of time or
longer than you're you're accustomed to, Yeah, I don't know
how the the belief that the light is broken would
(15:39):
factor into that, but yeah it is. I mean one
thing that I think is clear that in it's that
in many cases there are people, especially I think companies
and corporations that operate on the economic view. And it
is something that I think people generally look at and say, Okay,
that that's kind of grimy. Like it like a company
that says, Okay, there is a fine for not obeying
(16:02):
this environmental regulation, and we're going to make more money
by violating the regulation than we would pay in the fine. Anybody,
So we're just gonna pay it. Yeah, you hear about
that with factories, for instance, where where there'll, yeah, there'll
be some situation where that the fine is not significant
enough to really be at a turrent. It's just a
fit for them breaking that that mandate being called on it. Occasionally.
(16:24):
It's just the cost of doing business. Right. Uh. So
there's a funny way to describe this point of view
that the authors bring up here that they call it
the bad man theory. And this comes from Justice Oliver
Wendell Holmes, who is a U. S. Supreme Court justice. Uh.
And he's talking about the economic view of substantive law. Uh.
And Holmes wrote, quote, if you want to know the
(16:46):
law and nothing else, you must look at it as
a bad man who cares only for the material consequences
which such knowledge enables him to predict, not as a
good one who finds his reasons for conduct, whether inside
the law or outside of it, in the vaguer sanctions
of conscience. Uh. And so they write, the measure of
the substantive law, in other words, is not to be
(17:07):
mixed up with moral qualms, but is simply coextensive with
its remedy. No more and no less. It just is
what the remedy is. It's the cost of doing business. Now.
Of course, there are plenty of legal scholars and philosophers
who would dispute how Holmes thinks of this. But the
interesting question is how does this apply to robots? If
you're programming a robot to behave well, you actually don't
(17:31):
get to just sort of like jump over this distinction
the way humans do when they think about their own
moral conduct. Right, Like, you're not sitting when you're trying
to think what's a good way to be a good person.
You're not sitting around thinking about well, am I going
by the normative view of morality or the economic view
of morality? You know? Um, you just sort of act
(17:51):
a certain way whatever it seems to you the right
way to do. But if you're trying to program a
robot to behave well, you have to make a choice
whether to embrace the normative view or the economic view.
Does a robot view a red light, say, as a
firm prohibition against forward movement, it's just a bad thing
and you shouldn't do it to drive through a red light?
Or does it just view it as a substantial discouragement
(18:14):
against forward motion that has a certain cost, and if
you were to overcome that cost, then you drive on through. Yeah,
this is a great, great question because I feel like
with humans, we're probably mixing a match and all the time,
you know, perhaps you even done the same law breaking behavior.
You know, we may do both on on one thing,
and we do one on another thing, and then the
(18:34):
other one on as still a third thing. But with
the robot, it seems like you're gonna deal more or
less with kind of an absolute direction. Either they're going
to be um either the law is is to be
obeyed or the law is to be taken into your
cost analysis. Well, yeah, so they talk about how the
normative view is actually very much like uh, Isaac Asimov's
(18:57):
Laws of Robotics, inviolable rules, and the the Asimov story
is doing a very good job of demonstrating why inviolable
rules are really difficult to implement in the real world.
Like you know that they Asimov explored this brilliantly, and
along these lines, the authors here argued that there there
are major reasons to think it will just not make
(19:19):
any practical sense to program robots with a normative view
of legal remedies. That probably when people make AI s
and robots that that have to take these kind of
things into account, they're almost definitely going to program them
according to the the economic view, right. Uh, They say
that quote encoding the rule don't run a red light
as an absolute prohibition, for example, might sometimes conflict with
(19:42):
the more compelling goal of not letting your driver die
by being hit by an oncoming truck. So the robots
are probably going to have to be economically economically motivated
to an extent like this. Um. But then they talk
about how you know, this gets very complicated because robots
will calculate the risks of reward and punishment with different
(20:05):
biases than humans, or maybe even without the biases that
humans have that the legal system relies on in order
to keep us obedient. Humans are highly motivated, usually by
certain types of punishments that like, you know, humans like
really don't want to spend a month in jail, you know,
most of the time. And you can't just rely on
(20:26):
a robot to be incredibly motivated by something like this,
first of all, because like it wouldn't even make sense
to send the robot itself to jail. So you need
some kind of organized system for making a robot understand
the cost of bad behavior in a systematized way that
made sense to the robot as a as a demotivating incentive. Yeah,
(20:47):
Like shame comes to mind as another aspect of of
all this, Like how do you shame a robot? Do
you have to program a robot to feel shamee and
being uh, you know, made to give a public apology
or something? Yeah? Uh so, so they are you that
that it really only makes sense for robots to look
at legal remedies in an economic way, and then they
write quote it thus appears that Justice Holmes, archetypical bad man,
(21:09):
will finally be brought to corporeal form, though ironically not
as a man at all. And if Justice Holmes metaphorical
subject is truly morally impoverished and analytically deficient, as some accused,
it will have significant ramifications for robots. But yeah, thinking
about these incentives, it gets more and more difficult, Like
the more you try to imagine the particulars humans have
(21:32):
self motivations, you know, pre existing motivations that can just
be assumed. In most cases, humans don't want to pay
out money, Humans don't want to go to jail. How
would these costs be instantiated as motivating for robots? You
would have to you would have to basically force some
humans I guess meaning the programmers or creators of the robots,
(21:53):
to instill those costs as motivating on the robot. But
that's not always going to be easy to do because, okay,
imagine a robot does violate one of these norms and
it causes harm to somebody, and as a result, the
court says, okay, uh, someone, you know, someone has been
harmed by this negligent or failed autonomous vehicle, and now
(22:14):
there must be a payout. Who actually pays? Where is
the pain of the punishment located? A bunch of complications
to this problem arise like, it gets way more complicated
than just the programmer or the owner, especially because in
this age of artificial intelligence, there is a kind of
there's a kind of distributed responsibility across many parties. The
(22:36):
authors write, quote robots are composed of many complex components,
learning from their interactions with thousands, millions, or even billions
of data points, and they are often designed, operated, least
or owned by different companies. Which party is to internalize
these costs, The one that designed the robot or AI
in the first place, and that might even be multiple companies,
(22:57):
The one that collected and curated the data set used
to train and its algorithm in unpredictable ways, the users
who bought the robot and deployed it in the field.
And then it gets even more complicated than that, because
the authors start going into tons of ways that we
can predict now that it's unlikely that these costs will
be internalized in commercially produced why robots in ways that
(23:20):
are socially optimal, Because if if you're you're asking a
corporation that makes robots to take into account some type
of economic disincentive against the robot behaving badly, other economic
incentives are going to be competing with those disincentives, right,
So the author's right. For instance, if I make it
(23:41):
clear that my car will kill its driver rather than
run over a pedestrian, if the issue arises, people might
not buy my car. The economic costs of lost sales
may swamp the costs of liability from a contrary choice.
In the other direction, car companies could run into pr
problems if their cars run over kids. But simply it
is aggregate profits, not just profits related to legal sanctions,
(24:05):
that will drive robot decision making. And then there are
still a million other things to consider. I mean, one
thing they talk about is the idea that even within
corporations that produce UH robots and ai uh, the parts
of those corporations don't all understand what the other parts
are doing. You know. They say workers within these corporations
(24:25):
are likely to be siloed in ways that interfere with
effective cost internalization. UH. Quote. Machine learning is a specialized
programming skill, and programmers aren't economists. Uh. And then they
talk about why in many cases, it's going to be
really difficult to answer the question of why an AI
did what it did, So can you even determine that
(24:45):
the AI say, was was acting in a way that
wasn't reasonable, Like, how could you ever fundamentally examine the
state of mind of the AI well enough to to
prove that the decision it made wasn't the most reasonable
one from its own perspective of But then another thing
they raise, I think is a really interesting point, and
this gets into one of the things we talked about
in the last episode where thinking about culpability UH for
(25:11):
AI and robots actually makes us is going to force
us to re examine our ideas of of culpability and
blame when it comes to human decision making. Because they
talk about this the idea that quote, the sheer rationality
of robot decision making may itself provoke the ire of humans.
(25:32):
Now how would that be? It seems like we would say, okay, well,
you know, we want robots to be as rational as possible.
We don't want them to be irrational. But it is
often only by carelessly putting costs and risks out of
mind that we are able to go about our lives.
For example, people drive cars, and no matter how safe
(25:54):
of a driver you are, driving a car comes with
the unavoidable risk that you will harms one uh the
right quote. Any economist will tell you that the optimal
number of deaths from many socially beneficial activities is more
than zero where it. Otherwise, our cars would never go
more than five miles per hour. Indeed, we would rarely
(26:15):
leave our homes at all. Even today, we deal with
those costs and remedies law unevenly. The effective statistical price
of a human life in court decisions is all over
the map. The calculation is generally done ad hoc and
after the fact. That allows us to avoid explicitly discussing
politically fraught concepts that can lead to accusations of trading
(26:37):
lives for cash. And it may work acceptably for humans
because we have instinctive reactions against injuring others that make
deterrence less important. But in many instances robots will need
to quantify the value we put on a life if
they are to modify their behavior at all. Accordingly, the
companies that make robots will have to figure out how
(26:59):
much they value you human life, and they will have
to write it down in the algorithm for all to see,
at least after extensive discovery UH referring to like you
know what, the courts will find out by looking into
how these algorithms are created. And I think this is
a fantastic point, Like, in order for a robot to
make ethical decisions about living in the real world, it's
(27:21):
going to have to do things like put a price
tag on you know, what kind of risk to human
life is acceptable in order for it to do anything,
And we don't, and that seems monstrous to us. It
does not seem reasonable for any percent chance of harming
a human, of killing somebody to be the unacceptable risk
(27:43):
of your day to day activities. And yet it actually
already is that, you know, it always is that way
whenever we do anything, but we just like have to
put it out of mind, like we can't think about it. Yeah,
I mean, like, what's the alternative, right A programming monstrous
self delusion into the self driving car where it says
I will not get into a wreck on my on
(28:05):
my next route because I cannot That cannot happen to me.
It has never happened to me before, it will never happen.
You know, These sorts of you know, ridiculous, not even
statements that we make in our mind. It's just kind
of like assumptions, like that's that's the kind of thing
that happens to other drivers, and it's not going to
happen to me, even though we we've all seen the
(28:25):
you know, the statistics before. Yeah, exactly, I mean, I
think this is a really good point. And uh so,
in this case, the robot wouldn't even necessarily be doing
something evil. In fact, you could argue there could be
cases where the robot is behaving in a way that
is far safer, far less risky than the average human
doing the same thing. But the very fact of its
(28:47):
clearly coded rationality reveals something that is already true about
human societies, which we can't really bear to look at
or think about. So another thing that the authors explored
that I think is really interesting is the idea of
(29:08):
how robot punishment would make it like directly punishing the
robot itself, whether how that possibility might make us rethink
the idea of punishing humans. Uh Now, of course, it's
just the case that whether or not it actually serves
as any kind of deterrent, whether or not it actually
(29:29):
rationally reduces harm, it may just be unavoidable that humans
sometimes feel they want to inflict direct harm on a perpetrator,
as punishment for the crime they're alleged to have committed,
and that may well translate to robots themselves. I mean,
you can imagine we we've all i think raged against
an inanimate object before. We wanted to kick a printer
(29:51):
or something like that. Uh. And we talked in the
last episode about some of that psychological research about how
people mindlessly apply social rule to robots. The authors here right,
Certainly people punch or smash inanimate objects all the time.
Juries might similarly want to punish a robot not to
create optimal cost internalization, but because it makes the jury
(30:14):
and the victim feel better. The authors write later towards
their conclusion about the idea of directly punishing robots that quote,
this seems socially wasteful. Punishing robots not to make them
behave better, but just to punish them is kind of
like kicking a puppy that can't understand why it's being hurt.
The same might be true of punishing people to make
(30:34):
us feel better, but with robots, the punishment is stripped
of any pretense that it is sending a message to
make the robot understand the wrongness of its actions. Now
I'm pretty sympathetic personally to the point of view that
a lot of punishment that happens in the world is
not actually uh, is not actually a rational way to
reduce harm, but just kind of like is uh. You know,
(30:58):
if it serves any purpose, it is the purpose of
the emotional satisfaction of people who feel they've been wronged,
or people who want to demonstrate moral approbrium on on
the offender. But I understand that. You know, in some cases,
you could imagine that punishing somebody serves as an object
example that deters behavior in the future, and to the
(31:20):
extent that that is ever the case. If it is
the case, could punishing a robot serve that role, could
actually inflicting say like like punching a robot or somehow
otherwise punishing a robot serve as a kind of object
example that deters behavior in humans, say, say the humans
who will program the robots of the future. It's a
(31:40):
weird kind of symbolism to imagine. Yeah, I mean, when
you start thinking about, you know, the ways to punish robots,
I mean you think of some of the more ridiculous
examples that have been brought up in sci fi and
sci fi comedy like robot Hells and so forth. Um,
and or the just the idea of even destroying or
deleting a robot that is faulty or misbehaving. Um. But
(32:05):
but maybe, you know, maybe it ends up being something
more like I think of game systems right where say,
if you accumulate too many of uh say, madness points,
your I don't know, your movement is cut in half,
that sort of thing, and then that has a ramification
on how you play the game and to what extent
you can play the game well. And therefore, like playing
(32:26):
into the economic model, you know, it could it could
have sort of artificially constructed but very real consequences on
how well a system could behave, you know, but then again,
you could imagine ways that an AI might find ways
to to circumvent that, and say, well, if I play
the game a certain way where I don't need to
move at normal speed, I can just move at half
(32:48):
speed but have the benefit of getting to break these rules,
then who knows, you know, it just I feel like
there it seems an inescapable maze. Yeah. Well, that's that's
interesting because is edging toward another thing that the authors
actually talked about here, which is the idea of a
robot death penalty. Uh. And this is funny because I
(33:10):
again because personally, you know, I see a lot of
flaws in in applying a death penalty to humans. I
think that is a very flawed judicial remedy. But I
can understand a death penalty for robots. Like you know,
robots don't have the same rights as human defendants. If
a robot is malfunctioning or behaving in a way that
(33:32):
is so dangerous as to suggest it is likely in
the future to continue to endanger human lives to an
unacceptable extent, then yeah, it seems to me reasonable that
you should just turn off that robot permanently. Okay, But
but then again, and then it raises the question, well,
what about what what led us to this malfunction? Is
(33:52):
there something in the system itself that needs to be
remedied in order to prevent that from happening? Again, that's
like very good point, and the authors bring up exactly
this concern. Yeah, so they say, well, then again, so
a robot might not have human rights where you would
be concerned about the death penalty for the robot's own
good but you might be concerned about what you are
(34:12):
failing to be able to learn from. Allowing the robot
to continue to operate like that that could help you
refine AI in the future. Maybe not letting it continue
to operate in the wild, but I don't know, keeping
it operative in some sense because like, whatever it's doing
is something we need to understand better. So, you know,
with the robot prison instead of robot death penalty. UM,
(34:34):
and of course that the human comparison to be made
is equally as is frustrating because you end up with
scenarios where you'll have, um, a society that's very pro
death penalty. But then when it comes to doing the
same sort of backwork and saying, well, what led to
this case, what were some of the systematic problems, uh,
cultural problems, societal problems, I don't know, you know, well,
(34:56):
whatever it is that that led to this case that
needed to be remedied with death, should we correct those
problems too, And in some cases the answer seems to be, oh, no,
we're not doing that. We'll just we'll just do the
death penalty as is necessary, even though it doesn't actually
prevent us from reaching this this point over and over again.
I mean, I feel like It's one of the most
common features of the tough on crime mentality that it
(35:18):
is resistant to the idea of understanding why what led
a person to commit a crime. I mean, you've heard
I'm trying to think of an example of somebody, but
I mean you've heard the person say, oh, uh, you know, oh,
you're just gonna give some sob story about what happened
when he was a child or something like that. Yeah, yeah, yeah, yeah,
I've definitely encountered that that that counter argument before. Yeah,
(35:40):
but yeah, I mean I think we're probably on the
same page that it really probably is very useful to
try to understand what are the common underlying conditions that
you can detect when people do something bad. And of
course the same thing would be true of robots, right,
and it seems like with robots there would potentially be
room for true rehabilitation with with with these things. If not,
(36:02):
I mean, certainly you could look at it in a
software hardware scenario where like, Okay, the software's something's wrong
with the software, Well delete that put in some some
healthy software, um, but keep the hardware. Uh you know,
that's in a way that's rehabilitation right there. It's a
sort of rehabilitation that's not possible with humans. We can't
wipe somebody's mental state and replace it with a new,
(36:23):
factory clean mental state. You know, we can't go back
and edit someone's memories and traumas and what have you. Uh.
But with machines, it seems like we would have more
ability to do something of that nature. Yeah, though, this
is another thing that comes up, and I mean, of
course it probably would be useful to try to learn
from failed AI in order to better perfect AI and robots.
(36:46):
But on the other hand, in in basically the idea
of trying to rehabilitate or reprogram robots that do wrong, uh,
the authors point out that they're probably going to be
a lot of difficulties in enforcing, say, the the equivalent
of court orders against robots. So one thing that is
a common remedy in in legal cases against humans, as
(37:07):
you might get a restraining order, you know, you need
to stay fifty feet away from somebody right fifty feet
away from the plaintiff or something like that, or you
need to not operate a vehicle or you know something.
There will be cases where it's probably difficult to enforce
that same kind of thing on a robot, especially on
robots whose behavior is determined by a complex interaction of
(37:29):
rules that are not explicitly coded by humans. So you know,
most AI these days is not going to be a
series of if then statements written by humans, but it's
going to be determined by machine learning, which can to
some extent be sort of reverse engineered and and somewhat
understood by humans. But the more complex it is, the
harder it is to do that. And so there might
(37:50):
be a lot of cases where you know, you say, okay,
this robot needs to do X, it needs to obit,
you know, stay fifty feet away from the plaintiff or something,
but the person whoever is in charge of the robot
might say, I don't know how to make it do that.
Or the possibly more tragic or funnier example would be
the it discovers the equivalent of the drone with the
(38:12):
wormhole that we talked about in the last episode, right
where it's the robot is told to keep fifty feet
of distance between you and the plaintiff. Robot obeys the
role by lifting the plaintiff and throwing them fifty feet away.
So to read another section from Limeley and casey. Here,
they're right to issue an effective injunction that causes a
robot to do what we want it to do and
(38:32):
nothing else, requires both extreme foresight and extreme precision in
drafting it. If injunctions are to work at all, courts
will have to spend a lot more time thinking about
exactly what they want to happen and all the possible
circumstances that could arise. If past experience is any indication,
courts are unlikely to do it very well. That's not
(38:54):
a knock on courts. Rather, the problem is twofold words
are notoriously bad at conveying our tended meaning, and people
are notoriously bad at predicting the future. Coders, for their part,
aren't known for their deep understanding of the law, and
so we should expect errors in translation even if the
injunction is flawlessly written. And if we fall into any
(39:15):
of these traps, the consequences of drafting the injunction incompletely
maybe quite severe. So I'm imagining you issue a cord
order to a robot to do something or not do something.
You're kind of in the situation of like the monkeys
pawl wish you know, right, like you, Oh, you shouldn't
have phrased it that way. Now you're in for real trouble.
(39:36):
Or what's the better example of that? And there's some
movie we were just talking about recently with like the
Bad Genie who when you phrase a wish wrong, does
you know works it out on you in a terrible way. Um,
I don't know. We were talking about Lepricn or wish
Master or something. Does LEPrecon grant wishes? I don't remember
LEPrecon granting any wishes. What's he do? Then? I think
(39:56):
the only one I've seen is Lepricn in space, so
it is. I'm a little foggy on the the logic.
I don't think he grants wishes. He just he just
like rides around on skateboards and punishes people. He just
attacks people who try to get his gold and stuff. Well,
but Lepricans in general are known for this sort of thing. Though,
where are they? Okay, if you're not precise enough, they'll
(40:17):
work something in there to cheat you out of your
your your prize. I'm trying to think, so like, don't
come within fifty feet of the plaintiff, and so the
robot I don't know, like it builds a big yard
stick made out of human feed or something. Yeah, yeah,
has fifty ft long arms again to lift them into
the air. Something to that effect. Or say the say
(40:38):
it's uh, for some reason, schools are just too dangerous
and this self driving car is not permitted to go
within um, you know some you know so many blocks
of an active school, and so it calls in a
bomb threat on that school every day in order to
get the kids out so that it can actually go
buy I don't know, something to that effect. Maybe, well,
(41:00):
that reminds me of a funny observation that uh, not
that this is lawful activity, but uh, a funny observation
that the authors make towards their conclusion. They bring up
there are cases of of crashes with autonomous vehicles where
the autonomous vehicle didn't crash into someone. The autonomous vehicle,
(41:23):
you could argue, caused a crash, but somebody else ran
into the autonomous vehicle because the autonomous vehicle did something
that is legal and presumably safe but unexpected. And examples
here would be driving the speed limit in certain areas
or coming to a complete stop at an intersection. And
(41:45):
this is another way that the authors are bringing up
the idea that, uh, examining robot logic is really going
to have to cause us to re examine the way
humans interact with the law, because there are cases where
people cause problems that lead to harm by a baying
the rules. Oh yeah, Like I think of this all
the time, and imagine most people do when when driving
(42:06):
for any long distance, because you have the speed limit
as it's posted, you have the speed that the majority
of people are driving. Um, you know, you have that
sort of ten mile over zone. Then you have the
people who are driving exceedingly fast. Then you have that
minimum speed limit that virtually nobody is driving forty miles
(42:26):
per hour on the interstate, but it's posted. Uh, and
therefore it would be legal to drive forty one mile
per hour if you were a robot and weren't in
a particular hurry. And perhaps that's you know, maximum efficiency
for your travel. Uh. There's so many, so many things
like that to think about, and I think we're probably
not even very good at at guessing until we encounter
(42:48):
them through robots. How many other situations there are like
this in the world, where where you can technically be
within the bounds of the law, like you're doing what
by the book you're supposed to be doing. Actually, it's
really dangerous to be doing it that way. So how
are you supposed to interrogate a robot state of mind?
And when it comes to stuff like that. But so anyway,
(43:09):
this leads to the author's talking about the difficulties in
in robots state of mind valuation, and they say, quote,
robots don't seem to be good targets for rules based
on moral blame or state of mind, but they are
good at data. So we might consider a legal standard
that bases liability on how safe the robot is compared
to others of its type. This would be a sort
(43:31):
of robotic reasonableness test that could take the form of
a carrot, such as a safe harbor for self driving
cars that are significantly safer than average or significantly safer
than human drivers. Or we could use a stick holding
robots liable if they lagged behind their peers, or even
shutting down the worst ten percent of robots in a
(43:52):
category every year. So I'm not sure if I agree
with this, but this was an interesting idea to me.
So instead of like trying to to interrogate the underlying logic,
of a type of autonomous car, robot or whatever. Because
it's so difficult to try to understand the underlying logic.
What if you just compare its outcomes to other machines
(44:16):
of the same genre as it, or two humans. I mean,
you can imagine this working better in the case of
something like autonomous cars, then you can and you know
other cases where the robot is essentially introducing a sort
of a new genre of agent into the world. But
autonomous cars are in many ways going to be roughly
equivalent in outcomes to human drivers in in regular cars,
(44:39):
and so would it make more sense to try to
understand the reasoning behind each autonomous vehicle's decision making when
it gets into an accident, or uh, to compare its
behavior to I don't know, some kind of aggregate or
standard of human driving or other autonomous vehicles, or maybe
we just we just tell it, Look, most humans I
(45:00):
have like selfish bastards, So just go do it and
do what you gotta do well. I mean, I would
say that there is a downside risk to not taking
this stuff seriously enough, which is uh, which is something
like that, I mean something like essentially letting robots go
hog wild? Because they can well be designed and not
(45:21):
saying that anybody would be you know, maliciously going wahaha
and rubbing their hands together while they make it this case.
But you know, you could imagine a situation where there
are more and more robots entering the world, where the uh,
the corporate responsibility for them is so diffuse that nobody
can locate the one person who's responsible for the robots behavior,
(45:42):
and thus nobody ever really makes the robot, you know,
behave morally at all. So robots just sort of like
become a new class of superhuman psychopaths that are immune
from all consequences. In fact, I would say that is
a robot apocalypse scenario I've never seen before done in
a movie. It's always like when the robots are terrible
(46:03):
to us, it's always like organized, it's always like that,
you know, they okay, they decide humans are a cancer
or something and there so they're going to wipe us out.
What if instead, it's the problem is just that robots,
sort of, by corporate negligence and distributed responsibility for their
behavior among humans, robots just end up being ultimately a
(46:23):
moral and we're flooded with these a moral critters running
around all over the place that are pretty smart and
really powerful. I guess there are You do see some
shades of this in UM in some futuristic sci fi genres.
I'm particularly thinking of some of the models of cyberpunk genre,
where the the corporation model has been has been embraced
(46:47):
as the way of understanding the future of Aiyes, um,
but but yeah, I think I think for the most
part this this scenario hasn't been as explored as much
we tend to. We tend to want to go for
the eve overlord or the out of control kilbot rather
than this, right, yeah, you want you want an identifiable villain,
just like they do in the courts. But yeah, sometimes uh,
(47:10):
sometimes corporations or manufacturers can be kind of slippery and
saying like whose thing is this? Thank so, I was
thinking about all this, about the idea of you know,
particularly self driving cars being like the main example we
we ruminate on with with this sort of thing. UM,
(47:32):
I decided to to look to the book Life three
point oh by Max teg Mark. UM, which is a
is a really great book came out of a couple
of years back, and Max teg Mark is a Swedish
American physicist, cosmologist and machine learning researcher. If you've been
listening to the show for a while, you might remember
that I briefly interviewed him, had like a mini interview
(47:52):
with him at the World Science Festival, UH several years back. Yeah,
and I know I've referenced his book Our Mathematical Universe
previous episodes. Yeah. So so these are these are both
books intended for a wide audience, very very readable. Life
three point oh does a does a fabulous job of
walking the reader through these various scenarios of UH in
(48:14):
many cases of of AI scendency and how it could work.
And he gets into this this topic of UM of
legality and UM and and AI and self driving cars.
Now he does not make any allusions to Johnny Cab
in Total Recall, but I'm going to make allusions to
Johnny Cab in Total Recall is a way of sort
of putting a manic face on self driving cars. How
(48:36):
did I get here? The door opened, you got in,
it's sound reason, so UM, imagine that you're in a
self driving Johnny cab and it recks. So the basic
question you might ask is are you responsible? For this
wreck as the occupant. That seems ridiculous to think so, right,
you weren't driving it, You just told it where to go? Um,
(48:59):
Are the owners of the Johnny Cab responsible? Now? This
seems more reasonable, right, sure, but again it runs into
a lot of the problems we were just raising there. Yeah,
but tag Mark points out that there is this other
option and that American legal scholar David of Latic has
pointed out that perhaps it is the Johnny Cab itself
(49:19):
that should be responsible. Now we've been already been discussing
a lot of this, like what does that mean? What
does it mean if a have a Johnny Cab of
a self driving vehicle is responsible for the wreck that
it is in? What you know, how do we even
begin to make sense of that statement? Do you do
you take the damages out of the Johnny cabs bank account? Well,
(49:40):
that's the thing. We we kind of end up getting
into that scenario because if the Johnny Cab has responsibilities
than than tag Mark rights, why not let it own
car insurance? Not only would this allow for it to
financially handle accidents, it would also potentially serve as a
design incentive and a purchasing incent incentive. So the the
(50:01):
idea here is the better self driving cars with better
records will qualify for lower premiums, and the less reliable
models will have to pay higher premiums. So if the
Johnny Cab runs into enough stuff and explodes enough, then
that brand of Johnny Caps simply won't be able to
take to the streets anymore. Oh this is interesting, okay,
(50:22):
So in order to mean this is very much the
economic model that we were discussing earlier. So when Schwarzenegger
hops in and Johnny Cap says, where would you like
to go? And he says, drive, just drive anywhere? And
he says, I don't know where that is. And so
so his incentive to not just like blindly plow forward
is how much would it cost if I ran into
something when I did that? Yeah? Exactly. But but Tamar
(50:47):
points out that the implications of letting a self driving
car own car insurance it ultimately goes beyond this situation,
because how does the Johnny Cab pay for its insurance
policy that, again it hypothetically owns in this scenario? Should
we let it own money in order to do this?
Does it have its own bank account? Like you alluded
to earlier, especially if it's operating as an independent contractor
(51:09):
of sorts, perhaps paying back certain percentages or fees to
a greater cab company. Like maybe that's how it would work.
And if it can own money, well, can it also
own property like perhaps at the very least it rents
garage space, uh, but maybe it owns garage space for itself, um,
you know, or a maintenance facility or the tools that
(51:31):
work on it. Does it own those as well? Does
it own spare parts? Does it own the bottles of
water that go inside of itself for its customers? Does
it own the complementary wet towels for your head that
it keeps on hand? Yeah? Um, I mean if nothing else.
It seems like if it owned things like, the more
things it owns, the more things that you could potentially
(51:54):
um uh invoke a penalty upon through the legal system.
And if they can own money and property and again
potentially themselves, then Tegmark takes it a step further. He writes,
if this is the case, quote, there's nothing legally stopping
smart computers from making money on the stock market and
using it to buy online services. Once the computer starts
(52:17):
paying humans to work for it. It can accomplish anything
that humans can do. I see. So you might say
that even if you're skeptical of an AI's ability to
have say the emotional and cultural intelligence to uh to
write a popular screenplay or you know, create a popular movie,
it just doesn't get humans well enough to do that.
(52:37):
It could, at least if it had its own economic
agency pay humans to do that, right, right and um.
Elsewhere in the book, tag Mark gets into a lot
of this, especially the entertainment idea, presenting a scenario by
which machines like this could gain the entertainment industry in
order to to ascend to you know, extreme financial power.
A lot of it is just like sort of playing
(52:59):
the algorithms, you know, like doing corporation stuff and then
hiring humans as necessary to to bring that to fruition.
You know, I mean, would this be all that different
from any of our like I don't know, Disney or
Comic book Studios or whatever exists today. Yeah, yeah, exactly. Um.
So you know we already know the sort of prowess
(53:20):
that computers have when it comes to the stock market.
Tech Mark you know, points out that, you know, you know,
what we have examples of this in the world already
where we're using AI, and he writes that it could
lead to a situation where most of the economy is
owned and controlled by machines. And this, he warns, is
not that crazy, considering that we already live in a
world where non human entities called corporations exert tremendous power
(53:43):
and hold tremendous wealth. I think there there is a
large amount of overlap between the concept of corporation and
the concept of an AI. Yeah, and uh, And then
there are steps beyond this as well. If if machines
can do all of these things. So if they can,
if they can if a machine can own property, if
it can potentially own itself, if it can if it
(54:03):
can buy things, if it can invest in the stock market,
if it can accumulate financial power. If you can do
all these things, then should they also get the right
to vote as well? You know, it's it's potentially paying taxes?
Does it get to vote in addition to that? And
then if not, why and what becomes the caveat that
determines the right to vote in the scenario? Now, if
(54:25):
I understand you, right, I think you're saying the tech
mark is is exploring these possibilities as stuff that he
thinks might not be as implausible as people would suspect,
rather than his stuff where he's like, here's my ideal world,
right right, He's saying like, look, you know, this is
already where we are. We know what a I can do,
and we can easily extrapolate where it might go. These
(54:47):
are the scenarios we should we should potentially be prepared
for in much the same way that nobody, nobody really
at an intuitive level, believes that a corporation is a person,
like a like a human being as a person. Uh,
you know, it's at least done well enough at convincing
the courts that it is a person. So would you
not be able to expect the same coming out of
machines that were sophisticated enough right and convincing the court is? Uh,
(55:12):
I'm glad you brought that up, because that's another area that Tegmark gets into. So what does it mean when judges have to potentially judge AIs? Would these be specialized judges with technical knowledge and understanding of the complex systems involved, or is it going to be a human judge judging a machine as if it were a human? You know, both of these
(55:35):
are possibilities. But then here's another idea that Tegmark discusses at length: what if we use robo judges? And this ultimately goes beyond the idea of using robo judges to judge the robots; they could potentially be used to judge humans as well. Because while human judges have limited ability to understand the technical knowledge of cases, robo judges,
(55:57):
Tegmark points out, would in theory have unlimited learning and memory capacity. They could also be copied, so there would be no staffing shortages. You need two judges today? We'll just copy and paste. Right, that's a simplification, but, you know, essentially, once you have one, you can have many. This way justice could be cheaper and just maybe a little more just by removing the human from the equation,
(56:20):
or at least so the machines would argue. Right, but then the other side of the thing is, we've already discussed how human-created AI is susceptible to bias. So we could create a robo judge, but if we're not careful, it could be bugged, it could be hacked, it could be otherwise compromised, or it just might have these various
(56:42):
biases that it is using when it's judging humans or machines. And then you'd have to have public trust in such a system as well. So we run into a lot of the same problems we run into when we're talking about trusting the machine to drive us across town. Yeah. Like, with a robot judge, now I'm certainly not granting this because I
(57:04):
don't necessarily believe this is the case, but even if it were true that a robot judge would be better at judging cases than a human, and, like, more fair and more just, you could run into problems with public trust in those kinds of judges because, for example, they make the calculations explicit, right, the same way we talked about placing a certain value on a human life.
(57:27):
It's something that we all sort of do, but we don't like to think about it or acknowledge we do it. We just do it at an intuitive level that's sort of hidden in the dark recesses of the mind, and we don't think about it. A machine would have to, like, put a number on that, and for public transparency reasons, that number would probably need to be publicly accessible. Yeah. Another area, and this is
(57:48):
another topic in robotics that, you know, we could easily discuss at extreme length: robotic surgery. While we continue to make great strides in robotic surgery, and in some cases the robotic surgery route is indisputably the safest route, there remains a lot of discussion regarding, you know, how robot
(58:09):
surgery is progressing, where it's headed, and how malpractice potentially factors into everything. Now, despite the advances that we've seen, we're not quite at the medical droid level, you know, like the autonomous surgical bot. But as reported by Denise Grady in the New York Times just last year, AI coupled with new imaging techniques
(58:32):
is already showing promise as a means of diagnosing tumors as accurately as human physicians, but at far greater speed. So it's interesting to think about these advancements, but at the same time realize that, particularly in AI and medicine, we're talking about
(58:55):
AI-assisted medicine or AI-assisted surgery. So the human-AI relationship is in these cases not one of replacement, but of cooperation, at least for the near term. Yeah. Yeah, I see that,
because I mean, there are many reasons for that, but
one of the reasons that strikes me is that it comes back to a sometimes irrational desire
(59:17):
to inflict punishment on a person who has done wrong, even if it doesn't, like, help the person who has been harmed in the first place. There are certain intuitions we have, and I think one of them is we feel more confident if there is somebody in the loop who would suffer from the consequences of failure, you know. Like it
(59:40):
doesn't just help to hear, like, oh no, I assure you the surgical robot has, you know, strong incentives within its programming not to fail, not to botch this surgery and, you know, remove one of your vital organs. Yeah. On some level, we want that person to know their career is on the line, their reputation is on the line. You know,
(01:00:01):
I think most people would feel better going into surgery with the knowledge that if the surgeon were to do something bad to you, there would be consequences. It's not just enough to know that the surgeon is going to try really hard not to do something bad to you. You also want the, like, second-order guarantee that if the surgeon were to screw up and take out one of
(01:00:23):
your vital organs, something bad would happen to them and
they would suffer. But with a robot, they wouldn't suffer.
It's just like, oh, whoops. I wonder if we end up reaching a point with this, in this discussion where, you know, we're talking about robots hiring people, do we end up in a position where AIs hire
(01:00:44):
humans not so much because they need human expertise or human skills or human senses, but the ability to feel pain. Yeah, and to be culpable. Like, they need somebody who will bear the blame, essentially AIs hiring humans to be scapegoats in the system, or in their
(01:01:04):
particular job. So they're like, yeah, we need a human in the loop. Not because I need a human in the loop, I can do this by myself, but if something goes wrong, and there's always a certain chance that something will happen, I need a human there that will bear the blame. Every robot essentially needs a human copilot, even in cases where robots
(01:01:25):
far outperform the humans, just because the human copilot has to be there to accept responsibility for failure. Oh yeah. In the first episode, we talked about the idea of there being, like, a punchable plate on a robot for when we feel like we need to punish it. It's like that, except instead of a specialized plate on the robot itself, it's just a
(01:01:46):
person that the robot hired. A whipping boy. Oh, this is so horrible and so perversely plausible. I can kind of see it. It's like, in my lifetime, I can see it. Well, thanks for the nightmares, Rob. Well, no, I think we've had plenty of potential nightmares discussed here.
But I mean we shouldn't just focus on the nightmares.
(01:02:08):
I mean, again, to be clear, you know, the idea of self-driving cars, the idea of robot-assisted surgery, we're ultimately talking about the aim of creating safer practices, of saving human lives. So, you know, it's not all nightmares and robot hellscapes. But we have to be realistic about
(01:02:31):
the very complex scenarios and tasks that we're building things around and unleashing machine intelligence upon. Yeah. I mean, I made this clear in the previous episode: I'm not, like, down on things like autonomous vehicles. I mean, ultimately I think autonomous vehicles are probably a good thing, but I do think it's really important for people to
(01:02:54):
start paying attention to these unbelievably complicated philosophical, moral, and legal questions that will inevitably arise as more independent and intelligent agents infiltrate our world. All right. Well,
on that note, we're gonna go ahead and close it out.
But if you would like to listen to other episodes
(01:03:14):
of Stuff to Blow Your Mind, you know where to
find them. You can find our core episodes on Tuesdays
and Thursdays in the Stuff to Blow Your Mind podcast feed.
On Mondays, we tend to do listener mail. On Wednesdays, we tend to bust out an Artifact shorty episode, and on Fridays we do a little Weird House Cinema, where we don't want to talk about science so much as we just talk about one weird movie or another, and
(01:03:36):
then we have a little rerun on the weekend. Huge thanks, as always, to our excellent audio producer Seth Nicholas Johnson.
If you would like to get in touch with us
with feedback on this episode or any other, to suggest
a topic for the future, or just to say hello,
you can email us at contact@stufftoblowyourmind.com. Stuff to Blow Your Mind is
(01:04:02):
a production of iHeartRadio. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.