
March 4, 2015 55 mins

As machine intelligence becomes more complex, our machines will increasingly behave in ways that surprise us. What happens when one of them does something that we find socially, legally or even morally unacceptable?



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Brought to you by Toyota. Let's go places. Neither the
creators nor podcasters for Forward Thinking are bona fide legal counsel.
Consult a local artificial intelligence before participating in actions that
may or may not result in legal consequences. AI crime may or may not be punishable in your quadrant. Welcome to Forward Thinking. Hey there, and welcome to Forward Thinking.

(00:25):
The podcast that looks at the future and says, "Secret, secret, I've got a secret." I'm Jonathan Strickland, I'm Lauren Vogelbaum, and I'm Joe McCormick. So how's everybody doing today? Not too bad?
How about you? Joe? I am doing wonderful because we
have a podcast topic that I'm very excited about, very excited.
It's about breaking the law, breaking the law, which

(00:47):
is the second time we have referred to that while we were in the studio, and it's still funny to us. Yes, and so it's not about the future of Judas Priest. No. Oh, did you not get that reference at first? Oh, no, no, no, I was just like, what is the future of Judas Priest? Can we talk about that today instead? Can we just scrap this entire episode? I cannot do that, so

(01:09):
let's focus. Because I don't have the notes on Judas Priest, I can't really talk about it. I don't think that would be an incredibly fruitful discussion. No. Instead, I want to talk about machines doing stuff they're not supposed to do: breaking the law, doing things that are socially unacceptable. I thought you were talking about, like, when my toaster burns the toast, but you're going a little further than that. Yeah,

(01:32):
that's lame. So this would be like my toaster going on a three-state arson spree, burning down schools or something. That would be more akin to what we're talking about. It actually would, because I want to
ask a general question about where we draw the line
for humans having to accept responsibility for what their machines do.

(01:55):
Which humans have to accept the responsibility when a machine does something bad, sure, and are any types of machines exempt? So, this was actually inspired by some news
stories I saw recently, But I want to talk about
some hypotheticals first, and then we'll get into some really
bizarre actual examples of what it looks like when a

(02:16):
semi-autonomous machine does something not so cool. Okay, so let's say you are a roboticist. You enjoy building robots. All right, that I could imagine, sure. And
the way you express this enjoyment is that you build
a bipedal humanoid robot and program it to strut down

(02:36):
the sidewalk, pushing random pedestrians into traffic. Well, that does
not sound like me. But okay. Are you sure? I've seen you pretty cranky some days. If that street were Twitter... if I were pushing pedestrians into Twitter, I'm all right with that. Just right into the flow
of the tweets. They're falling into the birds, the fail

(02:58):
whale falls on top of them. But this example, this
this hypothetical example you give where we we have a
person who has created a machine specifically with the intent
of harming others. Yeah, this is sort of a starting
point because it should be obvious to everyone that the
person who created this machine bears responsibility for what it does,
right right, Yeah, because they designed the thing to do

(03:22):
exactly what it is doing, which is putting people into potentially harmful situations. Yeah. I would call this sort of like
the weapon orientation towards robotics. You're simply using the robot's body and programming as a weapon. Not much different than
if you're using a big stick to bash somebody on
the head. It's just like a more complex stick, right.

(03:44):
So in other words, the robot itself is not at fault.
It's actually carrying out the programming and the design for which it was built. It's really the person
who was behind that design and programming who's truly at
fault in this case. And furthermore, the robot doesn't know
what it's doing. It's not aware of the fact that
it's causing harm to humans because it doesn't understand complex situations, right,

(04:05):
just as an industrial robot doesn't understand that it's
building a car. It's just following a very specific set
of motions that it repeats until someone stops it, or
like your Roomba doesn't understand that it's freaking your cat right out. Yeah, well, I mean, we have
to assume these things, assuming robots don't have secret consciousness.

(04:27):
I'm fairly confident with that. But yeah, slightly
different scenario. What if you imagine the same thing I
just described, but one person writes the code for a
bipedal robot that walks around pushing people, but never uploads
it into a robot body because this programmer wrote the
program as a joke. Then another person comes along takes

(04:48):
that joke program, uploads it into a robot body, not understanding what it would do when activated. Who's responsible then? So, yeah, this is interesting. I started looking into court cases for
the closest approximation I could find for these kind of
scenarios, because obviously we don't really have case law established

(05:11):
for this kind of stuff because we're not quite there yet.
But if you're thinking, why are they even talking about this? It's never gonna happen. Just wait till we talk about a few cases later
in this podcast. Right. And so some of the law I'm looking at refers back to gun laws, gunmakers,

(05:32):
gun sellers, that kind of thing. And the courts have
found that gunmakers and gun sellers can be held accountable
for practices that could end up allowing guns to enter
the hands of criminals or gun traffickers if they have
not shown that they have taken the proper steps, if
they have been negligent in some way or have purposefully allowed

(05:54):
this to happen, they can be held criminally responsible. So
it would probably be a case by case basis. Let's
say that we have this hypothetical situation you have proposed,
where someone has created a potentially hazardous program but not actually executed that program. Someone else takes that program
and uploads it to a robot, not knowing what the

(06:16):
program does. That would be something that would have to
be decided within an actual case, right, And I would
imagine that it would be a very complicated, drawn-out case, pricey, with a lot of different appeals, because the actual situation you've laid out here has so much wiggle room in either direction that, you know, it's impossible

(06:36):
for us to say what the outcome would be. Oh yeah, yeah,
and it obviously depends on the amount of harm done.
But there are certainly, for example, involuntary manslaughter laws, sure, which means that you are still
responsible for someone's death even though you completely did not
mean to do it. Yeah, it's really about establishing the level of accountability. And, you know, whether

(06:59):
intent was there or not, it would be a complicated case. But I imagine
that ultimately both parties would be held responsible. Both humans in this hypothetical situation would likely be held responsible; to what extent would depend upon the actual parameters of the situation. Yeah, okay, here's another one, the third

(07:21):
one that would be sort of analogous to the way
we have to ask questions about weapons that exist today.
What if one person builds and programs a pushing robot
and sells it to somebody else, and then the new
owner releases it onto the sidewalk and it does its thing.
Who's responsible then? I think both parties would be held responsible.
I think actually this one is way easier than the

(07:42):
first hypothetical situation because you have someone who has designed
a device specifically to put others in harm's way. There's no
other reason for that device, Like, it's not a tool
used for anything else. Okay, sure, this is a special concept, because there aren't really that many people-pushing robots around right now, but in the

(08:04):
incredible future when we all have robots that are capable
of pushing people. Um, I don't know, it's a complicated and big question, and I would suspect that the maker of the object could, you know, make any number of arguments about the actual purpose of the object, and that it would be the person who

(08:24):
sold the dangerous thing. Well, I can see your argument, because you could state that
there are a lot of things out there where the person who created it could argue their way out of it. You know, I just built this thing. I
didn't put it to use, and it wasn't my intent
to have this be something to harm you know, innocent

(08:47):
civilians or whatever. But in this particular hypothetical situation, it's
hard to imagine a world where someone couldn't successfully argue
you built a robot with the specific purpose to push people. Well, let's make it more complicated, because we just started arguing about what was

(09:09):
supposed to be the really easy, obvious example. Let's say
you build a bipedal robot that is simply designed to
walk forward on a flat surface. It's not designed to
hurt anyone, and in fact, it has safety measures in
place that are supposed to keep it from hurting anyone.
But the problem is the person who programmed it's walking

(09:30):
behavior did not write very good object recognition or collision
detection code. So it walks down the sidewalk, occasionally accidentally
pushing anyone who gets in its way. Is the programmer
criminally liable? Could this be considered criminal negligence if someone's hurt?
Probably not criminal negligence, at least not right now. I mean,

(09:51):
if you look at correlates in other industries, this would be something akin to a defect, and you
could pursue civil cases against the manufacturer, programmer, etcetera. You
would actually have to identify which parties you would want
to sue in pursuit of getting damages from whatever this

(10:13):
you know, entity is, and I would imagine it would follow that same route. So you wouldn't necessarily be able to sue them, or you wouldn't be able to pursue a claim of criminal negligence necessarily, but you might be able to say this defect in this device led to injury and suffering. Therefore, I'm going

(10:37):
to ask for some form of restitution on that,
and liability would certainly still be on the side of
whoever made the thing in that case. Yeah, and again
I do think that it probably depends on the severity
of the damage done of course, yeah, especially if you're
talking about like lawsuits or something, right, Yeah, but yeah,

(10:58):
let's muck this up even more. One last hypothetical before
we get to the real examples. What if instead of
one programmer, many waves of graduate students and professors in
a university artificial intelligence lab work on various stages of
source code for a robot that is designed to find

(11:19):
the safest and best way to navigate a busy sidewalk,
and over time, this robot gets very complex. It's a
very complex piece of artificial intelligence. Its navigation decision-making is displaying unexpected emergent behaviors, the way we would expect any highly advanced artificial intelligence program to, and in

(11:39):
one particular test, it decides to kick a slow poke
out of the way. Now, it was never programmed to kick.
In fact, you know, we can stipulate they put some
kind of safety measures in place to make sure it
didn't hurt people. But this program had enough complexity and
freedom that it basically invented the move of a kick

(12:00):
as a novel solution, sort of like how we talked
about the artificial intelligence program that was able to identify
what a cat was, define what a cat was by
seeing enough examples of a cat. In this case, we're
talking about an artificially intelligent program that comes up with a novel solution to trying to get through a particular

(12:21):
an obstacle. Yeah. Yeah, So an unspecified emergent behavior from
this robot or computer program does something socially unacceptable. Are
the creators held accountable for it? This one is particularly tricky. Yeah.
I think at this point we just burn all computers
and go back to living in caves. And it's

(12:43):
possible that the only solution to this problem, and we'll talk more about this in a little bit, when we get to this level of complexity, this level of sophistication, is to not necessarily look back on the people who made it, who designed it. This
is where we start to perhaps discuss the possibility of

(13:05):
extending some form of the concept of personhood to robots,
meaning that you'd have to have some form of liability
upon the robot itself. Usually you'd think like, well, how do you hold a robot responsible? I mean, you can't punish a robot. You can't, you know, go after a robot's bank account; a robot doesn't have

(13:26):
a bank account. So in this case, you would have
to create a whole new type of industry, essentially robot insurance,
which would cover this sort of thing, and you would
end up having the robot insurance would have to be
the thing that would cover that liability. And in fact,
this is something that people have honestly and seriously suggested.

(13:48):
We'll talk more about personhood in a little bit.
We're going to discuss a specific implementation of robots and
robot intelligence in the form of autonomous cars, a subject
we've talked about a lot on this show, yes, and
which is more practical than the walking robots we have
been using as an example here, right right, So, so
the kung fu robots will take a back seat figuratively speaking,

(14:12):
in our autonomous car, and we will discuss personhood with that.
But we've got some other things to talk about first. Sure. Well,
there were a couple of news items I saw recently
that made me want to talk about this subject in general,
and they were both examples of where computer programs and
you know, we've been using the word robot or bot kind of loosely. You know, there are embodied robots that

(14:35):
have some kind of physical form. But we could also
just be talking about computer programs. So these are computer
programs that one way or another violated conventions in a
way that was unacceptable at some level to law enforcement.
So one of them is a shopper bot that bought drugs. Okay, Wait,

(14:59):
you're gonna have to go a little further into detail. Yeah, okay,
So we want to go back to last year. There
were some Swiss artists based in London that called themselves Mediengruppe Bitnik. I hope I pronounced that right. It starts with an exclamation point before the Mediengruppe. Okay.

(15:19):
And they conceived an art project exploring the existence and
implications of underground marketplaces on the deep Web. So I
think they were sort of trying to get at what
it means that we have these just continuous, ongoing underground
markets for illicit goods, all right. And they created a

(15:40):
program called the Random Darknet Shopper. It is an online
shopping bot. It's a computer program that once a week
goes into a marketplace on the deep Web. Brief side note: Jonathan, explain the deep Web in less than a hundred words.
All right. So it's theoretically the untraceable anonymous web where

(16:05):
you can do clandestine things without having all your online movements tracked. Right. So, if there's an iceberg that is
the Worldwide Web. The part poking out the top is Google, Yahoo,
all the normal stuff we do, and then below the
surface is this massive hive of scum and villainy. Well
it's not, I would argue, it's not as massive as

(16:28):
what the iceberg example would have you believe. But yes,
that otherwise is accurate. Okay. Anyway, so backing up, this
computer program, once a week goes onto a marketplace on
the deep Web with a hundred dollars worth of bitcoin
to spend, okay, and it makes a random purchase each week.
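
(For the curious, the weekly loop such a bot might run is only a few lines of code. Here is a minimal Python sketch; the listings, prices, and function names are invented for illustration, not the artists' actual code.)

import random

WEEKLY_BUDGET_USD = 100  # the bot's weekly allowance, funded in bitcoin

def fetch_listings():
    # Stand-in for browsing the marketplace; the real bot shopped a live
    # deep-web market. These items and prices are made up.
    return [("fake jeans", 60), ("e-book bundle", 25), ("stash can", 35)]

def weekly_purchase():
    affordable = [(item, price) for item, price in fetch_listings()
                  if price <= WEEKLY_BUDGET_USD]
    # The legally interesting part: the pick is random, with no filter
    # on what the item actually is.
    item, price = random.choice(affordable)
    print(f"Ordering {item} for ${price}")

weekly_purchase()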
All right, So what are some of the purchases. Well, yeah,

(16:50):
here is a sampling of some things the Random Darknet Shopper randomly bought: fake Diesel jeans, of course; a stash can that looks like a can of Sprite. I believe a stash can is to hide drugs or other illicit substances in, I would imagine. Correct. Or jewelry, if you don't want robbers to find your jewelry. That

(17:13):
is a very good cover story Lauren. Also it bought
a Lord of the Rings e-book collection, which you might store your e-hobbits in. A baseball cap with a hidden camera inside, of course. Some Nike shoes. It came up with a fake Hungarian passport scan, ten

(17:37):
packs of Chesterfield Blue cigarettes from Moldavia. And here's the big one: a packet of ten 120-milligram tablets of MDMA. That's ecstasy. Ah.
And so all of these things, I believe, were delivered by the sellers to the artists, and then

(18:00):
they were displayed at an art exhibition about this shopper at the Kunst Halle Sankt Gallen, called "The Darknet: From Memes to Onionland," and that exhibit was open through January eleventh. So they had to know, the
artists had to know when they made this Darknet shopper,

(18:20):
that it was possible, maybe even very likely, that this
shopping bot would buy something illegal. Well, yeah, I mean,
one of the things is, they're looking at underground markets, right. They could have had it go
on Amazon if they had wanted it to bring back
nothing but chocolate and at the very worst, like novelty
sex toys, though you may note from the things I

(18:43):
listed that while most of its purchases were somewhat sketchy,
most of them also were not illegal, and there are
many not-illegal things on there. Sure. Yeah, I mean it's
It's one of the things we like to mention is
the fact that while there are plenty of examples of illegal goods, people using the Deep Web for,

(19:05):
you know, illegal purposes, that's not the only reason.
There are plenty of people who are just very concerned
about their privacy. They don't want they don't want their
purchases tracked by people. They want the freedom to be
able to shop without any kind of uh, you know, supervision,
whether it's corporate or government or whatever. And it has
nothing to do with trying to shop for illegal goods. Sure,

(19:28):
but I would guess that probably the artists in question,
we're sort of hoping. I mean, it's hard to speculate,
you know, we can't really know, but yeah, they must
have known that it was highly possible that it would
buy something illegal, even though they never explicitly told it
to do so. Right, So, what's the deal? Are these

(19:49):
artists criminally liable in countries that have drug laws against buying something like MDMA? And should they be held criminally liable in such a case? I mean,
putting aside whatever you think about drug laws and all that,
just assuming this program bought something that's against the law,
should the people who created

(20:11):
something like this be punishable under the law? Right. Considering, assuming of course that the shopping truly was random, that there was no way to direct said shop bot to go for anything in particular,
is it the fault of the programmers if the shop

(20:32):
bot actually does buy something that is illegal? And that's a complicated question. Yeah, I mean it would be
almost like if you created a bipedal robot with a sack full of items and then sent it
down into the street with instructions to barter with random strangers.
Now what if it came back with some drugs or

(20:52):
some illegal guns or something, right, I mean, what what
would you think then? Well, part of it may just be that this is the sort of stuff that's decided upon in court cases, right? It's not like we have an answer right now where
we could say, oh, well, this is clearly X, Y, or Z. This is something that ends up getting decided

(21:13):
after long, drawn-out court cases. And also it could
even be that we ultimately see something where anything that
doesn't have any agency of its own, like it doesn't
have the ability to make decisions on its own in
a conscious way. It's truly following as close to random
action as you can determine. It may end up being

(21:34):
that if it makes contact and purchases illegal things, that
the accountability shifts to the person doing the selling of
that illegal substance and not necessarily the buying, although again
there's also the question of what do you do: the people, you know, wherever that stuff went to, what do the people do with that? In this case,

(21:55):
we're talking about art, an art show, right, and
that complicates it even further. So there are a few
opinions that I came across. The Guardian had an article about this event, and the writer of the Guardian article got a statement from somebody in Great Britain from
the National Crime Agency who basically said that this kind
of thing would have to be assessed on a case
by case basis. It's so weird and unusual that you

(22:18):
you kind of have to just look at the
individual circumstances. Though, as things like this become more common,
it seems like it would be harder to try to
make up a new way to apply the law every
time you see a case like this. Yeah. Well, well,
I'm sure that eventually the law would catch up. Yeah.
Fusion dot Net also had a good article about this

(22:39):
where they talked to a University of Washington law professor
named Ryan Calo about the legal outcome of such a situation,
and and Calo made an interesting distinction between the way
the law is written. And so Calo was talking about
the difference between laws that are written in such a
way that they explicitly punish reckless behavior versus laws that

(23:02):
explicitly call out intent. And that's how it is in
the United States. So, in other words, if I
were to create a program that would specifically go out
and find illegal drugs, that's very clear there's an intent there.
If I create a program that doesn't discriminate on

(23:23):
whatever is being sold and therefore could buy illegal drugs,
or it could buy a bunch of e books of
Lord of the Rings, then that would fall more under
the reckless behavior depending upon what environment I set that
shop bot out in. So, in other words, it depends on the phrasing of the statute itself. Sure. Yeah. If the statute is to guard against reckless behavior, and I

(23:46):
release a random shop bot into a target-rich environment in which there's a lot of illegal activity going on, there's definitely an argument to be made that I was acting recklessly and therefore am liable in
that sense, Whereas if it were more on the intent side,
that would be a lot harder to argue. Yeah. Another
thing I wanted to read a specific sentence from a

(24:08):
quote from Calo. He said, "Wanting a bad outcome doesn't make it illegal. You cannot wish someone to death. But purposefully leaving the bot in the darknet until it yielded contraband seems hard to distinguish from intent." Right. So,
in other words, if you were to look at, let's say that we were able to see
a full timeline of this project from the point when

(24:31):
they, the programmers, released the shop bot, to the point
when they conclude their experiment. If that experiment were to
go on for several weeks, and it only stopped after
the shop bot bought the drugs. That at least is
somewhat suspicious because it looks and it may not be
the case, but it looks like they were waiting for

(24:53):
it to get one of these hits, and then they said, all right, we got what we wanted, stop it. But I don't know what the timeline was. When you gave that list of things that it bought, I don't necessarily know that that was chronological. Oh no, no, I put the MDMA tablets last for effect. I don't know in what order they arrived.

(25:13):
You might actually be able to find that out on
their blog, though. And if it were something like, in week three that hit, you know, it ended up purchasing the drugs, and then they continued for another six weeks,
then you could say, well, you know, it's harder to say that that was specifically what they
were trying to achieve because it kept on going and
the selection involves more than just this one hit on

(25:38):
illicit substances. Yeah. Well, one of the creators actually gave
a statement to The Guardian where they said that we
are the legal owner of the drugs, we're responsible for
everything the bot does as we executed the code. But
this was the qualification, one of the creators said. But
our lawyer and the Swiss constitution says art in the
public interest is allowed to be free, which is kind

(26:00):
of a totally separate issue. Yeah. It's almost like a get-out-of-jail-free card because this is done in the name of art, which I think can
only go so far. Yeah. Well, yeah, because in January,
after the exhibit concluded, the artists uploaded an announcement to
their blog and they basically explained that Swiss officials had
seized their exhibit. They announced

(26:23):
that the authorities came in and they confiscated the exhibit.
Presumably, they said, and this is a quote, "to impede an endangerment of third parties through the drugs exhibited by destroying them." Yeah, saying that "we are convinced that it is an objective of art to shed light on the fringes of society and to pose fundamental contemporary questions." I'm

(26:46):
curious what contemporary questions they were posing. Well, I mean,
on one hand, they may have been intending to bring
up this very question we're discussing today, and that is
what I suspect overall, in which case they got an answer,
so mission achieved. So I mean, if the Swiss authorities
waited until after the exhibit was basically over, then I

(27:07):
say good on all parties. Like, you know, they stopped the illegal behavior before it could cause potential harm to anyone, and also let us have this
wonderful discussion about drugs and robots. Yeah. Well, there's another
story that came out just in the past few months
that's kind of similar. So in February, a Twitter account

(27:32):
named either Jeff E-books or Jeffry Books. I've seen both; I think maybe one is the handle and one is the name on it. But it was a Twitter account. I'll call it Jeff E-books because of my love for Horse ebooks, which I think it might be a play on. A Twitter account called Jeff E-books was used to make a statement that sounded

(27:55):
like a death threat. So, while the owner of the
account was at a fashion and cosmetics event in Amsterdam,
the account Jeff E-books tweeted, "I seriously want to kill people." Yep, that does sound rather threatening. A
very sad fact about the Internet and humanity is that
this should come as a surprise to no one that

(28:17):
a Twitter account made a death threat. But what is
a surprise was that the Twitter account that made the
threat was not operated by a human. It was a
bot account. So I thought you were gonna say it was a cat. It does sound like something a cat would do. Cats make so many more death threats than humans. Their eyes are just emanating death threats, beaming out.

(28:40):
Kittens think of nothing but murder all day. But this
was a bot. So what was the... so was the bot meant to just make death threats? No, no, no. Okay. So it was an account run by
a piece of software. This is what we mean when
we say a bot account. And the account was owned
by a Dutch web developer named Jeffry van der Goot. But Van der Goot did not compose or intend this tweet, and an

(29:04):
interesting thing happened. The authorities took the online death threat seriously,
so the police contacted and questioned Van der Goot, and they eventually requested that the offending bot account be taken down, and
it was. But what actually happened? Why did the bot do this? Was it like an evil Twitter bot psychopath,

(29:26):
like cats. No, no, it was not programmed to make
death threats. That was not part of its intention at all.
It was programmed to cobble together random sentences. Now, why
would that work and how would it work? Well, do
you all remember the Facebook app What Would I Say? Yeah, yeah,
I don't know this one. Yeah, There's been a bunch

(29:47):
of things like this over the past several years on the Internet. I remember one from back in LiveJournal days, when something would cobble together a LiveJournal post based on a whole bunch of your previous LiveJournal posts. Now, I will say,
there is something similar to this that I did just
last night, which was that someone posted a thing on
Facebook where you use your phone to create a text

(30:09):
message using the suggested word, and you just press it twenty times in a row, and it creates a sentence. Mine was dark. Mine was "I am so much for the end of the end of the end of the end of the end," like it just repeated at that point. So it's apocalyptic. In my case, no, no, no. Okay,

(30:32):
So it's beautiful. It's not exactly like that, What Would I Say. Okay, sorry, go ahead. I'm fine, I'm fine, I'm just... my tiny dark heart is made very happy by that. I know you like it, I can tell. The Facebook app What Would I Say
It took all of the text that you had entered
on Facebook over some previous period of time, maybe since

(30:55):
you had signed up, as a corpus, that's like an archive of text, and then it tried to make new
sentences by rearranging words and phrases. So a couple of
mine I went back and found when I did this
years ago. One of mine was "to be my life we need a book of secret police who work in Ecuador and read the users interpretations of warmth." Now I

(31:18):
remember this thing. Another one of mine was "someday, when you're off the force, hand me your badge. Here's your badge." That's a very mercurial, you know, police sergeant right there. "Hand me your badge. Here's your badge." Here's the

(31:40):
weirdest thing, though. I was like, that does sound like something.
Also, the last one I remember that I had: "I don't think I could vote for some busy hands." I could see you saying that, actually. I mean, it would be a really strange context. But I
went and found this app, and ran a couple just before we came

(32:01):
into the studio here, and it brought back, um, "All three of those sound unspeakably adorable," which is absolutely a thing that I would say. I probably have said it before. And another one: "If anyone I elbowed in
the morning, it sounds like you're about to complete the

(32:21):
thought and then you're just like, yeah, well, a lot
of them do end up like that. They are these
fragments because the Facebook app was based on a Markov
chain generator, which is an algorithmic program that takes fragments
of sentences from an existing archive of text and then
it pastes them together to try to make sort of
coherent but random new statements. So, according to a couple

(32:44):
of reports, the app that generated the offending tweet was also of this kind. It was a Markov chain generator, so it was just taking the corpus of this person's account and creating random new statements by mixing and matching words and phrases.

(33:08):
Here's the even weirder part. When the bot account tweeted "I seriously want to kill people," it was actually in conversation with another Twitter account, also run
by a bot. So I think this may be the
first instance of bot-on-bot crime that I can
think of, depending on if you count all those like
robot combat TV shows from the two thousands. I also
think it indicates that even bots find bots incredibly irritating.

(33:30):
Although actually the Jeff E-books account was not in any real way tweeting an accidental death threat at the other bot. I mean, they were both bots just programmed to respond with random messages when tweeted at by a stranger,
so they were basically playing spam tennis. Anyway, back to
Van der Goot being interrogated by the police for a random tweet

(33:53):
generated by a bot. Van der Goot didn't even create this bot, by the way. It must be strange. In a statement given to The Guardian, Van der Goot said, "I told the police
that I can technically see that it would be my
responsibility since I started the bot and it's basically tweeting
under my name. However, it is a random generator. So yes,

(34:16):
it's possible that something bad can come out of it,
but to treat it as if I made the threat
does not make sense to me. I feel very conflicted
about it. I can see their point, but it does
not feel right to me that the random output of
a program can be considered something I said." Yeah, I mean, I can certainly see how you could argue, like, these are not my words. However,

(34:39):
there's also the argument of, well, yeah, I did allow this thing to tweet out. I enabled it. But then, you know, is that the same as actually making the threat? Well,
I mean again, I would say that if it came
to a court case, it would come down to whether

(35:00):
or not, like, how much harm was actually caused. Like, if that bot had tweeted a threat like that to an actual human person directly, and the human person had been damaged by that, then that would be a legally culpable thing, you know. And kudos to the police for tracking down death

(35:23):
threats on Twitter. That's amazing. That's the part of the story that I'm actually most blown away by. I was like, yeah, round of applause.
Considering how many stories we've heard about people who have
had to field this sort of stuff and and the
lack of response they've had from law enforcement, it is
rather surprising. Yeah. Well, at any rate, you know,

(35:45):
these are very unusual circumstances, right? I mean,
it's just one of those things again, like you were
pointing out, Lauren, this is the sort of stuff that
we don't have the answers for because it is so new.
It's the kind of stuff that courts are having to
answer on the fly. Although this is certainly not
the only time that two bots have gotten into a

(36:06):
wacky bot war and caused some kind of mischief. My
favorite story is not a criminally related story. Uh, it's
from back in 2011. So these two Amazon bookseller bots
got into a price war. Um, both were trying to
sell an out of print copy of a science textbook,
and one of these bots was programmed to stay

(36:29):
1.270589 times ahead of its competitor in price, upselling, you know, banking on the fact that it was a large seller and had good product. It figured that it could go that many times up ahead of its competitor and still sell stuff. The other

(36:53):
bot was programmed to stay 0.9983 times below that, figuring that it could just undercut by that much. So the difference here you can see if you compare them against each other: multiply 1.270589 by 0.9983 and you get about 1.268, so each full round of repricing nudged the price up by about 27 percent. There's a bit of a favor on the over side compared to the under side. So what happened when these things
(37:13):
compared to the underside. So what happened when these things
were going up against each other? Eventually, a seventy-dollar textbook wound up being offered for 23.6 million dollars before both sellers realized and went oops.
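
(A toy replay of that feedback loop, in Python. The two multipliers are the ones quoted above; the seventy-dollar starting price and the loop itself are our reconstruction, not either seller's actual code.)

OVER = 1.270589   # big seller: reprices above its rival
UNDER = 0.9983    # undercutter: reprices just below its rival

price = 70.00     # rough starting price of the textbook, in dollars
rounds = 0
while price < 23_600_000:
    price *= UNDER * OVER   # one full round: both bots reprice once
    rounds += 1

print(f"${price:,.2f} after {rounds} rounds")
# Each round multiplies the price by about 1.2684, so roughly 54 rounds
# take it from $70 past $23.6 million.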

(37:35):
Is that what all these memes on the internet about the price of textbooks are about? I would
just say that this proves that knowledge is priceless, truly, truly,
and you are the first target of my pushing robot.
I'm glad that we have a record of it on
the podcast. That'll make the prosecution so much easier. Well,

(37:56):
we are getting into some really weird territory, yes. So I, for one, in the case of something like the Twitter bot from Van der Goot, would not feel comfortable holding the owner of this Twitter account liable for what
the bots said, even if it says something really scary.
In that case, since it's so random, there's not even

(38:20):
really uh, I don't know, there's not even a likelihood
that it would say something like that. It just seems
like a complete fluke occurrence. Well, and it's a random
combination of words that could be in any corpus. You
could also argue, I think pretty effectively that what it
ultimately is is a very sad commentary on the public

(38:41):
discourse taking place over things like Twitter, where enough instances
of this are happening, where that could potentially become something
tweeted out at random. Like it's it's almost like it
ends up being social commentary, which of course doesn't help
you in the individual judgment of who is responsible for
this instance. But it points to a

(39:02):
larger problem that goes well beyond artificial intelligence. There, I
think the best, I mean, the most thorough argument you can put forth is that it was reckless of Van der Goot to put the bot out there. Yeah, so
I guess maybe somebody might say that. I don't know if I feel like that. I kind of don't.
It just seems like it's such a harmless experiment to

(39:25):
run a Markov chain on a Twitter account. I don't know,
maybe if you're doing it from a Twitter account that's
typically full of phrases and words that would make death threats. Well,
I mean again, it depends on where the corpus comes from. It depends on your magnetic poetry set, what winds up being

(39:46):
set on your fridge. Right. If it ends up being the equivalent of the Cards Against Humanity deck, then you're just asking for trouble. Yeah, but
I mean, I don't know. This is very strange. It's
just very recently that we've started having technology that has
the kind of complexity in its behavior that it's difficult

(40:07):
to trace the output of the technology from the person
who created or owns it. I mean, typically throughout the
history of technology, machines have been much
simpler tools. They very directly affect the will of the
user or the creator. You know, a gun isn't really going to, it's not likely to, surprise you in what it does;

(40:30):
it's probably going to do what you do with it.
But if you create an artificially intelligent program, even one
that's you know, only kind of artificially intelligent in the
way these things we've been talking about are, I don't know, you know, strange emergent behavior can happen. Yeah. Certainly,

(40:50):
things like unintended actions can be a result of that
sort of thing where uh, it is more complicated than
you know, pull this lever and this thing happens. Now
we're talking about there's some form of randomness that's introduced,
not true randomness because we can't really accomplish that, but
some form of behavior that's abstracted from our intentions, right,

(41:14):
and it can be effectively random. It might
not in the true sense be random, but in every
you know, practical sense, it is. And that's when we're
kind of working our way back over to this idea of autonomous cars, because it's something we can easily talk about, because it's something that is unfolding right now. Sure. Okay,

(41:35):
well let's look at autonomous cars. I think we've talked
about it before, feel free either of you to disagree,
but I think the three of us are pretty well
in agreement. From what I recalled that autonomous cars will
probably mean fewer accidents based upon Google's experience, assuming that
that would carry true moving forward, I think it's I

(41:58):
think it's a safe inclusion to say that you know
there will be fewer accidents. The more autonomous cars we
have on the road, the more we'll see the accident
rates decline, right. And those experiments included, if you haven't listened to our prior episode from a couple of years ago now on autonomous cars, the two accidents

(42:20):
that at that time Google cars had been in. There were only two of them at the time that we recorded that podcast; I haven't heard about any more. One was due to a driver of another car hitting the autonomous car, and the second
was when a person in the autonomous car had taken

(42:41):
manual control. Right. So, pretty good track record so far.
I think the computers that drive these cars are going
to be vastly superior to human drivers. The problem is
you're just not as good a driver as you think
you are. And yes, that applies to you. You think, yeah,
other people aren't as good as they think they are.

(43:01):
You're not as good as you think you are. I'm
not as good as I think I am. The problem is, your reaction time is never, ever going to equal that of a computer. We have physical limitations. But that doesn't mean that autonomous cars will
never have at-fault accidents. Yeah, it would be kind

(43:22):
of foolish to make that statement,
to say that there will never ever be an accident
in which an autonomous car was the principal cause. Computers
can make poor decisions, and when they do, this is
a big question: who's liable? The owner? The manufacturer? Could

(43:44):
we say in some cases, nobody's at fault. It's a
weird question. Oh, I am sure that insurance companies aren't
gonna like that last answer. Well, the insurance companies again
may end up coming up with a brand new model
specifically for autonomous cars. There's some who are calling for
that to be the case. Now, granted, we're still at

(44:04):
least a few years away from autonomous cars becoming a
consumer product. Yeah, I've seen some predictions as early as, others... You know, when two thousand came around, there were people saying, oh, not within our lifetimes, and now it's suddenly, oh, within the next
five years. But one of the people I was reading about,

(44:24):
he's a lawyer, John Frank Weaver. He posted in Slate about this and said that we might want to, and this is where we might talk about personhood, extend some concept of personhood to robots and hold the
robots themselves liable. And thus insurance companies would have specific

(44:45):
rates for robots, in this case autonomous cars. That would
mean that if that autonomous car were to cause an accident,
it would be counted against it in that respect. That's
how it would be liable: through this insurance. Now, granted, that insurance would be paid by the owner of the autonomous vehicle. And the basic argument I've seen
is that the belief is that the accident rates will

(45:08):
be very low, so the risk for insurance companies will
not be that high. Thus, it will actually be attractive
for insurance companies to do this because they're going to
make way more on premiums than they ever have to
pay out. So the idea is that for insurance companies, this is a windfall for them. I have feelings about car insurance companies. Well, yeah, I mean,

(45:33):
there are a lot of things we could say. Certainly, I'll leave it at that. Yeah, I also have feelings. But in this case, it is an interesting concept, saying that, you know, we could extend personhood
to these autonomous cars and keeping in mind they are
not intelligent in the way humans are intelligent. Right, they
are able to sense the environment and react to it,

(45:54):
but they're not able to compose a thought the way
humans can. And there are some people who argue, well,
why would you extend personhood to that? And his reaction
is that, hey, we already extend the concept of personhood
to corporations those are not human. Yes, But on the
other hand, I would think that there is a slight
disconnect there because corporations, at least in theory, can be punished.

(46:22):
And this sort of brings up the question that
underlies this episode, like what is the purpose of holding
someone legally accountable? What is the purpose of punishment? And
I know there are lots of philosophical theories out there where people differ, but I think a really
common answer you'd hear today is that the purpose of
punishment is deterrence. You want to discourage people from committing

(46:47):
crimes by saying, well, if you do commit a crime, a bad thing is going to happen to you. All right,
And a corporation has just as much power to observe that and go, I guess we won't do that thing, as a single person does, because that corporation is being, non-randomly, completely on purpose, we all cross our fingers and hope every day, run by people. Right. Well, yeah,

(47:10):
that's the thing. Now this is another problem. Like again,
in practice, you might say that corporations can't be held
sufficiently accountable, but at least in theory they are supposed
to be held accountable. They can be punished by the law.
Well, in this case, here's my perspective: if we're talking about damages, like, for example, the autonomous car collides with another vehicle, then in that case,

(47:34):
I think people would argue that the liability of the autonomous car would be on the autonomous car itself. The insurance would cover it. That would all be handled in a civil way, not as in civilized, but in non-criminal civil court. Whereas
if it were something that was more serious. Let's say

(47:55):
that it was an accident involving an autonomous car and
a passenger or a bicyclist, or that there were injuries
that were a result of this, and it would have
been something that, had it been a human driver
behind the wheel, it would have been a case of
vehicular manslaughter or some other criminal act. In that case,

(48:16):
I would argue that it would be most likely the
courts would be looking into whether or not the manufacturer
of the vehicle had in fact created a defective product,
and therefore it would become more of a defective-product issue and less of a fault-or-no-fault one. And it

(48:37):
seems in this case that the same principle we've been
observing throughout this podcast in our hypothetical examples applies again.
The simpler the instructions this machine is following, the easier
it is to establish whether the creator or whatever is
guilty, right. When the behavior becomes more
and more complex, where it's more and more difficult to

(49:00):
look at the code and predict exactly what this thing
would do in any given scenario, I don't know. It's, again, I think that it would come down to potentially criminal negligence, probably civil negligence, because I believe that most of the time when, for example, an airbag malfunctions and injures a passenger in a car,

(49:22):
that's usually settled out of court. It's a civil matter. There isn't criminal liability in that case, right. And so I would assume that, based on our current laws,
new laws are probably going to be written that include
this kind of civil liability. Yeah. We usually see new

(49:43):
laws built upon older ones, right, We don't tend to
see the reinventing of the legal wheel. Yeah. You know.
It makes me wonder if in the future, as artificial
intelligence becomes more ubiquitous in our lives, especially in robots,
things that are actually embodied and can act in society
and perhaps injure people, are we going to start
having a legal profession that's like a computer motive analyst,

(50:06):
someone who is an expert witness in
court cases to look at source code and tell you
about the motives of the robot. I'm sure we'll see
plenty of experts assess intent. Right, right. Your honor, I do not think that the Roomba should be held liable for this cat's trauma. Well, or the person

(50:28):
who created the Roomba. Yes. Well, at any rate. So, John Frank Weaver was arguing that we extend
personhood to robots. Uh. Brian Sherwood Jones, who's a design consultant,
disagreed with that and said that we have to hold
people accountable for robot mistakes, not ever have it reach the point where the blame is on the robots,

(50:51):
because his argument was otherwise, it means that you have
a mass evasion of responsibility. That being said, if I were to investigate this, and I were
to find that the designers, the engineers, the programmers, everyone
did their due diligence and did their absolute best, and
in all the testing there was never any indication that

(51:13):
some sort of emergent behavior could result in this. How
do I just end up saying, well, this is a faultless situation where someone was injured or damage was done to property or whatever, but there's no one at fault? I
mean I guess that would be the only other conclusion
you could draw. Well, you could, I mean, this would

(51:34):
be a very weird scenario, but you could start to
think about once we achieve truly sophisticated artificial intelligence that
is sufficiently abstracted from the intentions of the creators, would we
have to think about the robots that had this kind
of intelligence as like forces of nature, the way we
treat a tornado or a wild animal. So, so

(51:57):
we would say it's an act of bot. Yeah, gotcha. You know what's a good idea? Let's create like a cascading
series of laws that dictate how robots should behave in
order to prevent them from ever causing harm to humans, property,
or each other. Hey, but we saw some emergent behavior

(52:19):
in Asimov, didn't we? What? It looks... all I did was read the laws; I didn't bother to read the stories. That's not true. And those are flawed on purpose to make stories interesting. Actually, it is. It is a great
example of exactly what we're talking about. This is something
that has been bandied about in science fiction for decades

(52:40):
very well, and it was very prescient because
it turns out that we're now on the cusp of
having to actually create the legislation that will guide us
in the years ahead. And ultimately, if we ask, all right, what does this look like twenty years from now? It's
hard to say because we're still in that earliest of phases.

(53:02):
But it is fun to talk about, and it's becoming
necessary at this point. We've already seen some examples that
show how necessary this is. It's just gonna get more
necessary as time goes on. Any last thoughts? Um, we
are not lawyers and do not offer official legal counsel

(53:22):
or unofficial legal counsel. Yeah, we probably should
have put that at the front, but maybe we will. Yeah,
you know, maybe we'll just have that preface each episode. Uh,
this has actually been a ton of fun to talk about, Joe,
you were the one to suggest this topic and it

(53:42):
was amazing, so thank you very much. And of
course we're getting great suggestions from you listeners out there,
and we want to encourage you to continue sending those
suggestions in We love to hear from you, We love
to hear your thoughts on the subjects we cover, and
of course we are happy to cover the topics you
suggest in episodes as well. So in upcoming episodes, your

(54:03):
suggestion could be one we talk about. We'll give
you a shout out and everything, so you should send
us an email if you have any suggestions or questions
or comments. That email address is f W Thinking at
how Stuff Works dot com, or drop us a line
on Twitter, Google Plus or Facebook. At Twitter and Google Plus,
we are f W Thinking. On Facebook. Just search f

(54:24):
Thinking in that search bar and we'll pop right up.
Leave us a message, and we'll talk to you again
really soon. For more on this topic in the future

(54:57):
of technology, visit forward thinking dot com. Brought to you by Toyota. Let's go places.
