Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:15):
Pushkin. You're listening to Brave New Planet, a podcast about
amazing new technologies that could dramatically improve our world. Or
if we don't make wise choices, could leave us a
(00:36):
lot worse off. Utopia or dystopia. It's up to us.
On September twenty sixth, nineteen eighty three, the world almost
came to an end. Just three weeks earlier, the Soviet
(00:56):
Union had shot down Korean Airlines Flight Double O seven,
a passenger plane with two hundred and sixty nine people aboard.
I'm coming before you tonight about the Korean Airline massacre.
President Ronald Reagan addressed the nation the attack by the
Soviet Union against two hundred and sixty nine innocent men, women,
and children aboard an unarmed Korean passenger plane. This crime
(01:21):
against humanity must never be forgotten here or throughout the world.
Cold War tensions escalated, with the two nuclear powers on
high alert. World War three felt frighteningly possible. Then, on
September twenty sixth, in a command center outside of Moscow,
an alarm sounded. The Soviet Union's early warning system reported
(01:44):
the launch of multiple intercontinental ballistic missiles from bases in
the United States. Stanislav Petrov, a forty-four-year-old
member of the Soviet Air Defense Forces, was the duty
officer that night. His role was to alert Moscow that
an attack was under way, likely triggering Soviet nuclear retaliation
(02:08):
and all out war. Petrov spoke with BBC News in
twenty thirteen. The sirens sounded very loudly, and I just
sat there for a few seconds, staring at the screen
with the word launch displayed in bold red letters. A
minute later, the siren went off again. The second missile
(02:30):
was launched, then the third, and the fourth, and the fifth.
The computers changed their alerts from launch to missile strike.
Petrov's instructions were clear, report the attack on the motherland,
but something didn't make sense. If the US were attacking,
why only five missiles rather than an entire fleet? And
(02:53):
then I made my decision. I would not trust the computer.
I picked up the telephone handset, spoke to my superiors
and reported that the alarm was false. But I myself
was not sure. Until the very last moment. I knew
perfectly well that nobody would be able to correct my
(03:14):
mistake if I had made one. Petrov, of course, was right.
The false alarm was later found to be the result
of a rare and unanticipated coincidence: sunlight glinting off high
altitude clouds over North Dakota at just the right angle
to fool the Soviet satellites. Stanislav Petrov's story comes up
(03:36):
again and again in discussions of how far we should
go in turning over important decisions, especially life-and-death decisions,
to artificial intelligence. It's not an easy call. Think about
the split-second decisions involved in avoiding a highway collision. Who
will ultimately do better: a tired driver or a self
(03:59):
driving car? Nowhere is the question more fraught than on
the battlefield. As technology evolves, should weapons systems be given
the power to make life and death decisions? Or do
we need to ensure there's always a human, a Stanislav
Petrov, in the loop. Some people, including winners of the
(04:20):
Nobel Peace Prize, say that weapons should never be allowed
to make their own decisions about who or what to attack.
They're calling for a ban on what they call killer robots.
Others think that idea is well meaning but naive. Today's
big question: lethal autonomous weapons. Should they ever be allowed?
(04:47):
If so, when, if not, can we stop them? My
name is Eric Lander. I'm a scientist who works on
ways to improve human health. I helped lead the Human
Genome Project, and today I lead the Broad Institute of MIT
(05:09):
and Harvard. In the twenty first century, powerful technologies have
been appearing at a breathtaking pace, related to the Internet,
artificial intelligence, genetic engineering, and more. They have amazing potential upsides,
but we can't ignore the risks that come with them.
The decisions aren't just up to scientists or politicians, whether
(05:31):
we like it or not, we all of us are
the stewards of a brave New Planet. This generation's choices
will shape the future as never before. Coming up on
today's episode of Brave New Planet fully autonomous lethal weapons
(05:53):
or killer robots, we hear from a fighter pilot about
why it might make sense to have machines in charge
of some major battlefield decisions. I know people who have
killed civilians, and in all cases where people made mistakes,
it was just too much information. Things were happening too fast.
(06:16):
I speak with one of the world's leading robo-ethicists.
Robots will make mistakes too, but hopefully, if done correctly,
they will make far far less mistakes than human beings.
We'll hear about some of the possible consequences of autonomous weapons.
Algorithms interacting at machine speed faster than humans can respond
(06:36):
might result in accidents, and that's something like a flash war.
I'll speak with a leader from Human Rights Watch. The
Campaign to Stop Killer Robots is seeking new international law
in the form of a new treaty. And we'll talk
with former Secretary of Defense Ash Carter. Because I'm the
guy who has to go out the next morning after
(06:57):
some women and children have been accidentally killed. And suppose
I go out there, Eric, and I say, oh, I
don't know how it happened. The machine did it. I
would be crucified. I should be crucified. So stay with us.
Chapter one, Stanley the self driving Car. Not long after
(07:21):
the first general purpose computers were invented in the nineteen forties,
some people began to dream about fully autonomous robots, machines
that used their electronic brains to navigate the world, make decisions,
and take actions. Not surprisingly, some of those dreamers were
in the US Department of Defense, specifically the Defense Advanced
(07:44):
Research Projects Agency or DARPA, the visionary unit behind the
creation of the Internet. They saw a lot of potential
for automating battlefields, but they knew it might take decades.
In the nineteen sixties, DARPA funded the Stanford Research Institute
to build Shakey the Robot, a machine that used cameras
(08:06):
to move about a laboratory. In the nineteen eighties,
it supported universities to create vehicles that could follow lanes
on a road. By the early two thousands, DARPA decided
that computers had reached the point that fully autonomous vehicles
able to navigate the real world might finally be feasible.
(08:27):
To find out, DARPA decided to launch a race. I
talked to someone who knew a lot about it. My
name is Sebastian Thrun. I am the smartest person on
the planet and the best looking. Just kidding. Sebastian Thrun
gained recognition when his autonomous car, a modified Volkswagen with
a computer in the trunk and sensors on the roof,
(08:50):
was the first to win the DARPA Grand Challenge. The
Grand Challenge was this momentous, government-sponsored robot race, this epic race:
can you build a robot that can navigate one hundred
and thirty punishing miles through the Mojave Desert? The
best robot made it like seven miles and then literally ended up
in flames. Many, many researchers had concluded it can't be done.
(09:12):
In fact, many of my colleagues told me I'm going
to waste my time and my name if I engaged
in this kind of super hard race. And that made
you more interested in doing it, of course, and so
you built Stanley. Yeah, so my students built Stanley. It
started as a class, and Stanford students are great. If
you tell them go to the moon in two months,
they're going to go to the moon. So then in two
(09:33):
thousand and five, the actual government-sponsored race, how did
Stanley do? We came in first. We were focused
insanely strongly on software and specifically on machine learning, and
that differentiated us from pretty much every other team that
focused on hardware. But the way I look at this
is there were five teams that finished this grueling race
(09:54):
within one year, and it's the community of the people
that build all these machines that really won. So nobody
made it a mile in the first race, and five
different teams made it more than one hundred and thirty
miles through the desert, just a year later, Yeah, that's
kind of amazing to me. That just showed how fast
this technology can possibly evolve. And what's happened since then?
(10:18):
I worked at Google for a while, and eventually this guy,
Larry Page, came to me and says, hey, Sebastian, I
thought about this long and hard. We should build a
self-driving car that can drive on all streets in
the world. And with my entire authority, I said that
cannot be done. We had just driven a desert race;
there were never pedestrians and bicycles and all the other
(10:39):
people that we could kill in the environment. And for me,
just the sheer imagination that we would drive a self-driving
car to San Francisco always sounded like a crime. So
you told Larry Page, one of the two co founders
of Google, that the idea of building a self-driving
car that could navigate anywhere was just not feasible. Yeah. He
later came back and said, hey Sebastian, look, I trust you,
(11:00):
you're the expert, but I want to explain to Eric Schmidt,
then the Google CEO, and to my co-founder, Sergey Brin,
why it can't be done. Can you give me the
technical reason? So I went home in agony, thinking about
what is the technical reason why it can't be done?
And he got back to me the next day and said, so,
what is it? And I said, I can't think of
(11:23):
any. And lo and behold, eighteen months later, with roughly ten engineers,
we drove pretty much every street in California. Today, autonomous
technology is changing the transportation industry. About ten percent of
cars sold in the US are already capable of at
least partly guiding themselves down the highway. In twenty eighteen,
(11:44):
Google's self-driving car company Waymo launched a self-driving
taxi service in Phoenix, Arizona, initially with human backup drivers
behind every wheel, but now sometimes even without. I asked
Sebastian why he thinks this matters. We lose more than
a million people in traffic accidents every year, almost exclusively
(12:08):
to us not paying attention. When I was eighteen, my
best friend died in a traffic accident and it was
a split-second poor decision from his friend who was
driving, who also died. To me, this is just unacceptable.
Beyond safety, Sebastian sees many other advantages for autonomy. During
(12:28):
a commute, you can do something else that means you're
probably willing to commute further distances. You could sleep, or
watch a movie, or do email. And then eventually people
who today can't operate cars can use them: blind people,
old people, children, babies. I mean, there's an entire spectrum
of people that are kind of excluded. They would now be
(12:49):
able to be mobile. Chapter two, The Tomahawk. So DARPA's
efforts over the decades helped give rise to the modern
self-driving car industry, which promises to make transportation safer,
more efficient, and more accessible. But the agency's primary motivation
(13:10):
was to bring autonomy to a different challenge, the battlefield.
I traveled to Chapel Hill, North Carolina, to meet with
someone who spends a lot of time thinking about the
consequences of autonomous technology. We both serve on a civilian
advisory board for the Defense Department. My name is Missy Cummings.
I'm a professor of electrical and computer engineering at Duke University,
(13:33):
and I think one of the things that people find
most interesting about me is that I was one of
the US military's first female fighter pilots in the Navy.
Did you always want to be a fighter pilot? So
when I was growing up, I did not know that
women could be pilots, and indeed, when I was growing up,
women could not be pilots. And it wasn't until the
(13:56):
late seventies that women actually became pilots in the military.
So I went to college in nineteen eighty four. I
was at the Naval Academy, and then of course, in
nineteen eighty six Top Gun came out, and then, I mean,
who doesn't want to be a pilot after you
see the movie Top Gun? Missy is tremendously proud of
(14:17):
the eleven years she spent in the Navy, but she
also acknowledges the challenges of being part of that first
generation of women fighter pilots. It's no secret that the
reason I left the military was because of the hostile
attitude towards women. None of the women in that first
group stayed in to make it a career. The guys
were very angry that we were there, and I decided
(14:41):
to leave when they started sabotaging my flight gear. I
just thought, this is too much. If something really bad happened,
you know, I would die. When Missy Cummings left the Navy,
she decided to pursue a PhD in Human Machine interaction.
In my last three years flying F-18s, there were
about thirty six people I knew that died, about one
(15:02):
person a month. They were all training accidents. It just
really struck me how many people were dying because the
design of the airplane just did not go with the
human tendencies. And so I decided to go back to
school to find out what can be done about that.
So I went to finish my PhD at the University
(15:23):
of Virginia, and then I spent the next ten years
at MIT learning my craft. The person I am today
is half because of the Navy and half because of MIT. Today,
Missy is at Duke University, where she runs the Humans
and Autonomy Lab, or HAL for short. It's a nod
to the sentient computer that goes rogue in Stanley Kubrick's
(15:46):
film two thousand and one, A Space Odyssey. This mission
is too important for me to allow you to jeopardize it.
I don't know what you're talking about, HAL. I know
that you and Frank were planning to disconnect me, and
I'm afraid that's something I cannot allow to happen. And
so I intentionally named my lab HAL so that we
(16:10):
were there to stop that from happening. Right, I had
seen many friends die, not because the robot became sentient,
in fact, because the designers of the automation really had
no clue how people would or would not use this technology.
It is my life's mission statement to develop human collaborative
(16:30):
computer systems that work with each other to achieve something
greater than either would alone. The Humans and Autonomy Lab
works on the interactions between humans and machines across many fields,
but given her background, Missy's thought a lot about how
technology has changed the relationship between humans and their weapons.
(16:52):
There's a long history of us distancing ourselves from our actions.
We want to shoot somebody, we wanted to shoot them
with bows and arrows. We wanted to drop bombs from
five miles over a target. We want cruise missiles that
can kill you from another country. Right, it is human
nature to back that distance up. Missy sees an inherent tension.
(17:15):
On one hand, technology distances ourselves from killing. On the
other hand, technology is letting us design weapons that are
more accurate and less indiscriminate in their killing. Missy wrote her
PhD thesis about the Tomahawk missile, an early precursor of
the autonomous weapons systems being developed today. The Tomahawk missile
(17:37):
has these stored maps in its brain, and as it's
skimming along the nap of the earth, it compares the
pictures that it's taking with the pictures in its database
to decide how to get to its target. This Tomahawk
was kind of a set it and forget it kind
of thing. Once you launched it, it would follow its
map to the right place and there was nobody looking
(17:57):
over its shoulder.
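To make the idea concrete, here is a minimal sketch, in Python, of the kind of map matching Missy describes: slide the picture the sensor is taking over a stored reference map and take the best-correlating offset as the position estimate. The grid, patch size, and function names are illustrative assumptions, not the Tomahawk's actual guidance code.

```python
# A toy version of map-matching navigation: compare a sensed image patch
# against a stored reference map and pick the best-matching offset.
# All data and names here are illustrative, not a real guidance algorithm.
import numpy as np

def estimate_position(reference_map: np.ndarray, sensed_patch: np.ndarray):
    """Return the (row, col) offset in the reference map that best
    matches the sensed patch, using normalized correlation."""
    H, W = reference_map.shape
    h, w = sensed_patch.shape
    patch = (sensed_patch - sensed_patch.mean()) / (sensed_patch.std() + 1e-9)
    best_score, best_offset = -np.inf, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            window = reference_map[r:r + h, c:c + w]
            window = (window - window.mean()) / (window.std() + 1e-9)
            score = float((window * patch).mean())
            if score > best_score:
                best_score, best_offset = score, (r, c)
    return best_offset, best_score

# Toy example: random "terrain", with a patch cut from a known spot.
rng = np.random.default_rng(0)
terrain = rng.normal(size=(60, 60))
true_offset = (22, 37)
view = terrain[true_offset[0]:true_offset[0] + 12, true_offset[1]:true_offset[1] + 12]
print(estimate_position(terrain, view))  # should recover (22, 37)
```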
Well, so the Tomahawk missile that we saw in the Gulf War, that was a fire and
forget missile: a target would be programmed in,
and then it would be fired and that's where
it would go. Later, around two thousand, two thousand and three,
then GPS technology was coming online, and that's when we
(18:20):
got the tactical Tomahawk, which had the ability to be
redirected in flight. That success with GPS and the Tomahawk
opened the military's eyes to the ability to use them
in drones. Today's precision guided weapons are far more accurate
than the widespread aerial bombing that occurred on all sides
in World War Two, where some cities were almost entirely leveled,
(18:44):
resulting in huge numbers of civilian casualties. In the Gulf War,
Tomahawk missile attacks came to be called surgical strikes. I
know people who have killed civilians, and I know people
who have killed friendlies. They have dropped bombs on our
(19:05):
own forces and killed our own people. And in all
cases where people made mistakes, it was just too much information.
Things were happening too fast. You've seen some pictures that
you've got in a brief hours ago, and you're supposed
to know that what you're seeing now through this grainy
image thirty five thousand feet over a target is the
(19:27):
same image that you're being asked to bomb. The Tomahawk
never missed its target. It never made a mistake unless
it was programmed as a mistake. And that's old autonomy,
and it's only gotten better over time. Chapter three, Kicking
down Doors. The Tomahawk was just a baby step toward automation.
(19:53):
With the ability to read maps, it could correct its course,
but it couldn't make sophisticated decisions. But what happens when
you start adding modern artificial intelligence? So where do you
see autonomous weapons going? If you could kind of map
out where are we today and where do you think
we'll be ten twenty years from now. So, in terms
(20:13):
of autonomy and weapons, by today's standards, the Tomahawk missile
is still one of the best ones that we have,
and it's also still one of the most advanced. Certainly,
there are research arms of the military who are trying
very hard to come up with new forms of autonomy.
There was the Perdix that came out of Lincoln Lab,
(20:35):
and this was basically a swarm of really tiny UAVs
that could coordinate together. A UAV, an unmanned aerial
vehicle, is military-speak for a drone. The Perdix, the
drones that Missy was referring to, were commissioned by
the Strategic Capabilities Office of the US Department of Defense.
(20:57):
These tiny flying robots are able to communicate with each
other and make split second decisions about how to move
as a group. Many researchers have been using bio-inspired
methods. Bees, right? So bees have local and global intelligence.
Like a group of bees, these drones are called a
swarm: collective intelligence on a shared mission. A human can
(21:20):
make the big picture decision and the swarm of microdrones
can then collectively decide on the most efficient way to
carry out the order in the moment. I wanted to
know why exactly this technology is necessary, so I went
to speak to someone who I was pretty sure would know.
I'm Ash Carter. Most people will probably have heard my
(21:43):
name as the Secretary of Defense who preceded Jim Mattis. You
will know me in part from the fact that we
knew one another way back in Oxford when we were
both young scientists, and I guess we can start there. I'm
a physicist. When you were doing your PhD in physics,
I was doing my PhD in mathematics at Oxford. What
was your thesis on? It was on quantum chromodynamics. That
(22:06):
was the theory of quarks and gluons. And how in
the world does somebody who's an expert in quantum chromodynamics
become the Secretary of Defense? It's an interesting story. The
people who were the seniors in my field of physics,
the mentors, so to speak, were all members of the
(22:28):
Manhattan Project generation. They had built the bomb during World
War Two, and they were proud of what they'd done
because they believed that it had ended the war with
fewer casualties than otherwise there would have been in a
full-scale invasion of Japan, and also that it
(22:49):
had kept the peace through the Cold War, so they
were proud of it. However, they knew there was a
dark side, and they conveyed to me that it was
my responsibility as a scientist to be involved in these matters.
And the technology doesn't determine what the balance of good
and bad is. We human beings do. That was the lesson,
(23:12):
and so that's what got me started, and then my
very first Pentagon job, which was in nineteen eighty one,
right through until the last time I walked out of
the Pentagon as Secretary of Defense, which was January of twenty seventeen. Now,
when you were secretary, there was a Strategic Capabilities Office
that it's been publicly reported, was experimenting with using drones
(23:38):
to make swarms of drones that could do things, communicate
with each other, make formations. Why would you want such things?
So it's a good question. Here's what you do with
a drone like that: you put a jammer on it,
a little radio beacon, and you fly it right into
the eye of an enemy radar. So all that radar
(24:01):
sees is the energy emitted by that little drone, and
it's essentially dazzled or blinded. If there's one big drone,
that radar is precious enough that the defender's going to
shoot that drone down. But if you have so many
out there, the enemy can't afford to shoot them all down.
(24:22):
And since they are flying right up to the radar,
they don't have to be very powerful. So there's an
application where lots of little drones can have the effect
of nullifying enemy radar. That's a pretty big deal for
a few little, little microdrones. To learn more, I went
(24:42):
to speak with Paul Scharre. Paul's the director of the
Technology and National Security Program at the Center for a
New American Security. Before that, he worked for Ash Carter
at the Pentagon studying autonomous weapons, and he recently authored
a book called Army of None: Autonomous Weapons and the
Future of War. Paul's interest in autonomous weapons began when
(25:05):
he served in the Army. I enlisted in the Army
to become an Army ranger. That was June of two
thousand and one, did a number of tours overseas in
the wars in Iraq and Afghanistan. So I'll say one
moment that stuck out for me where I really sort
of the light bulb went on about the power of
robotics in warfare. I was in Iraq in two
(25:28):
thousand and seven. We were on a patrol, driving along
in a Stryker armored vehicle. Came across an IED, improvised
explosive device, a makeshift roadside bomb, and so we called
up bomb disposal folks. So they show up and I'm
expecting the bomb tech to come out in that big
bomb suit that you might have seen in the movie
The Hurt Locker, for example, and instead out rolls
(25:50):
a little robot and I kind of went, oh, that
makes a lot of sense. Have the robot defuse the bomb. Well,
it turns out there's a lot of things in war
that are super dangerous where it makes sense to have
robots out on the front lines, getting people better standoff,
a little bit more separation from potential threats. The
bomb-defusing robots are still remote-controlled by a technician,
(26:12):
but Ash Carter wants to take the idea of robots
doing the most dangerous work a step further. Somewhere in
the future, but I'm certain it will occur, I think
there will be robots that will be part of infantry
squads and that will do some of the most dangerous
jobs in an infantry squad, like kicking down the door
(26:32):
of a building and being the first one to run
in and clear the building of terrorists or whatever. That's
a job that doesn't sound like something I would like
to have a young American man or woman doing if
I could replace them with a robot. Chapter four, Harpies.
(26:55):
Paul Scharre gave me an overview of the sophisticated unmanned
systems currently used by militaries. So I think it's worth
separating out the value of robotics versus autonomy, removing
a person from decision making. So what's so special about autonomy?
The advantages there are really about speed. Machines can make
(27:16):
decisions faster than humans. That's why automatic braking in automobiles
is valuable. It could have much faster reflexes than
a person might have. Paul separates the technology into three baskets. First,
semi-autonomous weapons. Semi-autonomous weapons are widely used
around the globe today, where automation is used to maybe
(27:36):
help identify targets, but humans make the final decision
about which targets to attack. Second, there are supervised autonomous weapons.
There are automatic modes that can be activated on air
and missile defense systems that allow these computers to defend
the ship or ground vehicle or land base all on
(27:58):
its own against these incoming threats. But humans supervise these
systems in real time. They could, at least in theory, intervene. Finally,
there are fully autonomous weapons. There are a few isolated
examples of what you might consider fully autonomous weapons where
there's no human oversight and they're used in an offensive capacity.
(28:19):
The clearest today that's in operation is the Israeli Harpy
drone that can loiter over a wide area for about
two and a half hours at a time to search
for enemy radars, and then when it finds one, it
can attack it all on its own without any further
human approval. Once it's launched, that decision about which particular
target to attack that's delegated to the machine. It's been
(28:42):
sold to a handful of countries: Turkey, India, China, South Korea.
I asked Missy if she saw advantages to having autonomy
built into lethal weapons. While she had reservations, she pointed
out that in some circumstances it could prevent tragedies. A
human has something called the neuromuscular lag in them. It's
(29:02):
about a half second delay. So you see something, you
can execute an action a half second later. So let's
say that that guided weapon fired by a human is
going into a building, and then right before it gets
to the building, at a half second, the door opens
and a child walks out. It's too late. That child
(29:24):
is dead. But a lethal autonomous weapon that had a
good enough perception system could immediately detect that and immediately
guide itself to a safe place to explode. That is
a possibility in the future.
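To put a rough number on that half-second of neuromuscular lag, here is a small back-of-the-envelope sketch in Python. The weapon speeds and the machine latency below are illustrative assumptions, not figures from the episode; the only input taken from Missy's description is the roughly half-second human delay.

```python
# How much ground a weapon covers while the decision-maker is still reacting.
# Speeds and the 50-millisecond machine latency are assumed, for illustration.

HUMAN_LAG_S = 0.5     # approximate human see-then-act delay, per the episode
MACHINE_LAG_S = 0.05  # assumed perception-to-abort latency for an autonomous system

def distance_covered(speed_m_per_s: float, lag_s: float) -> float:
    """Distance traveled during the reaction delay."""
    return speed_m_per_s * lag_s

for label, speed in [("subsonic cruise missile", 240.0), ("guided bomb in terminal dive", 300.0)]:
    human_m = distance_covered(speed, HUMAN_LAG_S)
    machine_m = distance_covered(speed, MACHINE_LAG_S)
    print(f"{label}: ~{human_m:.0f} m during a human's lag, "
          f"~{machine_m:.0f} m during the assumed machine's lag")
```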
(29:48):
Chapter five, Bounded Morality. Some people think, and this point is controversial, that robots might
turn out to be more humane than humans. The history
of warfare has enough examples of atrocities committed by soldiers
on all sides. For example, in the middle of the
Vietnam War in March nineteen sixty eight, a company of
American soldiers attacked a village in South Vietnam, killing an
(30:12):
estimated five hundred and four unarmed Vietnamese men, women, and children,
all noncombatants. The horrific event became known as the My Lai
massacre. In nineteen sixty nine, journalist Mike Wallace of Sixty
Minutes sat down with Private Paul Meadlo, one of the
soldiers involved in the massacre. Well, I might have killed
(30:34):
about ten or fifteen of them. Men, women, and children?
And babies? And babies. You're married, right? Children? Two. How
can a father of two young children shoot babies? I
don't know. It's just one of them things. Of course, the
(30:55):
vast majority of soldiers do not behave this way. But
humans can be thoughtlessly violent. They can act out of anger,
out of fear, they can seek revenge, they can murder senselessly.
Can robots do better? After all, robots don't get angry,
They're not impulsive. I spoke with someone who thinks that
(31:16):
lethal autonomous weapons could ultimately be more humane. My name
is Ronald Arkin. I'm a regents professor at the Georgia
Institute of Technology in Atlanta, Georgia. I have been a roboticist
for close to thirty-five years. I've been in robot
ethics for maybe the past fifteen. Ron wanted to make
it clear that he doesn't think these robots are perfect,
(31:37):
but they could be better than our current option. I
am absolutely not pro lethal autonomous weapons systems because I'm
not pro lethal weapons of any sort. I am against
killing in all of its manifold forms. But the problem
is that humanity persists in entering into warfare. As such,
we must better protect the innocent in the battlespace, far
(31:59):
better than we currently do. So Ron thinks that lethal
autonomous weapons could prevent some of the unnecessary violence that
occurs in war. Human beings don't do well in warfare
in general, and that's why there's so much room for improvement.
There's unaimed fire, there's mistakes, there's carelessness, and in the
(32:20):
worst case, there's the commission of atrocities, and unfortunately, all
those things lead to the deaths of noncombatants. And robots
will make mistakes too. They probably will make different kinds
of mistakes, but hopefully, if done correctly, they will make
far far less mistakes than human beings do in certain
narrow circumstances where human beings are prone to those errors.
(32:41):
So how would the robots follow these international humanitarian standards?
The way in which we explored initially is looking at
something referred to as bounded morality, which means we look
at very narrow situations. You are not allowed to drop
bombs on schools, on hospitals, mosques, or churches. So the
(33:02):
point is, if you know the geographic location of those,
you can demarcate those on a map, use GPS, and
you can prevent someone from pulling a trigger. But keep
in mind these systems are not only going to decide
when to engage it, but also when not to engage
a target. They can be more conservative.
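To make the geographic demarcation concrete, here is a minimal sketch in Python of a no-strike-zone check in the spirit of what Ron describes: protected sites are listed with GPS coordinates and a radius, and an engagement is blocked for any target inside one of those zones. The coordinates, radii, and function names are illustrative assumptions, not the rules of any fielded system.

```python
# A toy "bounded morality" check: refuse to engage any target that falls
# inside a demarcated protected zone. Sites, radii, and coordinates are made up.
from math import radians, sin, cos, asin, sqrt

PROTECTED_SITES = [
    # (name, latitude, longitude, protected radius in meters) -- illustrative only
    ("hospital", 40.000, -75.000, 500.0),
    ("school",   40.005, -75.010, 400.0),
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000.0 * asin(sqrt(a))

def engagement_permitted(target_lat, target_lon):
    """Block the engagement if the target lies inside any protected zone."""
    for name, lat, lon, radius_m in PROTECTED_SITES:
        if haversine_m(target_lat, target_lon, lat, lon) <= radius_m:
            return False, f"target is within {radius_m:.0f} m of protected site: {name}"
    return True, "no protected site within the demarcated zones"

print(engagement_permitted(40.001, -75.001))  # blocked: too close to the hospital
print(engagement_permitted(40.050, -75.100))  # permitted in this toy example
```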
(33:22):
I believe the potential exists to reduce noncombatant casualties and collateral damage in
almost all of its forms over what we currently have,
so autonomous weapons might operate more efficiently, reduce risk to
one's own troops, operate faster than the enemy, decrease civilian casualties,
and perhaps avoid atrocities. What could possibly go wrong? Chapter six,
(33:51):
What could possibly go wrong? Autonomous systems can do some
pretty remarkable things these days, but of course, robots just
do what their computer code tells them to do. The
computer code is written by humans, or, in the case
of modern artificial intelligence, automatically inferred from training data. What
(34:14):
happens if a robot encounters a situation that the human
or the training data didn't anticipate? Well, things could go
wrong in a hurry. One of the concerns with autonomous
weapons is that they might malfunction in a way that
leads them to begin erroneously engaging targets. Robots run amok,
(34:36):
and this is particularly a risk for weapons that could
target on their own. Now, this builds on a flaw,
a known malfunction of machine guns today called a runaway gun.
A machine gun begins firing for one reason or another,
and because of the nature of a machine gun, where one
(34:56):
bullet's firing cycles the automation and brings in the next bullet.
Once it starts firing, a human doesn't have to do anything,
and it will continue firing bullets. The same sort of
runaway behavior can result from small errors in computer code, and
the problems only multiply when autonomous systems interact at high speed.
(35:17):
Paul Scharre points to Wall Street as a harbinger of
what can go wrong, and we end up some places
like where we are in stock trading today, where many
of the decisions are highly automated, and we get things
like flash crashes. What the heck is going on down here?
I don't know. There is fear. This is capitulation. Really.
(35:39):
In May twenty ten, computer algorithms drove the Dow Jones
down by nearly one thousand points in thirteen minutes, the
steepest drop it had ever seen in a day. The
concern is that a world where militaries have these algorithms
interacting at machine speed, faster than humans can respond, might
(36:02):
result in accidents. And that's something like a flash war.
By a flash war, you mean this thing just cycling
out of control somehow, right, But the algorithms are merely
following their programming, and they escalate a conflict into a
new area of warfare, a new level of violence, in
a way that might make it harder for humans to
(36:22):
then dial things back and bring things back under control.
The system only knows what it's been programmed or been
trained to know. The human can bring together all of
these other pieces of information about context, and human can
understand what's at stake. So there's no Stanislav Petrov on
the loop. That's the fear, right, is that if there's
(36:43):
no Petrov there to say no, what might the machines
do on their own? Chapter seven, Slaughterbots. The history
of weapons technology includes well intentioned efforts to reduce violence
and suffering that end up backfiring. I tell in the
(37:05):
book the story of the Gatling Gun, which was invented
by Richard Gatling during the American Civil War, and he
was motivated to invent this weapon, which was a forerunner
of the machine gun, as an effort to reduce soldiers'
deaths in war. He saw all of these soldiers coming
back maimed and injured from the Civil War, and he said,
wouldn't it be great if we needed fewer people to fight?
(37:26):
So he invented a machine that could allow four people
to deliver the same lethal effects in the battlefield as
a hundred. Now, the effect of this wasn't actually to
reduce the number of people fighting, and when we got to
World War One, we saw massive devastation and a whole
generation of young men in Europe killed because of this technology.
And so I think that's a good cautionary tale as well,
(37:49):
that sometimes the way the technology evolves and how it's
used may not always be how we'd like it to
be used. And even if regular armies can keep autonomous
weapons within the confines of international humanitarian law, what about
rogue actors? Remember those autonomous swarms we discussed with Ash Carter,
those tiny drones that work together to block enemy radar.
(38:13):
What happens if the technology spreads beyond armies? What if
a terrorist adds a gun or an explosive and maybe
facial recognition technology to those little flying bots. In twenty seventeen,
Berkeley professor Stuart Russell and the Future of Life Institute
made a mock documentary called Slaughterbots as part of
(38:33):
their campaign against fully autonomous lethal drones. The nation is
still recovering from yesterday's incident, which officials are describing as
some kind of automated attack which killed eleven US senators
at the Capitol Building. They flew in from everywhere,
but attacked just one side of the aisle. People
were screaming. Unlike nuclear weapons, which are difficult to build,
(38:57):
you know, it's not easy to obtain or work with
weapons grade uranium, the technology to create and modify autonomous
drones is getting more and more accessible. All of the
technology you need from the automation standpoint either exists in
the vehicle already or you can download from GitHub. I
asked former Secretary of Defense Ash Carter if the US
(39:19):
government is concerned about this sort of attack, you're right
to worry about drones. It only takes a
depraved person who can go to a store and buy
a drone to at least scare people and quite possibly
threaten people by hanging a gun off of it or putting
a bomb of some kind on it, and then suddenly
(39:41):
people don't feel safe going to the super Bowl or
landing at the municipal airport. And we can't have that.
I mean, certainly, as your former Secretary of Defense,
my job was to make sure that we didn't put
up with that kind of stuff. I'm supposed to protect
our people, and so how do I protect people against drones?
In general? They can be shot down, but they can
(40:03):
put more drones up than I can conceivably shoot at.
Not to mention, shooting at things in a Super Bowl
stadium is an inherently dangerous solution to this problem. And
so there's a more subtle way of dealing with drones.
I will either jam or take over the radio link,
(40:24):
and then you just tell it to fly away and
go off into the countryside somewhere and crash into a field.
All right, so help me out: if I have enough autonomy,
couldn't I have drones without radio links that just get
their assignment and go off and do things? Yes, and
then your mind as a defender goes to something else.
(40:46):
Now that they've got their idea of what they're looking
for set in their electronic mind, let me change
what I look like. Let me change what the stadium
looks like to it, let me change what the target
looks like. And for the Super Bowl, what do I
do about that? Well, once I know I'm being looked at,
I have the opponent in a box. A few people
(41:09):
know how easy facial recognition is to fool, because I
can wear the right kind of goggles or stick ping pong
balls in my cheeks. There's always a stratagem. Memo to self:
Next time I go to Gillette Stadium for a Patriots game,
bring ping pong balls? Really? Chapter eight, The Moral Buffer.
(41:39):
So we have to worry about whether lethal autonomous weapons
might run amok or fall into the wrong hands.
But there may be an even deeper question. Could fully
autonomous lethal weapons change the way we think about war?
I brought this up with Army of None author Paul Scharre.
So one of the concerns about autonomous weapons is that
(42:00):
it might lead to a breakdown in human moral responsibility
for killing in war. If the weapons themselves are choosing targets,
the people no longer feel like they're the ones doing
the killing. Now, on the plus side of things, that
might lead to less post-traumatic stress in war. These
things have real burdens that weigh on people, but some
(42:22):
argue that the burden of killing should be a requirement
of war. It's worth also asking if nobody slept uneasy
at night, what does that look like? Would there be
less restraint in war and more killing as a result.
Missy Cummings, the former fighter pilot and current Duke professor,
wrote an influential paper in two thousand and four about
(42:44):
how increasing the gap between a person and their actions
creates what she called a moral buffer. People ease the
psychological and emotional pain of warfare by basically superficially layering
in these other technologies to kind of make them lose
(43:06):
track of what they're doing. And this is actually something
that I do think is a problem for lethal autonomous weapons.
If we send a weapon and we tell it to
kill one person and it kills the wrong person, then
it's very likely that people will push off their sense
of responsibility and accountability onto the autonomous agent because they say, well,
(43:27):
it's not my fault, it was the autonomous agent's fault.
On the other hand, Paul Scharre tells a story about
how when there's no buffer, humans rely on an implicit
sense of morality that might be hard to explain to
a robot. There was an incident early in the war
where I was part of an army ranger sniper team
up on the Afghanistan Pakistan border and we were watching
(43:51):
for Taliban fighters infiltrating across the border, and when dawn came,
we weren't nearly as concealed as we had hoped to be,
and very quickly a farmer came out to relieve himself
in the fields and saw us, and we knew that
we were compromised. What I did not expect was what
they did next, which was they sent a little girl
to scout out our position. She was maybe five or six,
(44:14):
She was not particularly sneaky. She stared directly at us
and we heard the chirping of what we later realized
was probably a radio that she had on her, and
she was reporting back information about us, and then she
left. Not long after, some fighters did come, and
then the gunfight that ensued brought out the whole valley,
so we had to leave. But later that day we
were talking about how we would treat a situation like that.
(44:37):
Something that just didn't come up in conversation was the
idea of shooting this little girl. Now, what's interesting is
that under the laws of war, that would have been legal.
The laws of war don't set an age for combatants.
Your status as a combatant is just based on your actions,
and by scouting for the enemy, she was directly participating
in hostilities. If you had a robot that was programmed
(45:00):
to perfectly comply with the laws of war, it would
have shot this little girl. There are sometimes very difficult
decisions that are forced on people in war, but I don't
think this was one of them. But I think it's
worth asking how would a robot know the difference between
what's legal and what's right, and how would you even
begin to program that into a machine? Chapter nine, The
(45:24):
Campaign to stop killer robots. The most fundamental moral objection
to fully autonomous lethal weapons comes down to this, As
a matter of human dignity, only a human should be
able to make the decision to kill another human. Some
things are just morally wrong, regardless of the outcome, regardless
(45:47):
of whether or not, you know, torturing one person saves
a thousand. Torture is wrong. Slavery is wrong. And
from this point of view, one might say, well, look,
it's wrong to let a machine decide whom to kill.
Humans have to make that decision. Some people have been
working hard to turn this moral view into binding international law.
(46:09):
So my name is Mary Wareham. I'm the advocacy director
of the Arms division of Human Rights Watch. I also
coordinate this coalition of groups called the Campaign to Stop
Killer Robots, and that's a coalition of one hundred and
twelve non governmental organizations in about fifty six countries that
is working towards a single goal, which is to create
(46:32):
a prohibition on fully autonomous weapons. The campaign's argument is
rooted in the Geneva Conventions, a set of treaties that
establish humanitarian standards for the conduct of war. There's the
principle of distinction, which says that armed forces must recognize
civilians and may not target them. And there's the principle
(46:54):
of proportionality, which says that incidental civilian deaths can't be
disproportionate to an attack's direct military advantage. The campaign says
killer robots fail these tests. First, they can't distinguish between
combatants and noncombatants or tell when an enemy is surrendering. Second,
(47:15):
they say, deciding whether civilian deaths are disproportionate inherently requires
human judgment. For these reasons and others, the campaign says,
fully autonomous lethal weapons should be banned. Getting an international
treaty to ban fully autonomous lethal weapons might seem like
(47:36):
a total pipe dream, except for one thing. Mary Wareham
and her colleagues already pulled it off for another class
of weapons, land mines. The signing of this historic treaty
at the very end of the century is this generation's
pledge to the future. The International Campaign to Ban Landmines
(47:57):
and its founder, Jody Williams, received the Nobel Peace Prize
in nineteen ninety seven for their work leading to the
Ottawa Convention, which banned the use, production, sale, and stockpiling
of anti-personnel mines. While one hundred and sixty
four nations joined the treaty, some of the world's major
military powers never signed it, including the United States, China,
(48:20):
and Russia. Still, the treaty has worked and even influenced
the holdouts. So the United States did not join, but
it went on to I think prioritize clearance of anti
personnel land mines and remains the biggest donor to clearing
landmines and unexploded ordnance around the world. And then under
the Obama administration, the US committed not to use anti
(48:44):
personnel land mines anywhere in the world other than to
keep the option open for the Korean peninsula. So slowly,
over time countries do I think come in line. One
major difference between banning land mines and banning fully autonomous
lethal weapons is, well, it's pretty clear what a land
mine is, but a fully autonomous lethal weapon that's not
(49:08):
quite as obvious. Six years of discussion at the United
Nations have yet to produce a crisp definition. Trying to
define autonomy is also a very challenging task, and this
is why we focus on the need for meaningful human control.
So what exactly is meaningful human control? The ability for
(49:29):
the human operator and the weapon system to communicate the
ability for the human to intervene in the detection, selection
and engagement of targets, and if necessary to cancel the operation.
Not surprisingly, international talks about the proposed ban are complicated.
I will say that a majority of the countries who
have been talking about killer robots have called for a legally
(49:52):
binding instrument, an international treaty. You've got the countries who
want to be helpful, like France who was proposing working groups,
Germany who's proposed political declarations on the importance of human control.
There's a lot of proposals, I think from Australia about
legal reviews of weapons. Those efforts are being rebuffed by
(50:13):
a smaller handful of what we call militarily powerful countries
who don't want to see new international law. The United
States and Russia have probably been amongst the most problematic
on dismissing the calls for any form of regulation. As
with the landmines, Mary Wareham sees a path forward even
if the major military powers don't join at first. We
(50:34):
cannot stop every potential use. What we want to do, though,
is stigmatize it, so that everybody understands that even if you
could do it, it's not right and you shouldn't. Part
of the campaign strategy is to get other groups on board,
and they're making some progress. I think a big move
in our favor came in November when the United Nations
(50:56):
Secretary General Antonio Guterres made a speech in which
he called for them to be banned under international law.
Machines with the power and the discretion to take human lives
are politically unacceptable, are morally repugnant, and should be
(51:16):
banned by international law. Artificial intelligence researchers have also been
expressing concern. Since twenty fifteen, more than forty five hundred
AI and robotics researchers have signed an open letter calling
for a ban on offensive autonomous weapons beyond meaningful human control.
(51:41):
The signers included Elon Musk, Stephen Hawking, and Demis Hassabis,
the CEO of Google's DeepMind. An excerpt from the
letter quote, if any major military power pushes ahead with
AI weapon development, a global arms race is virtually inevitable,
and the endpoint of this technological trajectory is obvious. Autonomous
(52:06):
weapons will become the Kalashnikovs of tomorrow. Chapter ten. To ban
or not to ban? Not everyone, however, favors the idea
of an international treaty banning all lethal autonomous weapons. In fact,
everyone else I spoke to for this episode, Ron Arkin,
(52:28):
Missy Cummings, Paul Scharre, and Ash Carter oppose it. Interestingly,
though, each had a different reason and a different alternative solution.
Robo ethicist Ron Arkin thinks we'd be missing a chance
to make wars safer. Technology can, must, and should be
used to reduce noncombatant casualties. And if it's not going
(52:50):
to be this, you tell me what you are going
to do to address that horrible problem that exists in
the world right now, with all these innocents being slaughtered
in the battlespace. Something needs to be done, and to me,
this is one possible way. Paul Scharre thinks a comprehensive
ban is just not practical. Instead, he thinks we should
(53:11):
focus on banning lethal autonomous weapons that specifically target people.
That is, anti personnel weapons. In fact, the Landmine Treaty
bans anti personnel land mines, but not say, anti tank
land mines. One of the challenging things about anti personnel
weapons is that you can't stop being a person if
(53:32):
you want to avoid being targeted. So if you have a
weapon that's targeting tanks, you can come out of a
tank and run away. I mean, that's a good way
to effectively surrender and render yourself hors de combat. If
it's even targeting, say, handheld weapons, you could set down
your weapon and run away from it. So do you
think that'd be practical to actually get either a treaty
(53:53):
or at least an understanding that countries should forswear anti
personnel lethal autonomous weapons? I think it's easier for me
to envision how you might get to actual restraint. You
need to make sure that the weapon that countries are
giving up is not so valuable that they can't still
defeat those who might be willing to cheat. And I
(54:15):
think it's really an open question how valuable autonomous weapons are.
But my suspicion is that they are not as valuable
or necessary in an anti personnel context. Former fighter pilot
and Duke professor Missy Cummings thinks it's just not feasible
to ban lethal autonomous weapons. Look, you can't ban people
(54:36):
developing computer code. It's not a productive conversation to start
asking for bans on technologies that are almost as common
as the air we breathe. Right, So we are not
in the world of banning nuclear technologies. And because it's
a different world, we need to come up with new ideas.
What we really need is that we make sure that
(54:59):
we certify these technologies in advance. How do you actually
do the tests to certify that the weapon does at least
as well as a human. That's actually a big problem
because no one on the planet, not the Department of Defense,
not Google, not Uber, not any driverless car company understands
how to certify autonomous technologies. So four driverless cars can
(55:22):
come to an intersection, and they will never prosecute that
intersection the same way; a sun angle can change the
way that these things think. We need to come up
with some out of the box thinking about how to
test these systems to make sure that they're "seeing" the world,
and I'm doing that in air quotes, in a way
that we are expecting them to see the world. And
(55:45):
this is why we need a national agenda to understand
how to do testing to get to a place that
we feel comfortable with the results. Suppose you are successful
and you get the Pentagon and the driverless car folks
to actually do real-world testing, what about the rest of the world?
What's going to happen? So one of the problems that
(56:07):
we see in all technology development is that the rest
of the world doesn't agree with our standards. It is
going to be a problem going forward, So we certainly
should not circumvent testing because other countries are circumventing testing. Finally,
(56:29):
there's former Secretary of Defense Ash Carter. Back in twenty twelve,
Ash was one of the few people who were thinking
about the consequences of autonomous technology. At the time, he
was the third ranking official in the Pentagon in charge
of weapons and technology. He decided to draft a policy,
which the Department of Defense adopted. It was issued as
(56:50):
Directive three thousand point zero nine Autonomy in Weapons Systems.
So I wrote this directive that said, in essence, there
will always be a human being involved in the decision
making when it comes to lethal force in the military
of the United States of America. I'm not going to
(57:10):
accept autonomous weapons in a literal sense because I'm the
guy who has to go out the next morning after
some women and children have been accidentally killed and explain
it to a press conference or a foreign government or
a widow. And suppose I go out there, Eric,
and I say, oh, I don't know how it happened,
the machine, did it? Are you going to allow your
(57:34):
Secretary of Defense to walk out and give that kind
of excuse. No way I would be crucified. I should
be crucified for giving a press conference like that, And
I didn't think any future Secretary of Defense should ever be
in that position, or allow him or herself to be
in that position. That's why I wrote the directive. Because
(57:55):
Ash wrote the directive that currently prevents US forces from
deploying fully autonomous lethal weapons, I was curious to know
what he thought about an international ban. I think it's
reasonable to think about a national ban, and we have
and we have one. Do I think it's reasonable that
I get everybody else to sign up to that. I
don't because I think that people will say they'll sign
(58:18):
up and then not do it. In general, I don't
like fakery in serious matters, and that's too easy to fake.
That is the fake, meaning to fake that they have
forsworn those weapons, and then we find out that they haven't,
and so it turns out they're doing it, and they're
(58:40):
lying about doing it or hiding that they're doing it.
We've run into that all the time. I remember the
Soviet Union said it signed the Biological Weapons Convention. They
ran a very large biological warfare program. They just said
they didn't. All right, but take the situation now. What
would be the harm of the US signing up to
such a thing, at least building the moral opprobrium around
(59:05):
lethal autonomous weapons, because you're building something else at the
same time, which is an illusion of safety for other people.
You're conspiring in a circumstance in which they are lied
to about their own safety, and I feel very uncomfortable
doing that. Paul Scharre sums up the challenge as well.
(59:25):
Countries have widely divergent views on things like a treaty,
but there's also been some early agreement that at some
level we need humans involved in these kinds of decisions.
What's not clear is at what level. Is that the
level of people choosing every single target, people deciding at
a higher level what kinds of targets are to be attacked.
(59:47):
How far are we comfortable removing humans from these decisions.
If we had all the technology in the world, what
decisions would we want humans to make in war? And
why? What decisions in the world require uniquely human judgment,
and why is that? And I think if we can
answer that question, will be in a much better place
to grapple with the challenge of autonomous weapons going forward. Conclusion:
(01:00:18):
choose your planet. So there you have it: fully autonomous lethal weapons.
They might keep our soldiers safer, minimize casualties, and protect civilians,
but delegating more decision making to machines might have big
risks in unanticipated situations. They might make bad decisions that
(01:00:41):
could spiral out of control with no Stanislav Petrov in
the loop. They might even lead to flash wars. The
technology might also fall into the hands of dictators and terrorists,
and it might change us as well by increasing the
moral buffer between us and our actions. But as war
(01:01:02):
gets faster and more complex, will it really be practical
to keep humans involved in decisions? Is it time to
draw a line? Should we press for an international treaty
to completely ban what some call killer robots? What about
a limited ban or just a national ban in the US?
(01:01:23):
Or would all this be naive? Would nations ever believe
each other's promises. It's hard to know, but the right
time to decide about fully autonomous lethal weapons is probably now,
before we've gone too far down the path. The question
is what can you do a lot? It turns out
(01:01:44):
you don't have to be an expert, and you don't
have to do it alone. When enough people get engaged,
we make wise choices. Invite friends over, virtually for now,
in person when it's safe, for dinner and debate about
what we should do. Or organize a conversation for a
book club or a faith group or a campus event.
(01:02:05):
Talk to people with firsthand experience, those who have served
in the military or been refugees from war. And don't
forget to email your elected representatives to ask what they think.
That's how questions get on the national radar. You can
find lots of resources and ideas at our website Brave
New Planet dot org. It's time to choose our planet.
(01:02:29):
The future is up to us. I don't want a
truly autonomous car. I don't want to come to the garage
and the car says, I've fallen in love with the motorcycle
and I won't drive you today because I'm autonomous. Brave
(01:02:55):
New Planet is a co production of the Broad Institute
of MIT and Harvard, Pushkin Industries, and the Boston Globe,
with support from the Alfred P. Sloan Foundation. Our show
is produced by Rebecca Lee Douglas, with Mary Doo. Theme
song composed by Ned Porter, mastering and sound design by
James Garver, fact checking by Joseph Fridman and a stitt
(01:03:16):
An enchant. Special thanks to Christine Heenan and Rachel Roberts
at Clarendon Communications, to Lee McGuire, Kristen Zarelli and Justine
Levin Allerhans at the Broad, to Mia Lobel and Heather Fain
at Pushkin, and to Eli and Edythe Broad, who made the
Broad Institute possible. This is Brave New Planet. I'm Eric Lander.