Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Sleepwalkers is a production of iHeartRadio and
Unusual Productions. I was on a patrol up in the
mountains of Afghanistan, and I was doing a reconnaissance patrol
with a very small Ranger team, and I saw someone
approaching us, a young man, maybe early twenties. That's Paul Scharre, a
(00:29):
former Army Ranger, an elite special operative in the U.S. military.
There were sort of a couple of possibilities that came
into my mind. One was that he was a goat herder,
just out, you know, taking his goats out to eat grass.
Another was that he might be like a woodcutter that's
protecting his property. And another one might be that he
was someone who had seen us. There wasn't a
(00:51):
lot of vegetation in the area, so we were pretty exposed,
and he was coming to kill us. And those all
seemed like very real possibilities. Paul knew of similar situations
where enemy fighters pretended to be civilians, concealing their weapons
until the last minute to ambush. I had a fairly high degree of
confidence that, you know, if we got into a gunfight,
(01:11):
we could outmatch him, the three of us, but if
he surprised us, he might have easily killed one of
us in the process first. So I wanted to maneuver
to a position where I could see him, and I
found him. He was sitting on the edge of a
cliff and looking out over this big valley, and he
had his back to me, and I settled into a
position where I could watch him very closely through my
sniper rifle. It was close enough that the wind carried
(01:34):
his voice towards me and I could hear him talking.
That alarmed me because I couldn't see who he was talking to.
Paul didn't speak the local Afghan languages, Dari or Pashto,
so he couldn't understand what the man was saying. I
thought he might have had a radio and he could
be reporting back to maybe a group of fighters nearby.
(01:54):
I was weighing that possibility versus maybe he's just, you know,
talking to himself or talking to his goats or something.
And then eventually I heard him start singing, and
it struck me that if he was singing, you know,
singing out over the radio to someone about our position,
it seemed like a very strange thing for a fighter
to be doing. And the most likely explanation was that
he was just an innocent goatherd who had
(02:17):
no idea that we were there, and he was singing
to himself just to pass the time, and so Paul
lowered his sniper rifle and he left, no provocation, no
harm done to either side. But that incident really stuck
with me because there was a period of time where
I didn't know whether he was, you know, an innocent
(02:38):
person or a fighter who might have been trying to
kill us. And the actions I would take were
very different in those two worlds. And now I look
back when I think about autonomous weapons and I think
what would a machine do? How could a machine understand
the context that I did, and sort of realize that it
doesn't make sense for a fighter to be singing, that
something so strange probably means this is just an
(03:00):
innocent person? Would a machine be able to do that? It's a profound question
Paul poses: what happens when something like a Predator drone
has as much autonomy as a self-driving car? And
can an AI system ever understand context, which can sometimes
mean the difference between life and death? In this episode
we'll examine AI on the battlefield and the future of
(03:22):
technology-driven warfare. I'm Oz Woloshyn, and this is Sleepwalkers.
So Karah, last episode we talked about the future of
(03:42):
work, and we focused on one big question, which was:
what can AI not do? And what Paul is talking about,
identifying whether someone's a shepherd or an insurgent, identifying targets
on the battlefield, seems to be one of those things. Sure.
But on the other hand, it's important for us to
ask what AI is good at. You know, it's good at
making predictions based on data about what might happen next.
(04:03):
It's good at seeing patterns. So it makes me think,
you know that with enough training data about battlefield interactions,
it could get just as good or better than humans
at this task. Yeah, and of course we don't have
the counterfactual to Paul's story. Sadly, there are probably many
stories where the Army Ranger doesn't wait to hear the
shepherd sing before deciding what to do and just pulls
(04:25):
the trigger. And equally, there are probably many sad cases where
it does turn out to be an insurgent and an
ambush happens to the soldiers. So just from that story,
it's not necessarily clear to me that humans are always
or will always be better than algorithms at identifying targets.
You know, I do think it raises an important question.
We are increasingly comfortable outsourcing many parts of our lives
(04:45):
to technology. You know, who are we going to fall
in love with? How are we going to get to
our cousin's house? You know, that's decision making, though. In
the world of war, we can also build tools to
do things that people don't want to do. A lot
of people have heard about Boston Dynamics. They have this
extremely terrifying video of this four-legged robodog aptly named Spot,
(05:07):
you know, that can run across bumpy ground. It can
go into places that aren't safe for human beings, you know,
And they've just announced this model with a claw on
the back of it that can open doors. There are
so many potential uses for this, from assisting military raids
to sending Spot into buildings that are unsafe for humans. Yeah, like,
a whole pack of Spots can actually pull a truck.
(05:30):
There's a video of it online. Look it up. It's frightening.
Teamwork makes the dream work. Exactly. But no, like, what's
going to happen when that comes to the world of
war and we have packs of robodogs? Well, who knows,
right? The future is unclear, and that's really the question
of this episode. How will AI and robotics change the battlefield?
And how good of an insurance policy is it to
(05:51):
keep a human in the loop, to have somebody, a
person, controlling the pack of Spots? We heard from Paul
Scharre at the beginning. He's now the director of the Technology
and National Security Program at a bipartisan think tank called
the Center for a New American Security. He's recently written
a book called Army of None: Autonomous Weapons and the
Future of War. We're moving to a world where machines
(06:14):
may be making some of the most important decisions on
the battlefield about who lives and dies. And as Paul's
experience in Afghanistan taught him, even elite military training isn't
always enough to tell who is and isn't likely to
be a threat. So understandably there's a great worry in
handing over target identification to a computer, especially as the
(06:34):
stakes are life and death. And as Paul tells us,
autonomous weapons are already being produced and sold around the world.
The best example today of a fully autonomous weapon is a
drone built by Israel called the Harpy, and it's
been sold to a number of countries: Turkey, India, China,
South Korea. It's an anti-radar weapon that loiters in
(06:57):
a search pattern in the sky looking for enemy radars,
and when it finds one, it can then attack the
radar without any further human permission. And that crosses the
line to an autonomous weapon that's able to go find
targets and then attack them all by itself. These are
not the autonomous killer robots of science fiction, setting their
own goals and killing at will. We still send them
(07:20):
into battle and tell them what to look for, but
we no longer control exactly what they do when they
get there. So by analogy, you might think about a
self-driving car. There are really degrees of how
much a car could be self-driving. You could have
something like the Tesla Autopilot today, where there's a steering
wheel and the human could intervene and grab control of
(07:41):
the vehicle. You might have something like the Google self-driving
car that has no steering wheel, and a person
is just merely a passenger. But even in a totally self-driving
car, the human is still choosing the destination. You're
not getting in your car and just saying, car, you know,
take me where you want to go. So at some
level humans are always involved, and it's going to be
the case in warfare. The question is, you know,
(08:04):
when do we cross the threshold where humans have
transferred control of some important and meaningful decisions to machines,
and then what are the legal or ethical complications of that?
It is a hugely important question. In the aftermath of
the Second World War, a hundred and ninety-six countries
signed up to the Geneva Conventions, setting standards for behavior
(08:25):
in battle. But how do we enforce those standards if
the combatants are machines, not people? Beyond having a human
in the loop, one important piece of the puzzle is
a healthy testing and review process, a clear understanding of
how a weapon works and the decisions it will make.
But according to Richard Danzig, a former secretary of the Navy,
(08:45):
that's easier said than done. One of the things that
concerns me is that the technologies are frequently highly classified.
So where for self-driving cars, we typically require that
there be millions of miles driven, and we insist that
external regulators review them for safety. In the military context,
(09:08):
we don't have millions of miles of experience before combat
and we don't typically have any kind of third-party
review that says, wait a minute, here are the risks associated
with this system spinning out of control. We ought to
be using teams to say, hey, what could go wrong
if an adversary wants to attack these and subvert them,
(09:31):
or when they interact with other systems. This kind of
war gaming is valuable, but the best laid plans and
standards can crumble in the face of existential threats, real
or perceived. You only need to think about Hiroshima and
Nagasaki to understand how quickly restraint can give way to
the desire for victory. Here's Paul again. As countries feel
(09:55):
that their national survival might be at stake, they're gonna
be willing to take more risk and put out more experimental weapons.
The use of poison gas in World War One is,
I think, a terrible example of this happening in practice.
Germany was in a panic to find some kind of
wonder weapon that might break the stalemate. So a desire
to get the upper hand will clearly, historically we've seen,
(10:16):
lead militaries to take risks and deploy more experimental technology.
This is one of the scariest things about war to
me, Karah, particularly war involving new technology: the potential for
misunderstanding and unintended consequences. Paul mentions the almost accidental use
of poison gas in World War One, and then there
was the Cuban Missile Crisis, where we almost stumbled into
(10:38):
a nuclear war. And all this potential for misunderstanding is
compounded exponentially in the world of AI because it's not
just humans who are trying to read each other and
make decisions. It's algorithms sort of loose in the wild,
interacting with one another. The Guardian actually had a great
story about this called Franken-algorithms: The Deadly Consequences
of Unpredictable Code, and the article makes the point that
(11:00):
the stock market flash crash of 2010 was actually caused by
algorithms interacting with one another. You know, it's hard not
to think that this could happen in the wild, so
to speak. It's a scary thought, and it's made even
scarier because there's just so much potential all around for misunderstanding.
And that's something Richard Danzig is seriously concerned about. When
we talked about sending a ship on a mission, policy
(11:22):
makers by and large understand what that means and how
others will perceive it if the ship comes into their waters.
Whereas when we start talking about artificial intelligence, policy makers,
if there are five people in a room, may readily
envision five different things. We need the people making decisions
to understand both the situations they're dealing with and also
(11:46):
how the tools they're using actually work. And that's not
easy when it comes to AI. Part of the issue
is the so-called black box problem. Currently, we understand
the principles of how a neural network uses probabilities to
reach a conclusion, but we can't interrogate the millions of
micro-decisions it makes along the way. This is a
huge barrier to understanding weapons systems powered by AI.
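To make that concrete, here is a minimal sketch in Python, purely illustrative and not drawn from the show or from any real system: a tiny network whose answer comes out as a tidy probability, but whose insides are just matrices of learned numbers. The layer sizes, weights, and input are assumptions made up for the example.

```python
# A minimal sketch of the black box problem: the output is a readable
# probability, but "looking inside" only shows opaque weight matrices.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights, standing in for parameters learned from data.
W1 = rng.normal(size=(4, 8))   # input features -> hidden layer
W2 = rng.normal(size=(8, 2))   # hidden layer -> two classes

def predict(x):
    """Forward pass: return a probability for each of two classes."""
    hidden = np.tanh(x @ W1)           # one layer of many tiny "micro-decisions"
    logits = hidden @ W2
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()             # softmax probabilities

x = np.array([0.2, -1.3, 0.7, 0.05])   # a made-up input
print(predict(x))   # a confident-looking answer, e.g. [0.8 0.2]
print(W1)           # the "explanation": just a grid of numbers, nothing to interrogate
```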
(12:09):
I wanted to know how the researchers developing new military
technology think about the black box problem. So I spoke
with Arati Prabhakar, the former head of DARPA, the Defense
Advanced Research Projects Agency, and Arati shared a story about
how the black box problem plays out. Many years ago now,
there was a wonderful paper from Stanford that showed a
(12:29):
machine system that could label images. This was a girl
blowing out the candles on her birthday cake, or a
construction worker doing something, so fairly complex analysis of what
was going on in this picture. And it would get
one right, and it would get ten right, and it would
get a hundred right. And then there was a picture
that every human being would say, that's a baby holding
an electric toothbrush, but the machine said it's a
(12:52):
small boy holding a baseball bat. And you know, you
just look at it and you think, what? What were
you thinking? And this, I think, is
a great illustration of the black box nature of
these learning systems because they've been trained on all this
volume of data, but when you look inside to say, well,
what went wrong there, you know, you just see a
(13:13):
bunch of nodes with weights from being trained, and so
they're really opaque. Arati's example is kind of cute, but
think about it for a moment. The difference between a
baby holding an electric toothbrush and a small boy holding
a baseball bat could also be, in Afghanistan, the difference
between Paul's shepherd and a militant, in other words, the difference
(13:35):
between life and death. Yeah, and it's a bit daunting
that we are becoming more reliant on something that we
continue not to understand fully, don't you think? Absolutely. And
Henry Kissinger actually wrote a piece on this for
The Atlantic called How the Enlightenment Ends. Big stuff, for sure.
Heady. And what Kissinger argued is that because we're
(13:56):
unable to interrogate the output of algorithms, as we rely
on them more and more to classify the world around us, we
may actually start to lose the ability to reason for ourselves.
It's not inevitable though, that AI will always be opaque.
The EU is actually working on a policy that decisions
made by AI need to be explainable to the people they affect.
That may be a policy that's easier said than done.
(14:18):
Although the so-called next wave of AI is all
about explainable AI, and it's actually a major initiative right
now at DARPA. Here's Arati again. Explainable AI has
been part of starting an entire new field of inquiry
in artificial intelligence, to couple that kind of statistical power
that machine learning systems have with systems that explain how
(14:42):
they got the results that they got. In order for
us human users to be able to know when to
trust those machine learning systems and when not to trust them.
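As a rough illustration of what one such explanation tool can look like, here is a minimal sketch of occlusion sensitivity, a common attribution idea, not anything specific to DARPA's program. The model, names, and toy image below are all assumptions for the example: you hide pieces of the input and watch how much the model's confidence drops.

```python
# A minimal occlusion-sensitivity sketch: regions whose removal hurts the
# prediction the most are the regions the model is "relying" on.
import numpy as np

def occlusion_map(model, image, target_class, patch=4):
    """Return a grid of confidence drops, one entry per hidden patch."""
    h, w = image.shape
    base = model(image)[target_class]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0      # blank out one patch
            heat[i // patch, j // patch] = base - model(occluded)[target_class]
    return heat

# Toy "model": confidence in class 0 is just the mean brightness of the image.
toy_model = lambda img: np.array([img.mean(), 1.0 - img.mean()])
img = np.zeros((8, 8))
img[1:3, 1:3] = 1.0                                        # a small bright square
print(occlusion_map(toy_model, img, target_class=0))       # the drop lines up with the square
```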
What Arati is describing would be a huge breakthrough in AI.
Understanding how neural networks make their decisions would allow us
to harness the power of the technology much more safely, and
(15:02):
not just on the battlefield. When we come back, we
look at how much of the technology we take for
granted in our everyday lives actually originated in the military.
So DARPA, the defense agency with an annual budget of
three point five billion dollars, its motto is to cast
(15:25):
a javelin into the infinite spaces of the future. What
you may not know is how much of the technology
we all use every day came right out of DARPA.
I think about this every time I use my smartphone
because that's a beautiful, seamless integration of a whole host
of technologies that were sparked many, many years ago by DARPA.
(15:48):
So the chip in your cell phone that talks to
the cell tower is based on materials and electronics technology
that was developed for communications and radar systems. The part
that knows when you've rotated your phone is MEMS technology
that had huge early support from DARPA. But also Siri
and other intelligent agents are based on the artificial intelligence
(16:12):
research that was done. But what fueled this wave of
incredible innovation at DARPA? Well, actually, war. DARPA is a
very American concept. In nineteen fifty-seven, the Soviet Union
put the first artificial satellite into orbit. That was Sputnik.
There was a lot of excitement because human beings had
(16:33):
never done that before, but of course also quite a
shock for the United States at the height of the
Cold War. Sputnik was a reminder that in addition to
working on the problems that you knew you had, you
also needed to have people who came to work every
day to think about those kinds of technological surprises. And
so DARPA was started as a reaction to that technological
(16:56):
surprise of Sputnik. Its mission for sixty years has been
to create those kinds of technological surprises, and its history
is one in which it's accomplished that mission. DARPA is
known as the place that made the early stages of
each revolution in artificial intelligence, and of course most memorably
(17:17):
for starting the ARPANET and writing the protocols that
became the Internet that we have today. Think about that:
the technology that forms the architecture of our daily lives
in the twenty-first century, the Internet, was created by
a defense agency whose mission was to outthink the Soviet Union,
and in some sense all of the technologies we've looked
(17:39):
at so far in Sleepwalkers are the outgrowth of DARPA's work.
What really enabled this was DARPA's decision to allow their
technology into the US private sector and to let entrepreneurs
build on top of it. Absolutely, and as vital as that
were the companies and the entrepreneurs and the investors who
(17:59):
saw that you could turn those raw research results
into this seamless, beautiful product that we now all
live with all the time. Karah, it's amazing to think just
how much Silicon Valley really stands on the shoulders
of DARPA. But even outside of what we think of
as the tech world, there's plenty of examples of military
technology living with us. For example, microwaves, which were an
(18:22):
accidental byproduct of radar technology, and then the good old Roomba,
which was originally developed as mine-sweeping technology and is
still a sweeping technology in my house. Yeah,
you know. There's this concept of dual use technology, which
we've talked about a few times. It's the idea that
technology developed for the military can have civilian applications and
(18:44):
vice versa. We talked about this with Blink Identity and
facial recognition. Do you remember last year, the Project Maven
walkout? Yeah, Google, right. So that was a
project for the Pentagon. Over three thousand employees protested that
they didn't want to develop that technology. Google pulled the
plug on the project. The problem, though, is that, you know,
you can say you're developing you know, AI and target
(19:06):
recognition for the military, or you can say you're developing
AI to recognize what's happening in images. But it's the
same technology, and once it gets into the wild, anyone
who wants can use it. And that's actually something that
Arati spoke about, about innovation traveling from DARPA to
the private sector. Yeah, and now DARPA actually has this
younger sibling called the Defense Innovation Unit, DIUx,
(19:30):
whose job is actually to invest in and incubate technologies from
the private sector that could be helpful for defense. So
this is basically the bridge from Silicon Valley to D.C.,
which is taking things from the consumer space and applying
them for military use. And one of these technologies is
called Halo. They're basically headphones that electrify your brain in
(19:50):
order to stimulate growth. That's right. So I went to
Connecticut to test them out with former Navy SEAL John Wilson.
I was a SEAL for twelve years. I've served in Iraq
multiple times, Afghanistan, I went to Mogadishu, and then South America.
As you can imagine, drug warfare is still a really
(20:10):
big issue, so we have military units down there to
combat the cartels. We were gone three hundred days out
of the year training, and then we were deployed for
months on end. So that's what we lived and breathed day
in and day out. We weren't going home at night.
These days, John is back with his family and he's
especially interested in how new technology can help SEALs past
(20:31):
and present. One such technology is the Halo headset, which
DIUx invested in. The headset uses an
electric current to prime the brain for so-called neuroplasticity,
in other words, the ability to learn and learn quickly.
For us, I recognized that when we do a pistol draw,
that movement is a repetitive movement that we've done thousands
(20:52):
and thousands and thousands of times over and over again.
And what this does is it primes the brain to
learn those repetitive movements faster. And have you genuinely noticed
the difference? When I first came across this, I got
a bunch of SEALs together and went out to the range.
We neuro-primed and we started shooting, so we got
a baseline. Did this for a month, looked at our
scores, and our scores were light years better. And by light
(21:15):
years I mean milliseconds, right. But milliseconds on the battlefield
equate to life or death. I may be as far
away from being a military person as anyone could be,
but John has kindly agreed to lead me through a
Navy SEAL workout. So what we have here, it looks
like a Beats headset, right, with some strange nodules. Yeah,
so what we have on the top, as mentioned, are
(21:35):
these little nodes. What do those do? Those are going
to send a current into the cortex, the frontal cortex,
into your brain. Is that safe? That's a great question.
I'm gonna bypass that question. It's safe. Turns out it
is safe. It's been tested by DARPA and others. So
I had to put it on and John agreed to
(21:56):
help me use the headset to neuro-prime before putting
me through my paces. All right, let's crank
it up. So, I don't know if you can feel a difference there,
but we've got twenty minutes of neuro-priming. It feels
like something pinching in my head a bit. Yeah, it's not
a comfortable feeling, right? And do you get used to it? No,
you never do. But it's worth it, right. What is
(22:17):
that uncomfortable sensation? What's happening there? That's electricity. Yeah, so
it's going into your brain right now, and it's
getting your brain into a state of neuroplasticity. Hyper-learning,
essentially, is what that state allows you to do, and
it just allows you to learn faster and learn more information.
After a warm-up, which was in fact a lot
more intense than my normal workout, two, three, come on, up,
(22:39):
drive up and hold. Is this still the warm-up? Yes,
it's still the warm-up. It was time to take
the headset off and start the real thing. Or so
I thought. All right, ready to keep going? You
can do twenty reps. Oh my god, oh god, ah,
(23:09):
is this what Hell Week is like? I think it must
be worse than this. Yeah, why do you ask? Thankfully,
the workout came to an end and without any injury. Though,
to be honest, I couldn't tell if the neuropriming had
(23:30):
worked for me. Because Halo enhances the brain's ability to learn,
studies show best results when it's used over time. In
other words, if I wore the headset before every workout,
I might start to notice the difference in how quickly
I performed. But somehow I trusted John, talking about the
draw time for his weapons. So the battlefields, Afghanistan,
(23:51):
Iraq, Syria. It's my happy place. Yeah, who can understand that?
It's my happy place because I'm around people that I know
would do anything for me, that love me, and
I love them. For John, being on the battlefield wasn't
the hardest part; leaving was. The transition is not
an easy thing. It's the hardest thing I've ever done.
(24:14):
When we transition, we do it by ourselves. We're trying
to solve a complex problem, which we love to do.
But normally when that takes place, you have your team
around you and you're going to figure it out because
you know you'll never let the person to the left and
right of you down. When you're trying to do this
by yourself, you have nobody to talk to when it
starts getting hard, and you go dark, is what we
call it. So we go into our shell. We
(24:34):
kind of ostracize ourselves from society, and that's when
bad stuff starts happening. John recalls a recent narrow escape
for one of his SEAL comrades. He was a SEAL
Team Six guy and ended up going through a divorce.
He had a newborn that he had to stay home
and take care of, so he went to a really
dark place. He just called me, and unbeknownst to me,
(24:56):
he'd just put his son to bed and he was
sitting in the car with a pistol. I didn't know that.
I just picked up the phone and asked him how
he was doing, and he said he's doing okay, but
he needs some help. I said, we got you, brother.
That's all I said. And that was enough for him
to put that gun away, go back inside and take
care of that little guy. Just me saying we got you.
That camaraderie saved John's friend's life, but returning veterans need
(25:20):
something more than community. They need a purpose, a mission,
and that can be hard to find in civilian life.
You don't know where you fit, you don't know your role
in the family, in your tribe, you don't know your
role in society, and you're just trying to get by
to put food on the table. And there's a
void there now. And according to John, that's where Halo
comes in. Just because we're SEALs doesn't mean we all
(25:41):
want to end up doing security work. There's a lot
of people that want to maybe go into finance
or be a lawyer. Halo allows us to succeed and
accelerate that process. So if people want to
go back to school, putting a Halo headset on before
you study; or people maybe wanting to get a job that
requires multiple languages, you can throw on the Halo and
(26:02):
pick up those languages at an accelerated pace. Do you
think it's more powerful as something to believe in, like
if I put this headset on, I'll achieve my goals?
Or is it more powerful as a technological solution? Or is it
somewhere in between those two things? I think it's probably
both, right? So our strength and conditioning coach in the
SEAL teams, he had a bottle. PE is what
(26:22):
he wrote on it, which stood for placebo effect. It
was just water, but guys would come over saying
they're hurt, and he would just spray a little bit
of this water on them, and every single
time they would walk away like, oh, I feel better, thanks, coach.
My point is, the research there supports that this
actually works. But who gives a shit even if it didn't,
because people are going to believe in themselves, and that's part of it.
(26:43):
That's the bottle, in my opinion. If it takes a
headset to get there, then great, but we know that
this headset works and is going to help accelerate you
in achieving that goal. We've talked about dual use in terms
of military technology that enters the civilian world and vice versa,
and Halo is just that: a consumer product that is
also used by the armed forces, but it has a
(27:04):
much more profound dual use. It can save soldiers' lives twice,
the first time on the battlefield, where shaving milliseconds off
reaction time can be the difference between life and death, and
the second time when they return home. Halo can help
them develop new skills and, perhaps even more importantly, give
them the hope they need to keep going. When we
(27:25):
come back, we return to DARPA and how to ensure
that we design new military technologies with worst-case scenarios
top of mind. One of the central contentions of Sleepwalkers
is that our creations reflect us, and knowing this, we
(27:47):
need to be deliberate about how we tell them to
behave. We talked in episode three of this series, The Watchman,
about automation bias, the very human habit of treating the
output of computers as infallible even while ignoring the inputs
that we've given them. And recognizing this, Arati made it
a central tenet of her tenure at DARPA to argue
that technology is not inevitable. There's a tendency to give
(28:13):
the active role to the technology. It's what the AI
will do to us. I want to keep bringing us
back to the fact that these technologies are our creations.
They're built by human beings. We have this enormous privilege
that we get to work on the powerful technologies that
can shape the progress of our societies. That privilege comes
(28:36):
with a responsibility to ask what could possibly go wrong?
What could possibly go wrong? It's a legitimate question, but
there's also a reason it's become a meme. It's notoriously
hard to answer. This is especially true in times of war,
when new technologies are often rushed into action without being
fully understood. Here's Paul Scharre again. World War One is
(29:00):
a wonderful, a terrible example of what can
happen when we see new technologies change warfare in ways
that policymakers were not prepared for. You know, the Industrial
Revolution brought not just, you know, locomotives, but also
cars, tanks, airplanes, machine guns that then were
(29:23):
used to industrialize warfare in a totally new way that
dramatically changed the scale and the speed of killing that
was possible. The Gatling gun: people still had to crank
the gun, but then it automated the process of loading
and firing bullets. We began this episode talking about the
new dangers posed by automated weapons. Well, the Gatling gun
(29:45):
was actually one of the world's first, and as Paul
told us, its invention had a ripple effect that its
inventor could not have foreseen. The inventor, Richard Gatling, did
this to save lives. He was looking at people
coming back wounded and killed from the American Civil War.
He wanted to build a machine that could reduce the
(30:06):
number of soldiers that were needed on the battlefield as
a way to save lives. And that sounds like, you know,
a very well-meaning idea, and in practice, as
the Gatling gun evolved into the machine gun in World
War One, we saw a scale of killing that was
just unprecedented, and a whole generation of
European men wiped out on the battlefield. And so I think
(30:29):
it's an important cautionary tale for our ability to
predict how this technology will be used. The name of
this podcast, Sleepwalkers, is borrowed from a book called The Sleepwalkers,
How Europe Went to War in nineteen fourteen, written by
the historian Christopher Clark. And one of the big questions
I've been asking is, are we at a moment like
(30:50):
we were on the eve of World War One when
we haven't fully understood the implications of our new technology?
I asked Richard Danzig, the former Navy Secretary, about the analogy.
There is an analogy from World War One. European military
leaders developed mobilization plans to increase their own capabilities in
(31:11):
the event of an attack, and they underestimated the degree
to which that created rigidities and interactions, so that in
the end, the railroad timetables generated a war that perhaps
no one intended to be engaged in. People think that
they're driving the cart, and in reality, the horses of
(31:34):
technology are frequently pulling us in directions that we don't
anticipate and can't control. So are we better placed now
to understand the implications of AI and new technology for
global conflict than Europe was in nineteen fourteen? I don't believe
our understanding of AI is greater than their understanding at
that point. We will make these mistakes too. I cannot
estimate their significance or their frequency, but I'm
rather confident we will lose control, that we will make
mistakes of that kind and cause unintended consequences. So to me,
the interesting question is not can I predict their frequency?
(32:18):
The interesting question is, what can I do in advance
if I recognize that? One of them is well
represented by DARPA's Safe Genes project: that government agency
is saying, if people edit genes, but it turns out
they escape into the environment and proliferate, how do we
(32:39):
program them to begin with so that we can shut
them off? When it's so difficult to predict how new
technologies will be used and misused, it's hugely important that
we build in precautions while they're still being researched and developed.
It's difficult, but we have to do our best to
anticipate the future dangers of a technology long before potential
(33:00):
deployment on the battlefield. Thankfully, that philosophy governed Arati Prabhakar's
work at DARPA. What we developed was a way of
grappling with the ethical implications of these technologies. It started
by being open with ourselves, not just about our hopes
for the technology, but also our fears, and looking at
each other in the eye and saying, here's what we
(33:22):
think really is possible, but also here's what could really
go wrong. Were there any specific programs that you were
tempted by as a technologist, but in the end you
had to kill because they didn't meet your ethical standards?
I don't really have anything to say about that. The
answer is yes, but I can't give you an example
because it was classified. Karah, we've talked about the design
phase and thinking from R&D onwards about making
new weapons systems safe. But it doesn't always work out
that way. Yeah, the genie does have a habit of
getting out of the bottle. We've talked about dual
use before. Even seemingly benign technologies can be hugely destructive.
The one that blows my mind is the story of
Arthur Galston, who was a plant biologist who discovered, while
(34:07):
he was a graduate student, this compound that helps soybeans
flower faster. He also learned that if this compound were
applied in excess, that it would cause the plant to
shed its leaves. And when Galston discovered this defoliant effect,
that's what was abused by biological warfare scientists who would
then go on to develop Agent Orange. I just got
(34:28):
back from a trip to Vietnam actually, where the effects
of Agent Orange are still being felt. It's actually a
genotoxin, which causes deformities through the generations. So that
is a truly horrific one, Karah. And it makes us
think: it wasn't those chemists who were releasing Agent Orange
over Vietnam, it was the US military. So the idea
that the person who creates the technology gets to control
(34:49):
what happens to it is simply not the case, and
so we need to move forward with the assumption that
AI weapons will leave the laboratory and exist in the world.
And the central question is how helpful is it to have
a human in the loop. Well, according to Richard Danzig,
humans are of increasingly limited utility. I think there are
(35:10):
circumstances where human control is useful, but I don't think
that's the most useful approach. And the reason for that
is because the power is in the machine. So many
decisions that we care about are made at extraordinarily high speed,
and it just isn't time for the humans to assimilate
(35:30):
them and make correct judgments. Even the president declaring or
not nuclear war and responding might have fifteen minutes to
make a judgment. In other words, the whole idea of
a human in the loop making the final cool is
something of an illusion. At the very least, it relies
on us making wise decisions at lightning speed and under pressure.
(35:54):
And there's another problem. All of the information people like
the president use to make decisions has already been filtered
through several computer systems. So when the president reviews information
to make a tactical choice, he or she is already
relying on automation. He's extraordinarily dependent on what the machines
are telling him, what the sensors have interpreted for him,
(36:17):
and what the algorithms say about the trajectories of missiles that
have been launched. So realistically, he's on the cart being
pulled by the horses of these technologies. If that's
true for the president, think what it's like for the
person who's a sergeant in the field manning a Patriot missile battalion.
(36:39):
And if it shows incoming missiles, he has seconds, or at
best minutes to respond and has to make decisions. We
know how fallible we are as decision makers, and we
know how dependent we already are on computer systems to
guide our decisions. So what can we do to prevent
ourselves from stumbling into a conflict that no one wants?
(37:02):
But I think we need to recognize that science is
now diffused around the globe, and we need a common
kind of understanding about how to reduce these risks, And
then we need some joint planning for the contingency that
these do escape. What do we do if a newly
engineered genetic system gets out there into the wild? Well,
(37:26):
that's not just a problem for the Chinese. If it
happens to happen in China, technology spreads, it gets modified, copied,
and hacked, and once something is out of the lab,
it's anyone's guess as to what happens. And countries are
slowly trying to establish standards for AI. But as Paul
Scharre argues, creating a global framework governing AI in warfare
(37:47):
is a tall order. It's a very hard area to
think about how we mitigate that risk, because countries
are not going to talk about the things that they're
doing in cyberspace. In the financial sector, they've installed
things like circuit breakers that would take stocks offline if
the price moves too quickly. Well, there's no referee to
call time out in warfare. So if we're going to
(38:10):
manage those risks in the military space, there do have to
be circuit breakers, if you will, that people build
into our own weapons systems: limits on them, ways to
interject human control, to maintain control if things begin to
move in unexpected ways.
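As a rough software analogy, and not a description of any real weapons system, here is a minimal sketch of that circuit breaker idea: an automated loop that stops acting and waits for a person whenever its confidence drops or its pace exceeds the limits built into it. All thresholds and names here are illustrative assumptions.

```python
# A minimal "circuit breaker" sketch: automation may act only within built-in
# limits; outside them it trips and stays tripped until a human resets it.
import time

class CircuitBreaker:
    def __init__(self, max_actions_per_minute=5, min_confidence=0.95):
        self.max_rate = max_actions_per_minute
        self.min_confidence = min_confidence
        self.recent = []          # timestamps of recent automated actions
        self.tripped = False

    def allow(self, confidence):
        """Return True only if automation may act; otherwise escalate to a human."""
        now = time.time()
        self.recent = [t for t in self.recent if now - t < 60]
        if self.tripped or confidence < self.min_confidence or len(self.recent) >= self.max_rate:
            self.tripped = True   # stays tripped until a person resets it
            return False
        self.recent.append(now)
        return True

    def human_reset(self):
        """A deliberate human decision is required to resume automation."""
        self.tripped = False
        self.recent = []

breaker = CircuitBreaker()
for confidence in [0.99, 0.97, 0.62, 0.99]:
    if breaker.allow(confidence):
        print("automated action taken")
    else:
        print("circuit breaker tripped: escalate to human operator")
```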
And it's worth acknowledging really up front that there's a trade-off there: every time that
(38:32):
a military puts guard rails on a weapon system or
inserts a human in the loop as a check, that's
potentially slowing down the effectiveness of their weapon. And there's
a risk that they are going to be afraid that
an adversary might not do that and might get an
edge on them. And that dynamic is really the crux
of the problem here. It's hard to get to a
(38:55):
place where countries trust each other enough to engage
in mutual restraint. But we may not have a choice.
Up until now, our defense policy has been based on
the assumption of technical superiority, and as Arati argues, we
can no longer rely on that. You have a model
that is based on owning all the technology and knowing
(39:16):
that no one else can have access to it for
two or three decades, and today those assumptions simply don't hold.
We are not the only people who can innovate right now.
The history of new technology and warfare is frankly disturbing.
When we create new weapons, we tend to use them.
We've talked about the atomic bomb in Japan, and about
(39:37):
poisonous gas in World War One, and even about the
Gatling gun, one of the world's first automated weapons designed
to reduce the number of combatants required to wage war.
It decimated an entire generation in Europe. We haven't yet
seen what happens when AI weapons systems begin to interact
with one another, but chances are we will in our lifetime.
(39:59):
So the temptation in all of this can be to
desperately try to hit pause on new technology, but Arati
argues that would be the wrong approach. Historically, we
are drawn forward by the enormous potential that these technologies
can enhance our lives, and at the same time
we're repelled by the consequences that we understand could be
(40:24):
fundamentally wrong. So I think that's the tension. But in
aggregate and over time, I do think that technology has
lifted us up, has advanced us. You know, when you
play the parlor game of asking your friends what period
in history they would rather live in, I might want
to visit, but there's no other time in history I
(40:45):
want to live in. And I think the future is
going to be fraught with problems, and it's still going
to be a better place than the one that we're in.
It's true that technology has made so many parts of
our lives easier, healthier, and safer, and it's also true
that the technology we create has the potential to be
ever more destructive. We've talked about dual use in this episode,
(41:08):
and as a matter of fact, many DARPA programs have
found their way into revolutionizing medicine. In the next episode,
we look at some of the incredible applications of AI
in the world of healthcare, from accurately predicting time of
death to decoding the human genome. I'm Oz Woloshyn. See you
next time. Sleepwalkers is a production of iHeartRadio
(41:41):
and Unusual Productions. For the latest AI news, live interviews,
and behind-the-scenes footage, find us on Instagram at
Sleepwalkers Podcast or at sleepwalkerspodcast dot com. Sleepwalkers is
hosted by me, Oz Woloshyn, and co-hosted by me,
Karah Preiss. We're produced by Julian Weller with help from
Jacopo Penzo and Taylor Chacogne. Mixing by Tristan McNeil and
(42:04):
Julian Weller. Our story editor is Matthew Riddle. Recording assistance
this episode from Chris Hambroke and Jackson Bierfeld. Sleepwalkers is
executive produced by me, Oz Woloshyn, and Mangesh Hattikudur. For more
podcasts from iHeartRadio, visit the iHeartRadio app,
Apple Podcasts, or wherever you listen to your favorite shows.