Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Welcome to Stuff to Blow Your Mind from How Stuff
Works dot com. Hey, welcome to Stuff to Blow Your Mind.
My name is Robert Lamb and I'm Joe McCormick. And
so today we're gonna be talking about an issue
in the future of technology, actually not just the future,
(00:23):
the present of technology. And it's going to have to
do with how we assess threats based on different types
of technological regimes that exist in the world. And so
you and I, Robert, are aware that very often we
fear the wrong things, right? Oh yes, I mean, we
fear things that are completely disproportionate to the level
(00:47):
of threat they represent in our lives. And this is
obvious in some ways because we have phobias of things
that are totally harmless. Some people are afraid of balloons
or something. I guess they're not harmless to birds and fish,
but harmless to us. And some people are afraid of,
I don't know, public speaking, something that is in some
ways genuinely threatening to your reputation maybe, but it is
not a threat to your body,
(01:08):
to your bodily integrity. Yeah, and we have a lot
of these, uh, what some people call paper tigers. In fact,
we have a whole classic episode
of Stuff to Blow Your Mind about paper tigers, where
something that is ultimately not life threatening, or
even something that's not even gonna cause you actual physical injury, uh,
we build it up in our minds to the level
(01:29):
it's really on par with some sort of large
predatory creature, you know, in our primordial past.
You know, like we give it the same credence we
would give a tiger leaping out of the bushes at us.
So those are the obvious ways that we fear the
wrong things. We fear stuff that isn't even literally going
to hurt us. But then people are just wrong in
the ways they assess the relative dangers of actual physical threats,
(01:53):
like attacks by animals. You know, people are more afraid of, say,
being attacked and killed by a shark. But sharks kill
almost nobody. They kill like less than ten people on
average per year worldwide. Uh, you know, you are
extremely unlikely to die by shark attack, even if you
swim a lot. Meanwhile, animals that don't inspire nearly
(02:14):
as much fear or grip our minds in the
same way, like dogs, kill way more people. Dogs kill
tens of thousands of people every year, and mosquitoes, which
spread diseases, they literally kill hundreds of thousands of people annually.
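[Editor's note: the inversion being described here is easy to make concrete in a few lines of code. The death counts are the rough figures quoted above; the "vividness" scores are invented stand-ins, purely for illustration.]

```python
# Toy illustration of the availability heuristic: what we fear tracks how
# vividly a threat comes to mind, not how many people it actually kills.
# Death counts are the rough figures from the episode; vividness scores
# are invented stand-ins for how easily scary images come to mind.
hazards = {
    "sharks":     {"deaths_per_year": 10,      "vividness": 9.5},
    "dogs":       {"deaths_per_year": 25_000,  "vividness": 3.0},
    "mosquitoes": {"deaths_per_year": 700_000, "vividness": 1.0},
}

by_actual_risk = sorted(hazards, key=lambda h: -hazards[h]["deaths_per_year"])
by_felt_fear = sorted(hazards, key=lambda h: -hazards[h]["vividness"])

print("ranked by actual deaths:", by_actual_risk)  # mosquitoes, dogs, sharks
print("ranked by vivid recall: ", by_felt_fear)    # sharks, dogs, mosquitoes
```

The two rankings come out exactly reversed, which is the whole point of the heuristic.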
Nobody's got mosquito phobia. I mean I guess maybe some
people do. Yeah, it's true, Like at least some people
have dog phobias. I think you can basically
(02:35):
look to horror cinema and see this played out, right?
How many dog horror films can you think of? Yes,
you can think of some key ones here and there,
mainly Cujo, right, um, and a few others of note
you can. And then when you think of shark horror films,
there's just an endless supply. You could spend the rest
of your life I think watching terrible shark movies. But
(02:56):
when it comes to mosquitoes, you basically just have Mansquito.
And that's not even just a straight up mosquito movie.
It's a human mosquito hybrid. Well, part of
it has to do with the kinds of imagery that
excite our brain. I mean, it's just hard to
make a mosquito look scary. Mosquitoes look irritating. Yeah, but
then on the other hand, uh, ticks are horrifying looking
(03:18):
and carry illnesses. And as we pointed out in the past,
there's really only one tick horror film,
and it's terrific, but there's only one of them. You're
talking about the one with Seth Green and Clint Howard,
Ron Howard's brother. Yeah, I'll make sure that we link
to the Trailer Talk video episode on the landing page
for this episode of Stuff to Blow Your Mind, in
(03:39):
which we talk about this cinematic jewel. Well, if you must.
But people are also totally off base in the way
they assess the relative dangers of, say, travel threats. Everybody's
heard the statistics, right, about travel methods. Oh yeah,
fear of flying being a huge one. Um, you know,
driving is considerably more dangerous, and yet it's flying
(04:00):
that fills so many of us with varying levels of anxiety.
I can personally relate to this, and I'm continually fascinated
and frustrated by the way this deeper irrational fear
can overpower, or at least sufficiently overpower, my rational understanding
of the risks. Yeah, I know exactly what you're talking about.
I have had various levels of fear of flying
(04:22):
in the past. And yet I totally understand. Like, I've
read all the statistics about how, per mile traveled, you
are so much safer in a commercial airliner
than you are, say, driving yourself somewhere. Yeah. Like, just
as a rational human being, you think, well, I can
educate myself out of this fear. Like,
with the airplane thing, I think back to,
(04:43):
um, was it the escape pod episode, where we talked
about why there are no escape pods on airplanes?
And it doesn't make sense? Yeah, that it doesn't make sense.
And also, if you are going to encounter a dangerous
situation on an airplane, like, it's gonna be far more often
the takeoff or the landing. Um.
And uh, and yet I'll run through these facts,
(05:04):
I'll run through the material that we research, and uh,
it still doesn't quite penetrate the deep-set anxiety
that sometimes kicks in while flying. Yeah. And this disconnect
between what we fear and the actual threat is kind
of troubling. But it actually gets worse, because you
can point out some scenarios where fearing the wrong thing
has direct consequences in reality. So I was reading this
(05:28):
short article on Edge dot org by the psychology
professor David G. Myers about how humans just do not
accurately gauge what the real threats they face are. For example,
people tend to be very afraid of terrorist attacks, and
of course that makes sense. Terrorist attacks are straightforwardly terrifying;
they are a horrible thing. But statistically they
(05:50):
are so unlikely to harm you. Just as one example, uh,
Myers points to how much more likely you are to
be harmed by riding in a motor vehicle, by being
in a car accident, than by being a victim of
a terrorist attack. And yet terrorism is designed exactly to
make us afraid of threats in an outsized way. That's
sort of the purpose of it, right, It's to grip
(06:12):
your mind with a horrifying image that will force you
to behave irrationally in response. Yeah, kind of a forced
recategorization of a safe or reasonably safe place. Uh, and
of course that can be psychologically damaging. That's again
the whole point of terrorism. Yeah, exactly. Terrorism plays on
our psychology. And Myers gives an example of how this
can work exactly against our interests. So after the nine
(06:35):
eleven attacks in America, a lot of people were very
worried about terrorist attacks on airplanes, and as a result,
fewer people were flying. But Myers did the math on
the effects of this, given the statistics about the relative
safety of these travel methods, and he discovered that if
people in general flew an average of twenty percent less, and
didn't just, like, not travel anywhere, but instead made all
(06:58):
those same trips and covered the same amount of ground
by driving, then their risk of fatal injury actually went
up, because scheduled airline flights, as we know, are much
safer than surface travel by car per mile. So if
Americans flew twenty percent less in two thousand one and made the
same trips by car, we could expect about eight hundred
more people to die in auto fatalities that year.
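[Editor's note: here is a back-of-the-envelope sketch of that calculation in code. The per-mile fatality rates and the total mileage are illustrative assumptions in a plausible range, not Myers's actual inputs.]

```python
# Back-of-the-envelope sketch of the Myers-style calculation: shift twenty
# percent of air passenger-miles onto the road and compare expected deaths.
# All rates and totals below are illustrative assumptions, not sourced data.

DEATHS_PER_BILLION_MILES_ROAD = 7.3   # assumed road fatality rate
DEATHS_PER_BILLION_MILES_AIR = 0.07   # assumed commercial aviation rate
ANNUAL_AIR_MILES = 500e9              # assumed U.S. air passenger-miles/year

shifted_miles = ANNUAL_AIR_MILES * 0.20  # trips moved from air to road
extra_road_deaths = shifted_miles / 1e9 * DEATHS_PER_BILLION_MILES_ROAD
avoided_air_deaths = shifted_miles / 1e9 * DEATHS_PER_BILLION_MILES_AIR

print(f"Expected net additional deaths: "
      f"{extra_road_deaths - avoided_air_deaths:,.0f}")  # on the order of 700
```

Because the per-mile rates differ by roughly two orders of magnitude, almost any plausible inputs yield hundreds of extra road deaths, which is the shape of Myers's result.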
(07:21):
Myers reports that later there was a German psychologist named
Gerd Gigerenzer who checked this prediction against travel data for
the year of and the year following the attacks, and Gigerenzer
determined that an estimated one thousand, five hundred Americans died
in traffic accidents while trying to avoid the dangers of
air travel. So Myers writes, quote, long after nine eleven,
(07:44):
the terrorists were still killing us. And in a way
this is true, that they were able to present people
with horrifying images that made them behave irrationally, made them
worry about the wrong things, and actually led to more
dangerous decisions that hurt more people. And of course this
isn't limited merely to people's choice of how to travel,
right? There are obviously other ways that you can
(08:06):
say terrorism leads to negative consequences that actually harm the
people who are seeking security and all that. Political decisions
and stuff like that. Yeah, basically affecting at various
levels how you live your life. Yeah, trying to get
people to operate on the basis of terror management rather
than rational decision making. And people don't make good decisions
when they're terrified. And part of the reason is that
(08:29):
there's this principle exploited by terrorism. It's the cognitive bias
known as the availability heuristic. And we've talked about this
on the show before, but basically, what this means is
that items and events which you have
easily accessible in your memory are given undue weight in
our considerations. So when we try to think about what's dangerous,
(08:50):
what are the things we should worry about and protect
ourselves against? We actually end up not thinking about what
is statistically the most relevant threat, but we end up
thinking basically about what's the most scary. And these are
two very different things. Scary images stick in the mind,
and real threats very often go unnoticed. Myers writes, quote,
(09:11):
Thus we remember and fear disasters, tornadoes, air crashes, attacks
that kill people dramatically in bunches, while fearing too little
the threats that claim lives one by one. We hardly
notice the half million children dying quietly each year from rotavirus,
as Bill Gates once observed, the equivalent of four seven forty
(09:32):
sevens full of children every day. And we discount the
future and its future weapon of mass destruction, climate change.
And so Myers goes on to quote the American security
and privacy expert Bruce Schneier, who says, quote, if it's
in the news, don't worry about it. The very definition
of news is something that hardly ever happens. Now, of course,
(09:52):
that's not always the case, because of course you could
see news reports about things that are real things
to be concerned about. But I think what he's talking
about there is that if you detect you're in a
situation of if it bleeds, it leads, you should do
your best not to let the scary spectacle of what
you're seeing become overrepresented as a threat in your brain.
And of course it's also the case with false news,
(10:14):
which we've talked about recently, um, in which case this
is something that never happened, and it can end up
affecting the way we live our lives or govern ourselves.
I mean, think of such moral panics as, say, Satanic
panic, or, you know, to a lesser
degree, the poisoned candy myth, the idea that Halloween
candy is going to be tampered with and then handed
(10:37):
out to children. These are things that did
not happen but became pervasive ideas. I mean, especially in
the case of Satanic panic, this is the fiction that
there is or was an organized effort amongst secret occultists
to ritually abuse children, a fantasy that resulted in manufactured trauma,
ruined lives, and a legacy of superstitious persecution and
(11:00):
violence in parts of the world, including parts of Africa,
where you kind of see the echo effect of
Satanic panic in western nations, predominantly the United States
in the nineteen eighties. Yeah, but we should remember, of course,
while we talk about the myth, to talk about the
causal reality. I think a lot of people think what
was going on when people were coming up with stories
(11:20):
of Satanic ritual abuse in the eighties was that children
were being led in interviews and prompted by police and
therapists and people who had these pre existing ideas in
their heads, and children were just sort of going
along with what they perceived the adults to want
them to say. Right, and of course it wasn't
just the children, though the children were a major part of it.
(11:42):
You also had adults with these, uh, these
supposed memories that they were reclaiming through therapy, that were
revealing past satanic abuse. Um. But yeah, it all
amounted to a manufactured fear that so
many people bought into, and the thing that everyone was
afraid of did not exist and has never existed. You know,
(12:07):
it still floors me. There's an older episode
of Stuff to Blow Your Mind about it. Um, we ought
to link to that. It's a fascinating psychological event
in history that makes you realize yet again that
it's the vividness of the imagery and sort of
what the threat suggests as an idea that
captures people's fear, not so much the present reality of
(12:30):
the threat. Yeah, and it drives home just how
easy it is, then, um, psychologically, to have
an unbalanced fear of flying or dogs or terrorism or
what have you. And so for today, I wanted to
take this principle that we don't exactly fear the right
thing and try to redirect it in one
(12:52):
particular area, which is the kind of thing we're usually
worried about when it comes to AI and autonomous weaponry
and the dangers of technological weapons. I think we are
worried about the wrong stuff, or more specifically, we're worried
about stuff that we should be worried about, but we're
worried almost exclusively about the smaller threat rather than the
(13:16):
larger threat, the smaller, more dramatic one. And I think this
is something that came up when you were listing
the examples earlier: a threat that you could, given
the chance, physically run from. Fearing climate change, for instance,
how do you run from that? How do you hide
behind a bush? But a tornado, on the
other hand, you could conceivably run from into a bunker, right?
(13:39):
You can attach a lot of
anxiety to it, but then you could conceivably see it
and react in real time to its threat.
But if you're in a place where tornadoes don't happen
very often, say, and you are investing a lot of
energy and resources into preparing for a tornado threat instead
(13:59):
of investing that energy and resources into something that
will definitely affect you in the future, like, say, climate change,
which will affect most people wherever you are in one
way or another, especially with all of its secondary downstream effects,
if you're investing exclusively in the lesser threat, then you
are doing yourself and your future descendants a disservice. Absolutely.
(14:19):
All right, Well, let's take a quick break, and
when we come back, we will talk about different types
of technological weapon threats. All right,
we're back. So, Joe, I know that when
we talk about technological weapon threats, what we're really
talking about here is, of course, ED-209.
(14:39):
We're talking about the Terminator, right? We're talking about Chopping Mall,
the killer robots of Chopping Mall that may just go
out of control and start hunting our teenagers down in
the malls of the future. Chopping Mall is a fantastic
piece of eighties trash cinema. But no, I mean,
that is the thing that captures our mind, because you
(15:00):
can run from it. Yeah, there's a robot with a
gun on it, that is the standard image. Okay,
what is the AI autonomous weapon threat of the future?
It is a Terminator. It is those killer robots from
The Matrix. It is something that is an embodied robot
that is coming to, you know, point a gun at
you and make you do something, or hunt you down
(15:22):
for robot sport or something like that. And I want
to be clear that this is absolutely a real thing
worth discussing. Now, I don't know about Terminators,
but the idea of conventional autonomous weapons, I'm not
saying that is not worth discussing, because it is. It's
been the focus of a lot of international conversation about
the ethics of warfare. We're actively trying to figure out
(15:43):
how to regulate this, and there are, for example, United Nations
conventions discussing autonomous conventional weapons and what we should
do about them. Yeah, the ideas of ED two oh
nine, Terminators, what have you, even Chopping Mall. Sci fi
has always spoken to our anxieties and our fears, uh,
and our hopes about technology and where it's going
(16:06):
to get us, and given us vivid imagery to feed
our availability heuristic for these types of threats. Yeah. And
of course it gives us something that we can
run from, you know, we can
battle; you can fight back against the
machines in Chopping Mall. But to get serious for a minute,
I mean, we are more and more seeing actual semi
autonomous weapon technology being introduced into military arsenals around the world. Absolutely.
(16:32):
I mean, you'd have to be under a rock to
have avoided any coverage of the U.S.
military's use of drone strikes in recent decades. Uh, and
that's just the tip of the iceberg too.
I mean, I was reading a little bit about autonomous
weapons in Max Tegmark's recent book Life 3.0,
and he points out that you have,
(16:55):
in the U.S. military, the Phalanx system for
its Aegis class cruisers that automatically detects, tracks, and attacks
threats such as anti ship missiles and aircraft. You may
have seen images of these, like, big dome looking
devices with the weapon on the front. Oh yeah,
I mean, well, think about other types of automatic missile defense,
the Israeli Iron Dome system. Yeah. But in particular, though,
this one system, it's been in service since nineteen eighty,
still in service today, and it actually led to
the nineteen eighty-eight downing of Iran Air Flight six five five,
a civilian Iranian passenger jet, killing all two hundred and ninety people
on board and causing international outrage.
(17:40):
human in the loop on this system who made the error.
And that's that's one of the key distinctions between uh,
some of these contemporary and past autonomous weapons systems and
the possible future of autonomous weapons systems. Is there a
human being that is at some point weighing in on
the decision or having to make a final decision. And
in pretty much all conventional autonomous weapons systems I can
(18:03):
think of today, they're they're not fully autonomous there's semi autonomous,
there's still a human command structure, a human override, there's
still basically being controlled by humans, but they're making some
kind of uh, they've got some kind of automatic assistance function, right,
they'll be they'll be a drone pilot somewhere, or in
some of these models, I believe it would be you'll
(18:26):
have a drone pilot and maybe they're looking they're dealing
with with multiple drones, but there's still a human in
the loop on that particular weapons system. And just a
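[Editor's note: the distinction being drawn here maps onto a very simple piece of control flow, sketched below. This is a toy illustration with invented names, scores, and thresholds; no real weapons system exposes an interface like this.]

```python
# Toy sketch of the "human in the loop" distinction discussed above.
# All names, scores, and thresholds are invented for illustration only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Track:
    label: str
    threat_score: float  # output of the automatic detect/track stage

def engage_semi_autonomous(track: Track,
                           human_approves: Callable[[Track], bool]) -> bool:
    """Semi-autonomous: automation screens, but a human makes the final call."""
    if track.threat_score < 0.9:   # automatic filtering
        return False
    return human_approves(track)   # the human override / final decision

def engage_fully_autonomous(track: Track) -> bool:
    """Fully autonomous: the human confirmation step is simply removed."""
    return track.threat_score >= 0.9
```

In this cartoon, the entire treaty debate is over whether that one call out to a human is allowed to disappear.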
And just a reminder that Russia is believed to be testing its
first autonomous nuclear torpedo. The idea with this is that
it would be guided largely by AI to strike the
United States even if it lost all communications with Moscow.
(18:48):
A frightening weapon concept, to be sure, and it's
one that was originally proposed back in the nineteen
sixties by, um, Andrei Sakharov, and some analysts call
this a doomsday weapon, and with good reason. That's a
it's a terrifying concept. It certainly is. I mean, one
at least hopes that there's still a human command structure
there. But these types of weapons, all these
(19:11):
things we've been talking about, are part of this debate
that the international community is having
about what the ethics of autonomous weapons systems are, especially
if they're not just weapons systems used to, say, shoot
down incoming enemy missiles. I mean, you sort of
see the difference there, right? Like, you can imagine
a missile defense kind of thing is different from something that
(19:34):
will be aiming at people or aiming at places where
people could be, even though even a missile defense system could,
of course, as we've seen, go awry. Right, right.
I mean, it becomes increasingly more complicated when you start
thinking about, um, military engagements, say, in a city, if
you're having to deal with civilians and, or, uh,
(19:54):
you know, combatants that are not in uniform, that sort
of thing, or any variety of, uh, morally complex standoff situation.
How do you program for that? Yeah, and so that
is a very important question. But I wanted to focus
today on a potentially even more dangerous class of weapons
that can actually hurt many more people, and a class
(20:17):
of weapons that in fact already exist in some form today.
And we have a pretty clear vision of how much
more advanced and how much more dangerous they will continue
to get in the very near future, even without access
to heavy weapons or manufacturing capabilities. So I got the
idea to have this discussion when I read an article
in Undark magazine, which, Robert, have you ever read Undark? Yeah,
(20:41):
I don't think I'd read this one before. When you
mentioned the title, my mind immediately went to various horror
fiction publications. It's got a horror kind of name. I
think it's intentionally named after a type
of radium paint that was used back in the day,
I think the early twentieth century, before people realized what
the risks of it were. And that's sort of what
(21:01):
the magazine explores. It explores science and a lot of
good long form science writing, exploring the good and the
bad that science has to offer. But anyway, this article
was published in July of this year by a science journalist
named Jeremy Hsu, and it's called Forget Killer Robots: Autonomous
Weapons Are Already Online. So I was reading this and
I started thinking about this. I thought this would be
(21:23):
a good topic for us to talk about today. So Hsu
starts by discussing the ways that the problem of autonomous
weaponry has really captured people's attention worldwide in the conventional
weapons sense, and as we've been talking about, there's a
good reason for this. If we're gonna be increasingly using
robots and AI programs capable of delivering lethal force on
the battleground, or I guess to be less euphemistic, if
(21:45):
we're gonna have machines that can kill people without a
human taking responsibility and directly making the machine do it,
we really need to be having serious conversations about the
ethics of this technology. Should there be international treaties governing
what kinds of autonomous weapons we allow each other to
make and so forth? I mean, we've got international treaties
about nuclear weapons. Shouldn't there be international treaty agreements on
(22:09):
autonomous weapons? Oh yeah, I mean, it makes sense. And
to your point, we have treaties about nuclear weapons.
We have treaties about biological and chemical agents
as well. We have treaties about just the way that
one wages traditional warfare. There are various, you know, non nuclear,
non biological weapons that are also banned under international treaty.
(22:30):
So it makes sense that we would also
have treaties dealing with autonomous systems. Yeah, it totally makes
sense, and these conversations, as we've been saying, are ongoing.
Hsu writes about how the issue was discussed at a
United Nations convention in Geneva in April of this year,
and this was part of an ongoing series of
conversations within the UN's Convention on Certain Conventional Weapons,
(22:53):
which has a kind of clunky name. But, uh, that's
a useful kind of forum to be having right now.
But while we're having this debate about the role of
AI and autonomous programming in conventional deadly weapons, we really
tend to miss the fact that autonomous weapons are already
being used widely in warfare around the world. And we're
(23:14):
talking about cyber warfare, not autonomous guns and bombs and missiles,
but little pieces of computer code that autonomously attack systems
around the world or in targeted places. Plus, with software,
there's always the worry of replication. Yeah. No matter how
worried you are about AI controlled nuclear torpedoes, you
(23:35):
usually don't have to fret about them mating with each
other and producing torpedo offspring. Exactly. I mean,
it introduces a totally different dynamic of threat and a
certainly different dynamic of proliferation, and that's something that
makes it a whole different ball game when we're considering
how much of a threat it is and what kind
of limits we should put on it in international agreements.
(23:57):
So, much the same way that we debate how to
make air travel safer while people are dying by
the thousands on the road in traffic accidents, we're debating
autonomous conventional weapons, which does matter, while autonomous cyber weapons
are already here and already fighting, and potentially capable of
doing far more harm and killing more people than a
(24:18):
robot with a gun could. But I think the availability
heuristic is at play again here, because it's just
that the damage done by autonomous digital weapons, like these
computer viruses and worms and malicious bits of code,
the type of damage done by these is less visceral,
it's harder to picture, and it definitely has fewer movies
(24:39):
about it, and thus it doesn't get the advantage offered
by the availability heuristic that you get when you get pictures
of Terminators. That's right. When you think of an
infrastructure wrecking malicious program, uh, it's really hard to think
of any horror films that really line up
with that concept. The best I can come up with
is Stephen King's Maximum Overdrive.
(25:04):
So that is the most realistic picture of the future technology threat. Yeah,
it's strange how this works. Right, we were talking about
this a little before we started rolling. You know,
it's been said that any sufficiently advanced technology
is indistinguishable from magic, right? And you can point
to various bits of mythology, the myths of, say,
humans creating other rational beings. And in the past, these
(25:28):
were pure fantasy. Now, in the twenty first century,
it is far more realistic when we
look at models of AI and genetic engineering, etcetera. Uh, likewise,
Maximum Overdrive was ridiculous in the nineteen eighties, but as
we enter into this age that is increasingly defined by
research into autonomous vehicles or the Internet of Things, uh,
(25:51):
Maximum Overdrive is suddenly not so bonkers. It is crazy.
In the nineteen eighties, people might have said, okay, Maximum
Overdrive is a silly fantasy about the future technology threat,
and Terminator is a more realistic movie about the future technology
threat. And looking at the lay of the land today,
I'd say, I mean, obviously Maximum Overdrive is cartoony, but of
(26:13):
the general picture outlined by the two of them, Maximum
Overdrive might be the more realistic threat. Any sufficiently advanced
technology is indistinguishable from Maximum Overdrive. But if you're lost
right now, okay, why do we keep saying Maximum Overdrive? Maximum Overdrive, by
the way, is basically just a ridiculous eighties Stephen King
movie where, like, trucks and appliances and every electronic
(26:36):
thing, or sometimes just cars and stuff, start trying to
kill. Mostly cars and, like, really badass trucks come to
life and start killing things, but also, I think,
toaster ovens and lawn mowers and whatnot. So
what autonomous cyber weapons have already been deployed? We said
this is already a thing that exists. So in his article,
(26:58):
Hsu quotes Scott Borg, director and chief economist of the
US Cyber Consequences Unit, and Borg says, quote, malicious computer
programs that could be described as intelligent autonomous agents are
what steal people's data, build botnets, lock people out
of their systems until they pay a ransom, and do
most of the other work of cyber criminals. So he's
(27:19):
talking about the fact that generally, when there's cyber crime
going on, it's not a cyber criminal sitting at a
computer, like, manually doing stuff to you. They've created
a program that does it autonomously, so they just put it out,
they let it go, and it does its work, and
then they reap the benefits. But there have been plenty
(27:40):
of examples of this type of warfare in actual
international relations. So there's the Stuxnet worm. This is a
malicious computer worm that attacked computers controlling nuclear centrifuges
in Iran, and it's believed to have slowed down the
development of the Iranian nuclear program. I think this happened
around two thousand ten, and no one has admitted responsibility,
(28:01):
but it appears that this was a cyber weapon created
by the United States and Israel to try to prevent
Iran from acquiring nuclear weapons. Another example would be the
WannaCry ransomware worm, which shuts down computers and
demands payment of money before allowing the computers to be
made functional again. And WannaCry has caused real damage.
(28:21):
It attacked computer systems in hospitals of the UK National
Health Service, among a bunch of other important infrastructure machines.
Plus, I think you could reasonably
argue that there was, uh, psychological damage inflicted by that
attack as well. I mean, maybe not as much as,
like, a full blown, uh, cyber terrorism event, but it
(28:44):
certainly captured headlines, and it was the kind
of threat that's kind of central to the appeal
of a terroristic act. It makes everyone feel like they
could be a potential victim. Yeah, yeah, totally, and in
fact, for a lot of people, you could be a potential victim.
I mean, this is a thing that's worth being
concerned about and thinking about what defensive measures we could
(29:06):
put in place. Again, no one has plausibly claimed responsibility
for the WannaCry attack, but a lot of
analysts think signs point to it being a project of
the North Korean government. Now, these examples demonstrate that, like
any weapon, uh, an autonomous cyber weapon can be
used for various types of conflicts, right? Like in the
(29:28):
case of the Stuxnet worm. Whatever you think about
the U.S. and Israel, I think most people would
probably agree that they're glad somebody figured out a non
violent way to stop an authoritarian regime from making nuclear weapons.
But then again, the same technology could be used by
the same actors or by others to attack things that,
you know, would get less sympathy from people
(29:48):
around the world. Attack critical infrastructure anywhere: power, water, security, hospitals, telecommunications, media, banks
and financial systems, transportation. I mean, all these things
are increasingly connected now. Yeah, and we've only seen partial
use of these cyber weapons, certainly nothing of the magnitude
of a full scale cyber war, uh, something
(30:10):
that, you know, some futurists and cybersecurity analysts have
written about and discussed, like, what would this look like?
Uh, though many of them stress that
even these limited uses can build legitimacy for the
development of such weapons, as well as for the development
of national cyber response teams. And it's inevitable, too,
(30:30):
that such attacks will lean increasingly into machine learning and
the use of AI. So what we've seen so far
is sadly just the tip of the iceberg, unfortunately. Yeah,
and I want to talk more about that, especially the
role of machine learning and AI later on. But um,
one of the things I really want to stress is
that these days in the modern world, attacks on infrastructure
(30:55):
are not necessarily just inconveniences. It's not like thinking, oh, well,
I just lost internet for a day. It was actually
kind of nice because I went outside and took a walk. Yeah.
I mean, you might have lost power in your neighborhood before,
and you know, it was an inconvenience or something, but
you were fine. But at a large enough scale, attacks
like these on infrastructure, which are totally plausible in
(31:18):
the world we live in today, are pretty much guaranteed
to result in people facing serious material loss, injury, and death.
As an example for comparison, you could look at the
tragedy that's happened in Puerto Rico following the
landfall of Hurricane Maria in September of twenty seventeen. Now, when that happened,
when Puerto Rico was hit by the hurricane, it effectively
(31:41):
knocked out Puerto Rico's electrical power grid and temporarily put
a stop to lots of services in its aftermath, including electricity,
sewage treatment, some types of health and medical care, clean
tap water, and so forth. And in a civilization that's
built on the assumption of continued access to services
like power and clean water, sudden interruption of those services
(32:04):
is devastating and genuinely lethal. And while we can't
be sure of the exact number of people who died
as a result of the devastation caused by the hurricane,
there have been some estimates, and it seems like a
lot of people survived the initial storm but died in
the weeks and months afterwards from complications in the aftermath.
A lot of these were possibly due to interruptions in services, healthcare, disease,
(32:27):
and so forth. A Harvard study published in the New England
Journal of Medicine used survey data from households in Puerto
Rico to try to estimate the human impact,
and they found that among respondents there was a mortality
rate of about fourteen point three deaths per thousand persons
from September twentieth through the end of twenty seventeen, and this
(32:48):
yielded somewhere between seven hundred and ninety three and eight thousand,
four hundred and ninety-eight excess deaths above the normal rate for
this period. The mean of those numbers would be
like four thousand, six hundred people, and that was the midpoint
of the confidence interval. And the authors think the death
toll could actually be higher because the survey data is
(33:09):
contaminated by survivor bias. Right, people who died were not
able to respond to the survey. Now, we don't know
for sure if that estimate is accurate. There could be
flaws in the method. But if it's accurate, that's equivalent
to a sixty two percent increase in the mortality rate
as compared to the same period in twenty sixteen.
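[Editor's note: the arithmetic behind an excess-mortality estimate like that is simple enough to sketch. The population figure and baseline rate below are rough assumptions for illustration, not the study's exact inputs.]

```python
# Sketch of the excess-mortality arithmetic behind estimates like the
# Harvard/NEJM survey. Population and baseline rate are assumed values.
population = 3.3e6           # assumed population of Puerto Rico
observed_rate = 14.3 / 1000  # annualized post-storm mortality rate (survey)
baseline_rate = 8.8 / 1000   # assumed annualized rate for the 2016 period
days = 102                   # September 20 through December 31, 2017

excess_deaths = (observed_rate - baseline_rate) * population * days / 365
relative_increase = observed_rate / baseline_rate - 1

print(f"Estimated excess deaths: {excess_deaths:,.0f}")  # roughly 5,000
print(f"Relative increase: {relative_increase:.0%}")     # roughly 62%
```

The point of the sketch is that a modest-looking difference in rates, run across a few million people and a few months, adds up to thousands of deaths.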
And we should acknowledge there's been a lot of anger about the
way the US government handled the Hurricane Maria aftermath, with
charges that it basically didn't do enough to get essential
infrastructure and services back online fast enough, and that people
actually suffered and died as a result of not having
this infrastructure online, arguably in a manner that would have
been far different had this been, uh, you know, the
(33:50):
aftermath of a terrorist attack or something that was
a little more centralized, personified, or tied in
with these more pervasive fears. Yeah, there does appear
to be a weird psychology in the way societies respond
to threats, and that all different kinds of biases can
provide differential motivation in how much effort we put into
(34:12):
fixing the problem afterwards and helping people. So, yeah, it's
been a horrible tragedy there. But we should
think about this: this kind of tragedy is what happens
due to a random attack by the weather. One can
only wonder what it would look like if, say, infrastructure
was attacked not randomly by the weather, but by a
malicious party intentionally trying to do as much damage as
(34:35):
possible with digital weapons. Yeah, I mean, obviously a number
of possibilities instantly come to mind. Targeting infrastructure during the
very depths of winter, for example, right, or during key,
say, politically sensitive times, like during an election. Yeah. Um, yeah,
I know I've read that there's a belief that there
(34:56):
were Russian digital autonomous agent attacks against some
Ukrainian power infrastructure during times of political upheaval and civil unrest,
which, you know, only makes things worse, the timing
of those attacks. And a scary part is that we're
constantly connecting more and more devices to the Internet. Yeah,
like our refrigerators to the internet, cars, home appliances, medical devices,
(35:20):
and even medical implants. Every year the connectedness does not shrink,
it grows, and thus every year the world's connectedness and
vulnerability to cyber attacks, you could argue, becomes even greater,
almost as if we really want Maximum Overdrive to happen,
as if we were saying, you know, you know that movie,
that movie that came out in the eighties that was
(35:40):
so ridiculous? Uh, let's do that. Let's just go ahead
and put our refrigerator and our toaster on the Internet
and have them talk to each other. I mean, it's
hard to predict exactly what attacks would look like, though
there has been a lot of work on this, and
maybe if we revisit this topic in the future, we
could just more explicitly explore scenarios that have been talked
about by cybersecurity experts. Yeah, in particular, the
(36:02):
concept of full blown cyber warfare. What that would look
like if you had nation states actively and at least
semi openly engaging in attacks and counterattacks against each other,
with the possibility then of actual military attacks on top
of that. Yeah, totally. But I mean, one thing I
really want to drive home, if you take anything away
from this episode, I want it to be that attacks by
(36:24):
autonomous digital agents, just computer viruses basically, in the common understanding,
worms and stuff like that, things that don't have a
gun or a missile or an explosive attached to them,
can be just as dangerous, just as deadly, and probably
more so than conventional weapons can be. Absolutely. And remember,
another thing is that most of these attacks can, at
least in theory, function without ongoing human input or direction. Right,
(36:49):
it's the set it and forget it model of warfare.
Like, you make an autonomous digital agent, a computer virus,
a worm, whatever, to attack infrastructure out there, and you
might be able to design it so that you don't
need to go back and do anything to it later.
You don't need to maintain it or direct it; it functions
on its own. That's what autonomous means. Yeah, it's
(37:10):
basically the more perfect version of the nuclear torpedo that
I talked about earlier. Like, the idea there, of course,
is that it's out there, it's not communicating back, and
then it strikes its target, detonates,
and causes, you know, lasting radioactive damage to a particular
area, uh, and loss of life, um,
and damage to the infrastructure, etcetera. But we're talking about
(37:33):
digital agents. A digital agent would be able to achieve many of
those same goals, uh, but without the same risk of
components aging and the torpedo
itself eventually dying. Right. And the one thing about what
you just said does make me want to emphasize, I'm
not suggesting that cyber warfare is, say, worse than a nuclear
(37:54):
attack would be. I don't think that's true. I mean, obviously,
a full scale attack by conventional or nuclear weapons
would be a worse outcome than a cyber attack. But
I think a cyber attack is a threat
that we need to be even more concerned about, because
it can be very, very realistically destructive, and it's very
likely to happen, because it's already happening. I mean, it's
(38:17):
something that you can very easily see being deployed, much
more so than you can imagine, say, nuclear war between
currently nuclear armed countries. Right. Yeah. When we try to
imagine the anonymous use of a nuclear or even a
powerful enough conventional weapon, um, it's far less
(38:38):
likely compared to the anonymous use of a particularly volatile
cyber weapon, because we haven't really seen the former, and
we've definitely seen the latter. Exactly. So a lot
of this article that I mentioned earlier, that got me
thinking about this, by Jeremy Hsu, is simply highlighting
the fact that governments and international organizations are not having
(38:58):
enough conversation about guidelines for how to control the threat
of autonomous cyber weapons. We are having some international conversations
about what to do about autonomous conventional weapons, the robot
with a gun. We are not having enough conversations about
what to do about controlling autonomous cyber weapons. And these
autonomous cyber weapons are already here and being used. They're
(39:19):
easier to create and deploy, and potentially in many cases
more destructive, than autonomous conventional weapons. One of the experts
that Hsu quotes in his article is Kenneth Anderson, a
professor of law at American University
(39:41):
who specializes in national security. And Anderson says, quote, where is
the ban killer apps NGO advocacy campaign demanding a
sweeping total ban on the use, possession, transfer,
or development of cyber weapons, all the features found in
today's Stop Killer Robots campaign? I think that's a good question. Yeah,
I absolutely agree, and I mean I hope that that
(40:03):
someone is putting it together. I would very much, especially
after this episode, like to support such a campaign. Yeah.
I mean, we've got nuclear non proliferation agreements and stuff
like that. It seems more than reasonable to be trying
to work on a similar framework for cyber weapons, and
and nuclear proliferation treaties have to a large extent worked.
I mean, for starters, we have not had a nuclear war,
(40:27):
which some commentators have said is nothing short of a miracle. Um,
we've seen the nuclear stockpiles of of of the United
States and Russia uh deplete over the decades, and hopefully
that will remain the trend. Again, We've seen similar scenarios
with biological and chemical agents as well. So something could
be done here if we act and we actually uh,
(40:51):
you know, push for regulations to be made. You know, Robert,
one of the things you mentioned earlier is about the
difficulty in controlling the proliferation of digital agents as compared
to conventional weapons, right? Like, digital agents, a computer worm
or a virus, something like that, can replicate in the
wild in some scenarios. So I would say that this
(41:13):
is also a case where, just like with nuclear weapons,
it's like, if two countries with nuclear weapons go
to war, it's not just a problem for the people
in those two countries, it's a problem for the entire world.
And I would say that software based digital warfare agents
that operate on the Internet are similarly a problem for
the entire world, because you don't know potentially who they
(41:35):
could harm on the sidelines. Yeah, I mean, with
nuclear, biological, and chemical agents, obviously, we all share
an atmosphere, we all share a global environment, and when
we wage devastating war within that environment, we run the
risk of destabilizing, uh, everything and harming ourselves in the process.
And with digital technology, we have created another environment
(41:58):
that we have made ourselves dependent upon, uh,
and we run the risk of doing the same thing,
poisoning this new ocean that we've created, especially when we
consider the possibility of these autonomous agents becoming more adaptive.
And I think we should explore that after we take
a break. All right, we're back.
(42:20):
We're talking about the future, the dangers, the risks, and
how we should really handle the risk
and handle anxiety over the prospect of cyber warfare
in the future. So here's something I think is really
worth considering, and it's the convergence of cyber warfare and
cyber weapons with machine learning and artificial intelligence. Because we've
(42:42):
already got autonomous cyber weapons, these malicious bits of
software out there that, you know, enact warfare on the
infrastructure of opposing forces in the world, and we've got
machine learning and AI, and there's really no reason these
capabilities could not be combined. So this is a future
that combines the devastating capabilities of cyber warfare with the
(43:04):
attack dynamics of something like biological or germ warfare. This
is a future that should worry us: autonomous cyber weapons
that can learn, adapt, and change on their own. And
I think it's not hard to see how we could
potentially go down this road of developing dangerous AI cyber
weapons that alter themselves through machine learning and get out
of control. I thought of just a couple of scenarios
of control. I thought of just a couple of scenarios
that seem plausible to me, at least. One would be
cyber terrorism. You know, some forces are not rational actors
seeking to limit the harm they cause and preserve their interests.
But some people are just simply interested in causing harm
and chaos. Imagine how bad things could be if
somebody like the Unabomber had the computer skills to wage
(43:49):
lone wolf cyber warfare of this kind. But then I
also think about dangerous autonomous AI that begins as a
defensive measure to protect against cyber attacks. So most harmful
military technologies, Robert, I bet you would agree, most of
these harmful technologies and strategies in world history have not
come from people claiming to be developing offensive weapons to
(44:12):
maliciously attack unsuspecting victims. They've been developed under a mindset,
whether this is really objectively fair or not, of defense.
People think, like, I'm under threat, I need to do
something to protect myself. Exactly. I mean, this is
the arms race throughout history, right? Uh,
if the other side has a large slingshot, I need
an equally sized slingshot. I need something that
(44:34):
is a deterrent, otherwise they're just going to take advantage
of me. Yeah. So what looks like offensive, threatening
behavior from your perspective, from the other person's perspective is like, look,
I've just got to defend myself. Uh, so, usually by
the time you recognize you've been the victim of a
cyber attack, a lot of damage is already done. So
what if, in the future, we decide we need autonomous,
(44:57):
adaptive defensive cyber weapons to protect us against offensive cyber weapons,
something like an immune system for our infrastructure, the equivalent
of deployed white blood cells, you know, T cells and
B cells and so forth, to autonomously detect, hunt, and
kill malicious autonomous cyber weapons, the same way that white blood
(45:17):
cells in your body behave kind of like an adaptive,
independent organism within the body, hunting and killing autonomously in
the bloodstream? Well, that sounds great, Joe. But now you're
right back to Skynet. Like, this sounds exactly like
Skynet, except Skynet is just
the clunkier metaphor for what the future may become. Well,
it's not Terminators holding guns, but it is a distributed
(45:39):
defense network. At this point, you're talking about, like,
an anti virus virus, right? Or the possibility of
an immune system turning against itself, turning against its host,
which of course we see in the biological world. Well, absolutely, yeah,
that's exactly where I was going with that. So you've
got autoimmune diseases. If you've got an immune system, you
run the risk of the immune system, in, say, cases
(46:01):
like arthritis or type one diabetes or MS, misidentifying friendly
tissue as something that needs to be defended against and
attacking its own body, you know, turning parts of
the body into innocent victims. Except the nature of the
internet and the connected world means that if one system
develops the digital equivalent of an autoimmune disease, potentially anybody
(46:24):
could catch it.
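[Editor's note: the autoimmune failure mode being described is easy to caricature in a few lines. The process names and signature rules below are invented; this is a toy sketch, not any real defensive system.]

```python
# Toy sketch of a "digital immune system" and its autoimmune failure mode.
# Process names and threat signatures are invented for illustration only.
friendly_processes = {"pump_controller", "billing_daemon", "grid_monitor"}
running_processes = friendly_processes | {"cryptolocker_x"}

# The defensive agent quarantines anything matching a threat signature.
# "monitor" is an over-broad rule, the digital equivalent of an immune
# system misidentifying friendly tissue.
threat_signatures = {"cryptolocker", "monitor"}

quarantined = {proc for proc in running_processes
               if any(sig in proc for sig in threat_signatures)}

print(quarantined)  # {'cryptolocker_x', 'grid_monitor'}
# It caught the malware, but it also knocked friendly infrastructure
# offline, and on a network where the agent replicates, that bad
# signature spreads with it.
```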
I'd also pair this kind of scary scenario with our episode last year about neurosecurity, the
increasing vulnerabilities we're accumulating as we make the connections between
our digital services and our nervous systems more robust. Indeed,
the idea that you could have your brain implant
hacked or your pacemaker device hacked, etcetera. So anyway, just
(46:47):
entertaining these scenarios and thinking about the risks posed makes
me think: should we have international cyber warfare non proliferation
treaties the same way we've got nuclear non proliferation treaties?
I mean, absolutely. As we've been driving home, these
are legitimate threats, and therefore, you know, we should
be taking steps to prevent them from happening. As
(47:08):
far as what countries around the world
could do to protect themselves, I mean, I wonder, is
there an option other than just trying to revert to
a world of decreased connectedness? And would societies ever do
that without being forced to by some kind of tragedy?
Like, decreased connectedness of course would mean fewer rather
(47:29):
than more systems can be accessed by the Internet, where
you might have crucial systems for controlling infrastructure kept offline
or in isolated networks, where they're not plugged into the Internet,
and so it would be a lot harder to infect
them with some kind of autonomous digital weapon. Uh, though
I wonder if that's even possible. Would it take some
kind of visceral disaster that calls to mind images through
(47:53):
the availability heuristic to make people think this is worth doing?
And it's a good point, because, I mean, I've certainly
read predictions for the sort of digital future, both in
sci fi and just in general futurism, the idea that
the Internet will become, say, more national, more regional,
more layered, and indeed less worldwide. Um, uh,
(48:15):
the alternative, of course, is a continuation of what
we've already been doing, essentially building what cybersecurity expert Bruce
Schneier referred to as a worldwide computer. We're
building a worldwide computer, and it's, uh, susceptible to
attack at every level, from baby monitors to, uh,
nuclear power plants. And, you know, I would hope
(48:37):
that we will grow into
this globally connected world, that we will
be the people deserving of such a worldwide computer. But
if not, then, yeah, we deserve the regional model, I
guess. Well, I don't like the regional model. I mean,
I understand that some decrease in connectedness might be necessary
to prevent attacks, but then again, I like the world
(49:00):
connected model in terms of communication, I mean, all the
good stuff about connection between cultures. I don't think I
buy into the nationalist mindset that says we should only
be talking and interacting with people within our own
national boundaries or our own culture. That seems very limiting.
I mean, it's a wonderful thing to be able to
communicate across borders and with people all around the world.
(49:21):
That's something I love about what we get to do, right? Uh,
and so, I don't know, that sounds like a
horrible thing to do. But then again, I mean, I
wonder if there are ways to leave open the
good channels while preventing people from using digital exchange
to hurt one another. Well, I mean, part of this
goes back to our past discussions on just
(49:43):
the origins of the Internet. Some would argue
that one of the big problems is just, uh,
security issues with the Internet itself, and the idea that
you have this thing that was built as
essentially a private network for developers that has been
bloated out into this global system that it was really
never meant to be. So we need a new Internet,
(50:04):
we need a new human race. Uh, just for starters,
those are two things that would help. Yeah. Well, so
if there are any takeaways from this episode, I
would say, I think it's that people should understand
the relative severity of different types of autonomous technological weapon
threats, like that cyber warfare is not just an inconvenience.
(50:27):
It's not just like, oh darn, the power went out
for a day, or, oh, you know, there was a
DDoS attack on the website I wanted
to go to and it went down. These could be
really serious. This could be the warfare of the future,
and every bit as serious as conventional warfare. And, uh,
so that's worth considering, and so it's worth
promoting people who have good ways of thinking about this.
(50:48):
If you know of cybersecurity experts, the kind of
people who are doing the best thinking about this, coming
up with ways of thinking about defenses, especially, as we've
talked about, the kinds of defenses that don't lead to,
you know, a shutdown of international communication and
this worse-off world, shine a light on those people.
I want to know what our best options are. Who's
(51:10):
doing the best thinking about this right now? Yeah, let
us know. We'll do our part to shine our light
on those people as well. All right. So there you
have it. Um, as always, Stuff to Blow Your Mind
dot com, that is the mothership. That's where you'll find
this and other episodes of note. We've had a number
of them that have dealt with technology and warfare, uh,
and likewise, uh, there are a number of issues in
(51:31):
this episode that we could easily return to, such as
cyber warfare, even the more traditional autonomous weapon designs;
we could do multiple episodes on that as well. So,
as we said, I think one thing definitely worth exploring
would be the more specific nitty gritty of the scenarios
imagined by cyber warfare experts, like, what's the most
plausible thing that could happen, and how could it be prevented? Um,
(51:55):
so hey, yeah, check it out, Stuff to Blow Your
Mind dot com, uh, and you'll find links to our
various social media accounts there as well. And if you
want to support the show, rate and review us wherever
you have the ability to do so. Big thanks as
always to our wonderful audio producers Alex Williams and Tari Harrison.
If you would like to get in touch with us
with feedback about this episode or any other, to let
(52:16):
us know a topic you'd like us to cover in the future,
or just to send us your thoughts, to say hi,
let us know where you listen from, how you found
out about the show, any of that, you can always
email us at blow the mind at how stuff works
dot com.
(52:40):
For more on this and thousands of other topics, visit how stuff works dot com.