
September 20, 2016 · 55 mins

Are we destined for an AI arms race? Why would nations race to develop AI? And will we ever create superintelligent AI?


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Brought to you by Toyota. Let's go places. Welcome to Forward Thinking. Hey there, and welcome to Forward Thinking, the podcast that looks at the future and says, Einstein can't be classed as witless; he claimed atoms were the littlest. I'm

(00:22):
Jonathan Strickland, and I'm Joe McCormick, and our other host, Lauren Vogelbaum, is not with us today. She is in New York doing very exciting things. But today we're gonna be initiating part two of a two-part episode on the AI arms race. And the last episode, if you haven't heard that yet, you should go back and

(00:42):
listen to that one first, where we lay all the groundwork for the stuff we're gonna be talking about today. Last time, what did we do? We talked about some definitions of different concepts in artificial intelligence, how likely we think we are to achieve them, what ways they might be achieved, some potential stumbling blocks to achieving them. But in the end, we want to say today: okay,

(01:05):
let's assume that people create what's known as an artificial general intelligence. Is that going to lead to a worldwide arms race for intelligent machines? Boy, that spoiled it. No, we've got a lot to say about this. And if you listened to the last episode, you heard,

(01:25):
at the end, I mentioned that we have some interesting characters to talk about, people who have perspectives on this idea, and the first person we're going to chat about falls into that interesting-character category quite handily. Yeah, so this is just one example. There have been lots of people, actually, who have written about the idea of a looming AI arms race. But one example of a guy with

(01:48):
this prediction is a guy named Zoltan Istvan, who is a presidential candidate in the United States on the Transhumanist Party ticket. He's a transhumanist journalist and, I guess now, a politician, I suppose. Yes. Enthusiast seems too modest a word. He actually had on his website a suggestion

(02:11):
of a transhumanist bill of rights, and it included things like the right to extend one's life and stuff like that. But anyway, Istvan has a thesis that he published in a Vice Motherboard article, that was, I guess, a year or two ago, I think, and he said: you know what, artificial general intelligence

(02:35):
is not going to arise in the lab of a tech company like Google, or a university research program, or some genius kid's garage. AGI is going to arise through the work of state actors defending their interests. And so, speaking of coming superhuman artificial intelligence, Zoltan

(02:56):
writes, quote: politicians and military commanders around the world will want this superintelligent machine mind for their countries and defensive forces, and they'll want it exclusively. Using AI's potential power and might for national security strategy is more than obvious. It's essential to retain leadership in the future world. Inevitably,

(03:17):
a worldwide AI arms race is set to begin. So here's the thesis that we can explore for the rest of today's episode. Now, on the face of it, I think at least part of Zoltan's argument is wrong. Yeah, I don't necessarily think the AI arms race part is wrong, and I don't necessarily think that nations wouldn't want exclusive

(03:42):
use of some sort of superintelligent entity to work on their behalf. I think that's apparent, right? Like, it would be crazy to suggest otherwise, that if any given nation were to have the opportunity to have a superintelligent entity on its side, it would be insane to say, nah, we're good, go help someone else after

(04:05):
that. Then what's the part you disagree with? I disagree with his statement that AI is not going to show up in the lab of a tech company, but instead will arise because of the work of state actors. Yeah, okay. And I know that's not the only thing we're going to disagree with, but that's the first thing here. Yeah, that's the first thing I disagree with. Because, I don't know if you've

(04:25):
ever worked for any government agency, Joe, have you? Have you ever worked for... I've worked for a university, a state university. That's close. Yeah. Because, all right, so you know about bureaucracy. You know about the barriers that are there for progress. Well, I know about bureaucracy and barriers to progress in private industry. Sure, sure. It's just, in government,

(04:46):
I think it tends to be on a level that's exacerbated, prodigious even, right. And I mean, let's be fair, okay? Bureaucracy exists for a reason. Bureaucracy exists so that repeated tasks have a streamlined process to go through, right? So that way you have designed a system that

(05:08):
is really good at handling very specific things. The problem
is anything that's outside of that specificity doesn't fit the system,
and then getting any progress on that is laborious at
best or possibly impossible, depending upon the degree. Now, I
would say that companies are far more nimble and have

(05:30):
a greater incentive to invest in research and development of artificial intelligence on a scale that dwarfs anything that any government is able to do, unless you're talking about a massive government that just decides to turn its full attention on this problem. So, a king? Yes, yeah, some sort of

(05:53):
very authoritarian approach, saying: all right, we're just turning all of our... like, if you're playing the game Civilization, you switch all your cities so that they're just producing science, that kind of thing. Whereas, you know, here in the United States, for example, yes, we have organizations like DARPA, that's part of the Department of Defense. They are

(06:13):
very heavily into administering projects that guide research and development
in AI and other areas of technology. They're not the
ones who do it themselves. Private companies or even publicly
traded companies or research organizations do the work. So I
think maybe state actors will play a part in the
sense that there might be some funding. But I would

(06:35):
argue that we're far more likely to see the emergence
of something approaching general artificial intelligence from one of these companies,
not from like a specific state sponsored program, because the
companies are more nimble. They're able to respond and change
their course of action much more quickly than any government

(06:59):
agency can. There's a lot of inertia and momentum involved in government agencies, and trying to change course is very hard to do. You know, I would say, if you are a government and you're interested in funding AI research to win this AI arms race, to beat all the other countries and international competitors, to establish this

(07:21):
AGI that you can call upon to do your bidding, I think that the best strategy to go about that would probably be to set some kind of huge prize for it. And governments have done things like this before. They do, yeah, research prizes. So some gigantic pool of money; you say, the first person to produce

(07:42):
a computer that can meet these five requirements gets all of this. Yes. And so again, then you're still talking about an indirect kind of influence, right? Like, you've created the incentive, but it's not the agency that actually creates the AGI, which is what Zoltan has been arguing for. Also, in another article

(08:02):
that Zoltan wrote, there's a bit where he says, well, this would be unlike the space industry, where we see international cooperation, like on the International Space Station, which to me seems to be very convenient, in that he's ignoring the origin of the space race. Like, he's saying AI would be different from the space race,

(08:23):
because with the space race, or rather the space industry, because with the space industry we see this relatively peaceful cooperation between nations to advance our understanding of science and technology. Right. But the space race started as a branch of the Cold War. I mean, it was a product of the Cold War. If it weren't for this, this...

(08:46):
our space program, could you argue that it's essentially something that was, like, hastily cobbled together after Sputnik, because they were like, oh crap? Well, yeah. I mean, Sputnik did exactly what the Russians wanted it to do, which was to indicate to the United States: hey, we can build a rocket that can reach you. Right, that was really the Soviet Union's primary goal. Now, the

(09:08):
people who worked on the Sputnik project, they had their own individual motivations and their own individual goals for that project, but the stated goal was to say: we can launch a rocket that can reach you. You, America. And then America said, well, we have to make sure that we have the same capability.

(09:29):
And they funded it in a way where it was in the name of science, and there were legitimate scientists doing legitimate science under this program. But it wouldn't have existed without the Cold War, right, without this competition between two massive world powers. And so, really, I would argue that the reason why the space industry is

(09:51):
the way it is is because of this history that predates it, of competition and, well, again, a branch of that Cold War. It was acts of passive aggression, I guess. So, I think... I agree with the point that an AI arms race would be something we would

(10:13):
expect, should we reach that level of AI sophistication. And you could even argue that we're seeing an AI arms race right now; it's just, we're seeing it in the context of narrow AI as opposed to general AI. But a lot of the premises he sets up to support that, I disagree with. I think those would need to be revisited. I do agree with his

(10:36):
end conclusion, but his argument is not, I don't think, supportable; like, the actual premises of his argument. That is... Okay, well, what are some of the basic arguments, then, about whether the world is about to enter an artificial intelligence arms race? Well, like Zoltan says, he says,

(10:57):
an AI, a really useful AI, and the furthest extension we would put to that is the superintelligent general artificial intelligence, although I argue that's not necessary for it to be an issue. He says it would be such a valuable tool that no one would be able to ignore that possibility, like, to say, we're

(11:17):
not going to go down that road. Well, I'd agree that AI is a tool too powerful to ignore. I mean, in general, intelligence is the most valuable thing we have. Intelligence is the thing that, you know, not just empowers us, but makes human life worth living. And increasing your share of intelligence capability is probably

(11:40):
the most important thing you could do to ensure your future success. Right. And now he makes the argument that whichever nation out there creates the first superintelligent AGI wins the game. Like, imagine that geopolitical influences and forces are all reduced

(12:03):
to a board game. Whoever hits that last square first, the one who creates superintelligent AGI first, wins the whole thing automatically, hands down. I don't necessarily believe that, because I think that access to a superintelligent AGI doesn't immediately solve other issues that a nation must address, at least not so quickly that other

(12:28):
nations can't also react to it. Yeah, I think I'd agree with you there. I came up with a little thought experiment to sort of serve as an analogy. So imagine you've got two ships trying to win a race across the Atlantic Ocean. One ship is a nineteenth-century-style steamship, capable of about eight knots, and the

(12:48):
captain is assisted by a team of the world's most brilliant scientists with access to powerful supercomputers and all the tools they could want. The other ship is a modern ocean liner powered by geared steam turbines, capable of thirty knots, but it is captained by a rather dim guy named Todd, who has no scientists or supercomputers at his disposal at all. Poor Todd. Now, unfortunately,

(13:13):
I think the fact is, given what I've stated, Todd is still going to win the race. Right, unless Todd is so dumb as to cause damage to his ship. Yes. Then, assuming that Todd is at least of the level of intelligence necessary to follow a specific route. Yeah. So, yeah, it's good to have very smart people at your disposal

(13:35):
to help optimize your chances of winning. But if you're stuck with a ship that's more than three times slower, and that's how you start the race, you're just probably not gonna win. But I would say, maybe if the captain of the first ship is able to consult with the scientists and the supercomputers over a period of several years leading up to the race, well, then you

(13:55):
could maybe go in with a clear advantage and win it. But then again, if dim Todd knows this, he may get some scientists and supercomputers of his own, and then you've got, right, you've got an arms race on your hands. But I would say that just having this intelligence advantage does not automatically mean you immediately become the runaway winner for all time.
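To put rough numbers on that thought experiment, here is a quick sketch; the ~3,000-nautical-mile route length is our assumption for illustration, not a figure from the episode:

```python
# Crossing times for an assumed ~3,000 nautical mile Atlantic route.
distance_nmi = 3000
for ship, speed_knots in [("steamship with genius crew", 8), ("Todd's ocean liner", 30)]:
    hours = distance_nmi / speed_knots  # knots = nautical miles per hour
    print(f"{ship}: {hours:.0f} hours (~{hours / 24:.1f} days)")
# steamship: 375 hours (~15.6 days); liner: 100 hours (~4.2 days).
# Onboard brilliance can't close a 3.75x speed gap once the race is underway.
```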

(14:17):
Right, though it is a very strong advantage. And see, Zoltan's point, or the argument he makes, is that whichever nation develops it first will have a dominant hand from that point forward, will become the dominant force on the face of the planet. Yeah, I know. He's also of the opinion, I think, that if you have the power,

(14:37):
so if you have AGI on your side, you have the power to shut down and prevent other nations from achieving AGI. And I think, unless you're willing to take incredibly extreme measures, that's not necessarily the case either. It also presupposes that the AGI is going to have those capabilities, and it may

(14:59):
be that it can accomplish things that we cannot easily do on our own, but that doesn't necessarily mean it can magically accomplish anything, right? Like, I think he's giving it so much credit. And to be fair, we don't know, right? We don't have anything that's superintelligent, because we've not been capable of creating such a thing.

(15:20):
And maybe he's right. Maybe a superintelligent entity would be able to accomplish things that most of us would think of as being, if not impossible, impractical, because it would be so difficult to do. But I agree with you. I think that it would be something that would give a country a distinct advantage, but not guarantee what Zoltan

(15:40):
seems to think of as victory. He seems to, at least in his arguments, think that what we're heading toward is a world that's united, because whichever country comes up with a superintelligent AGI will be able to subjugate all other countries to

(16:01):
its will. And he thinks, gosh darn it, the good old US of A should be the country to do that. Ah. There are some elements to Zoltan's philosophy that I find particularly troubling. "Troubling" might be too strong a word. Well, I mean, to defend him, I don't think I agree with his philosophy either, but to defend what he says: I mean, he does say, at least, that what he has in

(16:24):
mind is a sort of benevolent hegemony. It's not, we're going to subjugate you and oppress you and you'll do what we tell you. I think what he has in mind is that, you know, it's going to be the Federation from Star Trek. Everybody's happy, and, you know, it's a utopian kind of thing. I think that's what he sees if the United States is the entity that creates this. But I think he also imagines a world that would

(16:48):
be in the realm of the dystopian future should some other country do it, like, for example, China or Russia. Not that he specifically names those countries, but those would be examples. Or North Korea. I think he actually does name North Korea. Now, I will admit, I can imagine, if North Korea were able to develop an artificial general intelligence long before anyone else, I think that could

(17:09):
have disastrous consequences for the planet. Absolutely. But I think, again, in those scenarios you don't get that Starfleet future; you instead get some other science fiction future, probably one written by Burgess or something. I also don't think that's very likely, though. Yeah. Now, we have other people that have weighed in on this issue, who

(17:34):
have some really interesting points, too. Yeah. One thing we read is by Anja Kaspersen, writing for the World Policy Journal blog about the idea of an AI arms race. And what does she say, Jonathan? She says that, first of all, AI is undeniably, intrinsically valuable. And yeah, I think there is no way to

(17:54):
dispute that. I think we both agree that AI is definitely valuable. Even at the extent that we have it right now, the narrow AI that is our reality, we see that it's valuable because it's doing stuff for us. I mean, that kind of defines value. So there's an incentive for countries to

(18:17):
pursue the development of AI. I think that's undeniable too. Now, even if those countries agree that weaponized AI is unethical, that's not enough of a deterrent, she says, to make sure that this does not, you know, blossom into a weaponized-AI future. Because, yeah, there are always gonna be rogue operators that may not be state-sponsored. You may have

(18:41):
independent rogue operators that are willing to either develop, or perhaps more likely appropriate, AI technology and then convert it for weaponized purposes, which creates pressure on all nations to consider developing weaponized AI, just as a preventive measure. So, in other words: Mr. President, we cannot allow an AI

(19:02):
gap! Exactly, exactly. There are a lot of Dr. Strangelove moments I had while researching this particular topic. You can't fight in here, this is the War Room! So, one of the examples that you could think of is that the United Nations agrees that weaponized AI is a bad idea. This is a hypothetical situation.

(19:23):
I actually have more to say about this in a bit. Yeah, well, we'll cover it in greater detail in a second. So, in this hypothetical situation, the UN says: all right, we as an international community agree that weaponized AI is a bad idea. It's gonna lead us down a destructive road. Once we head down there, there's no turning back, and the outcome is going to be negative. There's not a

(19:45):
way of envisioning this future where there's, like, a net positive outcome. So we're gonna agree: no weaponized AI. But then you've got those rogue operators, like a terrorist cell or organized crime, that see this as an opportunity to use AI in a weaponized way, and they're not beholden to this international agreement; they have no stake

(20:08):
in that, right. They're not worried about international sanctions from the UN. They're trying to accomplish whatever goals their particular organization has, and they're using whatever tools are at their disposal to do so. Yeah, I mean, you're invoking here the concept, and this is true generally, that even peace-loving people who are not

(20:28):
looking to start a fight don't tend to opt for unilateral disarmament. Yeah, because if you were to do that and there's that one person who hasn't agreed, then that one person just runs rampant over everybody else. Right. Like, if you had a group of kids... let's go Lord of the Flies. You have a group of kids, and each kid is given a whiffle ball bat,

(20:51):
and, just to keep this from being too violent, even for a hypothetical situation: each kid has a whiffle ball bat, and you tell all the kids, like, well, you can either keep that whiffle ball bat and wail on your classmates, or you can put the whiffle ball bat down and we'll have a fun activity. But you all need to put the whiffle ball bat down in order for that

(21:12):
to happen. And one by one they start dropping their whiffle ball bats, until there's that one bully in the back who's like: no, I know what my fun activity is, hit all the kids who don't have whiffle ball bats. That's the same sort of argument here.
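That bully dynamic is the standard game-theory story behind unilateral disarmament. A minimal sketch of the whiffle-ball-bat standoff as a two-player game; the payoff numbers are invented for illustration, and only their ordering matters:

```python
# Payoffs are (my score, their score) for each (my choice, their choice).
payoffs = {
    ("disarm", "disarm"): (3, 3),  # everyone gets the fun activity
    ("disarm", "arm"):    (0, 4),  # I get wailed on by the holdout
    ("arm",    "disarm"): (4, 0),
    ("arm",    "arm"):    (1, 1),  # tense mutual standoff
}

for theirs in ("disarm", "arm"):
    best = max(("disarm", "arm"), key=lambda mine: payoffs[(mine, theirs)][0])
    print(f"If the other side chooses to {theirs}, my best reply is to {best}")
# Prints "arm" both times: keeping the bat is a dominant strategy,
# so voluntary disarmament is unstable without outside enforcement.
```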
And, you know, I made a little funny note in here saying: next thing you know, we've got gangster drones shaking down local businesses for extortion money. Like, you know,

(21:35):
you got a nice pizza joint here; it'd be a shame if someone, I don't know, assimilated it into a growing technological operation, you know. So, you know, I think that's a legitimate argument, and I agree very much with those assertions: that if, in fact, you know,

(21:56):
there's a possibility to go down this road, then someone, whether state-sponsored or not, is going to do it. I would also argue we're already at a point where this can happen. You don't need AGI for this to be a problem. Narrow AI is enough. Exactly. So you don't need a general intelligence that can, you know,

(22:19):
be superintelligent and produce the next generation of technology and innovations and everything. You just need, like, a really good weapon that's smart. Yeah. Or even if you're going to go the semi-autonomous route, right? Like, you could argue we're there with drones. We're there; we're using technology that removes human operatives from conflict and replaces them

(22:45):
with robotic ones. We'll have a lot more to say about that concept a little bit further on, too, because that ties into one of the other big concerns about weaponized AI in general. Sure. Okay, so let's speculate a little more. Sure. If there is an AI arms race, what's it gonna look like? What are some scenarios that could actually happen? Okay. So we have the

(23:09):
state-sponsored approach, where countries decide not to eschew weaponized AI. Whether that ends up being a direct implementation of, like, a robotic soldier, or AI that's assisting in some military capacity for strategic or tactical planning, it doesn't really matter. You could have top-down AI or bottom-up AI. Exactly,

(23:32):
or both; you could have some integrated approach that's both. So one scenario is that you have established nations like the United States, Russia, China developing weaponized AI; weaponized robots is a good example. There are already examples, like drones, that are kind of in this category, whether they are autonomous or semi-

(23:53):
autonomous or remote-controlled; there are various types. Obviously, remote-controlled you wouldn't call AI, because it's under the operation of a human, but semi-autonomous or autonomous would fall into the AI category, even the narrow AI band. So these hypothetical future robots that are weaponized AI, they take the place of our soldiers, or they augment the presence of soldiers.

(24:18):
So let's say that you would normally be able to field a couple of thousand soldiers. Well, now you've got a couple of thousand soldiers and ten thousand robots. Or you get to a point where you just have the robotic soldiers and you don't put any humans in the field at all. It starts to sound like I'm war-gaming here, but this is serious business. Those robots then use narrow or general AI, depending upon

(24:41):
the level of sophistication we've reached in this hypothetical future, to identify targets and differentiate them from non-targets. They can enter hostile zones and make snap decisions without any emotional considerations. So they're very cold and calculating in that sense, because they are things and not people. They remove some of the checks that would be in place for a nation

(25:02):
before entering into conflict. This is one of those common arguments about, if we go down the road of weaponized AI, it makes war so easy. Yeah, war becomes such an easy decision, because, I mean, yes, because you are not committing human life. You're committing resources. So it still costs money and time and resources. But you're

(25:22):
not saying: well, we're not gonna enter into this because human lives will be lost, and we will lose citizens, and families will be affected. You're like, well, the robots... You're putting more layers of abstraction between yourself and the actual carnage, too. I mean, I'm sure you've read that old idea; I can't remember who it was who suggested it. But somebody suggested: you know what, you've got the person who follows

(25:44):
around the president with a briefcase for, you know, the nuclear launch materials that you would require in order to initiate a nuclear strike. And the suggestion was, you know, the president really shouldn't be able to just initiate the strike. The president, he or she, should have to kill that person physically, with their bare hands or with

(26:07):
a cleaver of some kind, in order to get the code out of some kind of implant in the person's body, in order to use it to launch the nukes. Because it shows a level of commitment that is necessary, considering the outcome of the choice. Yeah, it sounds insane. Like, that's insanity. Why would you ever institute a policy like that? But you're talking about a policy of

(26:30):
launching a nuclear strike, killing millions of people. Yeah, and it just shows you that, like, you're not really considering those deaths. You've just got them sort of a step away from you. Those are numbers, and they're numbers that exist far, far away from where you are. So, exactly, that abstraction is

(26:50):
what makes that action easier to undertake. When you make it a real thing that you have to confront, it's harder. And again, like you're saying, you know, with the robotic soldiers, you've got that one concern that's taken away, the idea that, well, our soldiers won't be put in danger. But then you should start asking the question: yeah, but

(27:12):
the other side, those are people. With these robots, we would be killing human beings. They're not our human beings, but they're human beings. And that's where, you know, depending upon the motivations and the psychology of the nations involved and the military personnel involved, you get very

(27:34):
different outcomes. So there is that argument that we would become more warlike in this future, because we would have fewer impediments to entering into a war. Which also kind of sounds like the Terminator scenario. If you remember, Terminators were robotic soldiers that were meant to be

(27:56):
in the service of humans, but because of the emergent general intelligence, the superintelligence of Skynet, they turned on humans. Well, they don't have to possess general AI to be a threat. They could have narrow AI. They could even be semi-autonomous, where they're consistently being guided by human intervention,

(28:18):
for it to still be an issue, to still be a very real escalating AI arms race problem. The superintelligent general AI, I think, makes it absolutely terrifying, but it's not necessary for it to be a problem, right? Like, that's just the degree of the scenario. So,

(28:40):
if you do get to that superintelligent general AI, then you start entering into other possible disasters, the classic science fiction one being that the robots themselves decide that they have to unite against humans in general, not just work for one group of humans against a different group

(29:01):
of humans. For some reason, I've never found the "robots decide to exterminate humanity" scenario all that plausible. I have found very destructive scenarios plausible. Like, I could see how artificial intelligence could cause catastrophic damage to human civilization, but I don't see it coming in the form of

(29:21):
"exterminate all humans." You have to have... you know, the robots would have to have some motivation, right? As opposed to just the directive. Like, I can easily imagine scenarios where robots, through a directive, pursue it in a way that was not anticipated by the people who gave it, where there are disastrous side effects

(29:43):
of their behavior. The classic example being: you've got a superintelligent computer, and you say, I want to bring about world peace, and it decides to eliminate all humans, because by eliminating all humans, you've eliminated the potential for conflict. Right.
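Here is a toy sketch of that classic example; it is our illustration, not anything from the episode's sources. Encode "bring about world peace" naively as "minimize potential conflicts," and a literal-minded optimizer finds the degenerate solution, because nothing in the objective says removing people is off the table:

```python
def potential_conflicts(population: int) -> int:
    # Proxy metric for "peace": every pair of people is one potential conflict.
    return population * (population - 1) // 2

def best_plan(population: int) -> int:
    # Naive optimizer: search over "how many people to remove" and pick
    # whatever minimizes the proxy. No term marks removal as unacceptable.
    return min(range(population + 1),
               key=lambda removed: potential_conflicts(population - removed))

print(best_plan(1000))  # -> 1000: "eliminate everyone" satisfies the stated goal perfectly
```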
That's the classic example. Oh gosh, I wish I had thought of that. It also falls very much into a realm that I love very dearly,

(30:05):
which is Dungeons and Dragons. Whenever your character receives a wish, you will see players spend hours trying to craft the perfect wish, to create a foolproof scenario where the dungeon master can't misinterpret the wish and create a terrible outcome for the players. I've never seen this,

(30:27):
so they're like genies or something? Yes. So you might say, like, you know, the Simpsons version: make me a sandwich, and it turns them into a sandwich, because it's misinterpreted the wish on purpose. But it's the same sort of thing with the general AI. Like, there are disaster scenarios where it's not that the general AI was

(30:48):
setting out to cause harm; it's just that, through the process of trying to achieve whatever goal was set, there were unintended consequences that were harmful. There are other scenarios, though. There's the one that I think is the most likely, which is that companies, some sort of corporations, maybe multiple corporations, develop more competent AI that

(31:10):
ends up having a negative impact through unintended consequences, kind of similar to what I was just saying. Governments will end up relying on these companies, contracting with them or even just buying off-the-shelf components to put toward military use, again either top-down or bottom-up, essentially saying, like: well, I know that this piece

(31:32):
of software was intended to do this, but with a little modification, we can have it do this other thing. We see this all the time in all sorts of technology industries. The VR industry is a great way of pointing out how this happens. In VR, before we got to some of the consumer headsets and controls that are out on the market now, you had researchers who

(31:55):
were appropriating video game technology, working with it, programming new interfaces for it, and using that as tools to develop VR applications. It was taking something that was in a related field but changing it, transforming it to do something else. We

(32:16):
could see that, and I think that's the more likely future: that we see various types of AI developed specifically to do some non-military function, but then get adapted into uses and applications that they were not necessarily intended for at the beginning. I think that's very likely,

(32:39):
simply because companies have a lot of incentive to continuously innovate in this space, and we're going to see a lot more rapid development there than we would in some state-sponsored program where, you know, you've got top-down pressure to work harder, do more innovation. And also, we could just see AI

(33:02):
cause massive amounts of harm just in its normal operation, not even when it's being used for something else. We've talked about this in the past, too. Like the stock-trading AIs, the various simple algorithms that just look at the stock market and make very small but very rapid transactions. Once you get a bunch of

(33:24):
those acting at the same time, you don't know how they affect one another, and there can be a cascading effect where you create market instability simply because of the collective activity of all of these. Whereas if maybe it was one AI, or one simple algorithm, acting on the stock market, no one would notice. It wouldn't really

(33:45):
make a big difference. But when you have thousands of them all working to try and make a profit for whichever company is employing them, there could be these unintended consequences leading to, like, a market crash. We've seen market crashes that last minutes and then return to normal, but it could be catastrophic.
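You can see the flavor of that cascade in a deliberately crude simulation; this is a sketch of the idea, not a model of real markets. One momentum-chasing bot is noise, but a thousand identical ones turn any random dip into a self-reinforcing slide:

```python
import random

def simulate(n_bots: int, steps: int = 500, start: float = 100.0) -> float:
    """Stylized market: each bot sells one unit whenever the price fell on
    the previous tick, so identical reactions compound into a cascade."""
    random.seed(42)  # reproducible run
    price, last = start, start
    for _ in range(steps):
        sellers = n_bots if price < last else 0  # every bot reacts the same way
        last = price
        price += random.gauss(0, 0.1) - 0.001 * sellers  # sell pressure pushes price down
        price = max(price, 0.0)
    return price

print(f"final price with 1 bot:     {simulate(1):.2f}")     # wanders near 100
print(f"final price with 1000 bots: {simulate(1000):.2f}")  # slides toward 0
```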

(34:08):
Then we have the rogue actors scenario. This is the one where the countries, the various nations of the world, have agreed: okay, we're gonna take a hands-off approach on weaponizing AI; we don't want to do that. The rogue actors scenario is that these other entities, whether they're terrorist cells, organized crime,

(34:30):
you know, maybe it's a group within a nation that wants to break free of whatever the government in power is, they might be willing to take those steps. One of the examples Kaspersen actually brought up is to imagine a weaponized drone with facial recognition capabilities flying

(34:51):
over a crowd, and it analyzes the crowd, and it looks for a specific person, and if it identifies that person, that person is a target. It's a weaponized drone, so it fires on the target, and you essentially have assassination by robot. I mean, that doesn't sound implausible to me at all. You could rig something up like

(35:12):
that today. It would not be terribly sophisticated, it would not be terribly reliable. But the other point that Kaspersen makes is, for rogue nations, and not even just rogue nations, rogue actors: they don't necessarily have a high threshold of reliability that they must meet in order for them to be comfortable investing in this technology or utilizing

(35:35):
this technology. Right? Like, if their device ends up killing someone mistakenly, well, that's unfortunate, but they don't necessarily care. They don't have anyone else to answer to apart from their own organization. And if their organization doesn't value that human's life, then,

(35:55):
from their perspective, there's no problem. Nations, that's different, right? You have a lot of people to answer to, and you can't just go killing people indiscriminately. There are huge problems with that, obviously. I mean, you know, everything...

(36:17):
Like, not just the obvious ethical issues, but then you've got things like the Geneva Convention to contend with, too. Right. So you've got this possibility of agents that are outside of state control that could potentially be using this technology, which therefore puts the

(36:40):
pressure on all the states to at least figure out a way to counteract that, if not keep pace with it. So, by the way, I also really love the word rogue, and also rouge, which some people will interchange because they make a typo. So I'm not talking about, yeah,

(37:04):
Moulin Rouge, or rouge actors. As an actor, I have worn rouge before; I'm not ashamed to admit it. I'm sure it looked good on you. Okay. So one more thing I want to point out is that very often the ideas that get tossed around when discussing the concept of an AI arms race are specifically about weaponized AI,

(37:26):
so it has military connotations. You're talking about military robots, or drones, or, at the top level, an AGI that's some kind of military commander or creating new weapons or something like that. I was thinking about how there could very much be an AI arms race that's not necessarily explicitly military in nature, or at least not at first. Right,

(37:52):
I mean, so, developing an AGI: if you have a superhuman general intelligence in a machine, that's not just useful for military advantage, it's useful for every possible competition. Right, right. Any application of intelligence, it would by definition be useful toward that. Right. So in market competition

(38:14):
and trade wars and stuff like that between countries, an AGI would be a supreme advantage. I mean, in any place you're looking to solve a problem or excel, the AGI is an advantage. So it almost seems like an overly confined way of envisioning the transformation of geopolitics by AI to just think

(38:37):
about AI robot soldiers and AI military commanders and stuff like that. Yeah, no, I agree entirely, which is why one of those scenarios I was talking about involves companies. Like, can you imagine the first company to provide a computer or a smartphone or a tablet or some piece

(38:57):
of personal electronics that incorporates AGI into it in some way, whether the device itself possesses it or it has an Internet link to a cloud-based AGI solution? That would be enormous, right? That would be a game changer. That would be a killer app, a killer product. And so, assuming

(39:18):
it's superintelligent... Yeah, well, even if it's not superintelligent, just imagine having a pretty intelligent person you could rely upon to do stuff on your behalf. Like, think of all the narrow AI applications we have right now, but you have all of them wrapped up in a single form factor. So you would have something

(39:40):
that could do everything from helping you keep track of your activity throughout the day, to making sure that you are on time for all your scheduled appointments, to following up with you about concerns, all this kind of stuff. Like that ideal personal assistant that really creates your reality, as far as how you interact with technology.

(40:02):
That would be a huge product. And you can bet that any company that works in consumer technology is looking to incorporate more and more sophisticated AI into their products. So there already is an AI arms race in that sense. It doesn't necessarily mean an arms race that

(40:25):
is going to lead toward violent confrontation. But there is this competition already in the market for companies to try and develop in that space, and maybe several generations down the line, that evolves into something that is more akin to a weapons arms race. But it doesn't necessarily have

(40:49):
to start out that way. That's one of the scary things about this, right? That we might not even intend this to be used in any way that would be remotely related to military applications or acts of hostility. But generations down the line, and by generations I'm talking about

(41:09):
technology generations, which happen way faster than human generations, that might end up being the case. All right. So what happens? What are the outcomes of the AI arms race? Like, how do we see this turning out? People stop making Terminator jokes on every article about AI. I mean,

(41:33):
they would certainly think twice about it before the AI kneecappers come by. I saw that joke you made. Real funny. You know what else is funny? These battle clamps. Now, they don't need bats, but, you know, sometimes you just got to do stuff for style. Okay. Now,

(41:53):
what are some things? So, one outcome is that we get this competitive approach. This is what we're seeing right now: everyone's competing to make more robust, useful AI. Eventually someone creates a superintelligent AI that's harmful to humans, either intentionally or otherwise,

(42:15):
and then we have to deal with that reality. Ultimately, dealing with that might mean that we don't succeed, that we as a species are eliminated. Because if you're talking about superintelligent AI, by definition it is smarter than we are. It's more capable than we are at processing information, and thus any strategy we might come up with to

(42:37):
try and take down that AI will be something it's already considered and figured out a way to counteract. At least, that's the general way we frame that. Like, if it is truly superintelligent, nothing we think of will be something it hasn't already considered, by definition, because it's more intelligent than we are capable of being, unless

(42:59):
we also become superintelligent. Yeah, well, then there's a totally different AI arms race, right? So that's one outcome. That's kind of depressing, you know, everyone dies. I don't really like that one; not a big fan. I mean, I love Shakespeare, but, you know, we don't have to have it be a tragedy. We can have it be a comedy. Maybe everyone gets married; we get married to robots. How about that one? That's a good outcome. So another outcome

(43:20):
is superintelligent AI ends up being impossible, or so impractical to create that we never really implement it. And in this scenario, we would discover either that we lack the capability to achieve superintelligent machines, or we could do it, but it requires so much energy and it's so resource-

(43:44):
heavy that no one's willing to actually go to the step of making it. Yeah, I think I stick by the conclusion we talked about in the last episode, that it is pretty much demonstrated that superintelligence is in principle possible, but that doesn't mean that it is in practice feasible. Right. It might be possible in principle,

(44:07):
but we just can't ever do it. Yeah, if it turned out to be something where we just don't have the physical resources to design the system, or maybe we do, but to do so would create a hardship because we're diverting so many resources from other things, or we're just too locked into patterns of stupidity to ever figure out how to do it. Right. Yeah,

(44:30):
then we never have to worry about the superintelligent part at all. But here's the problem with that: we don't need it to be superintelligent for it to be a problem. Right. You don't need a driverless car to be superintelligent for it to be able to deliver an explosive, for example, which is terrifying, but a reality that we could see. So then we would

(44:51):
enter an era of rapid development to both create and prevent harmful uses of AI. These would be human-guided, right, not necessarily AI-guided, because if we aren't able to make superhumanly intelligent, or superintelligent rather, AI, then we would mostly be relying on human ingenuity in that case.

(45:12):
Zoltan's outcome was that idea that we would become a united species under one government, one flag, one nation indivisible. His goal would be to have America be the dominant force that would then bring the rest of the world together. So, essentially, the US would annex everybody else, and then we would just have to worry about the possibility of the super-

(45:35):
intelligent AI that we had created turning against us, and not worry so much about each other. So, are you voting Zoltan for President? No, I don't think that's a realistic outcome either. Well, first of all, I don't think it's realistic that we would have unity on any global level. We don't even have

(45:57):
unity on a national level, so unity on a global level, to me, seems idealistic to the point of being implausible. But beyond that, I think, again, as I've said many times, AI being an issue will happen long before we get to any superintelligent type of AI or

(46:18):
general intelligence. Another outcome is that we create a series of guidelines to shape AI so that it works for the benefit of humans, not to their detriment, and that involves creating a set of practices that allow us to guide AI's development in a way that doesn't lead to the Terminator scenario. This is where you have the people arguing we need to come up with ways to instill

(46:39):
human values into machines. A problem, yeah. And of course this leads to other problems too, right? Like, who defines what human value is? Under whose definition do we create this AI? And, like, who writes the guidelines? And who's going to stop someone from developing AI that doesn't,

(47:02):
you know, abide by them? These are all tough questions I don't have answers to. Well, I guess the last major question is: is there any way to avoid an AI arms race? Can it be averted? Well, first we have to think about the UN, right? Yes,

(47:25):
the international agreements, the AI equivalent of the nuclear test ban treaties. So, the international community, they've done stuff like this before, with varying degrees of success. They've attempted to halt or limit the proliferation of nuclear weapons, and the acceleration of the twentieth-century nuclear arms race at least, by coming up with these international nuclear test ban treaties.

(47:48):
And these treaties banned various types of nuclear weapons testing among ratifying parties. For example, the Partial Test Ban Treaty of nineteen sixty-three was signed by nuclear powers like the US, the UK, and the Soviet Union, which mutually agreed to prohibit all testing of nuclear weapons except for underground tests. Right.

(48:11):
The same thing is true with chemical weapons; like, there was essentially international agreement that that is a no-go. Right. So there's a question, I guess: could we do something similar with AI? People recognize the potential for danger and make AI development illegal everywhere in the world.

(48:31):
First of all, I'd say this seems unenforceable even by international treaty. Like, even assuming you could get every country in the world to sign on as a party to that treaty, which I don't necessarily think you could. For example, the Comprehensive Nuclear Test Ban Treaty of nineteen ninety-six, which attempted to extend, you know, the scope of the

(48:52):
nuclear test ban treaties that existed: they couldn't get that signed or ratified by key countries. And by my estimation, I think it's much easier to hide the development of artificial intelligence than it is to hide the testing of nuclear weapons. You can probably perform key AI research in ways that can be contained in a single nondescript warehouse.

(49:15):
It's just not that hard. There's no radiological evidence to sniff out with an atmospheric collection aircraft or anything like that. There'd be no seismographic signature, all these things we can use to check and see if people are performing nuclear tests they're not supposed to be doing. You could have decentralized development. Yes! How would you do

(49:35):
anything like that with AI? So, in the end, I think even if you were able to get all the member states of the UN to sign a comprehensive AI test ban treaty, there would be no way whatsoever to enforce it. It would just be paper. Right. And we have to keep in mind that, like we said before, there are a lot of potential uses for AGI.

(49:56):
That's scary, right. But do we even want this ban? I mean, let's not forget how useful AI could be if it's AGI. Its promise: this almost certainly yields major breakthroughs in medicine, energy, all kinds of fields that would, if implemented properly, improve human life, increase wealth for the whole world, and protect our environment. But I guess the question...

(50:20):
You know, that key phrase, "if implemented properly," is a big if. Okay. So, would there be any other ways to prevent it? I would say nothing that I can think of. I mean, my intuitive ruling is that if AGI is technologically feasible, a different question than whether it's possible in principle, which I agree that it is, it's just not possible to prevent the development

(50:43):
of it. You can't put anything in place to stop people from doing it, unless you're talking about something extreme, like, well, wipe out the whole human race or destroy all computers or something. Right, like the human race as an entire species gets wiped out before we have the ability to create an AGI, in which case that's, like, trying to prevent... I mean,

(51:06):
or that some other thing caused it, and the AGI wasn't even remotely connected to the reason for the extermination of the species. Assuming human progress continues as normal without major catastrophe, if there's a way for us to do it, then we're going to do it. Someone will do it. Yeah. I mean, if it's possible to create it,

(51:30):
someone will create it, because that's what we are as a species. It's not necessarily that someone would create it in an effort to do malicious activity, but someone will do it. If it's possible, then we will get around to doing it sooner or later. The question is: how hard is it? Right? Like, if

(51:51):
it's so hard that it's possible, but we're not going to be there for another century, well, we can have this conversation, but it's not really going to matter until a century from now. It's still going to matter, but it will be a hundred years later. This whole discussion does seem to make the control problem seem of imminent importance. Like, even if the development of AGI

(52:14):
is not imminent, which I don't think it is, I don't get the feeling that it's going to happen anytime real soon, we still do need to be thinking about these control problems. Like, what do you do to, you know, limit the unintended consequences of AGI, to understand how we're

(52:36):
going to use it and prevent people from using it in a destructive way. They're really hard problems, but we need to be working on them. I think what we're gonna see is a new dawn of electromagnetic pulse technology to take out robo-soldiers and computers. I mean, I'm kind of making a joke,

(52:58):
but I'm kind of not, because if you get to a point where you're thinking, well, I can't stop anyone from trying to pursue this line of development, and even if it takes a long time, people are going to go after it, what do I do in order to prepare for the eventuality that someone is incorporating

(53:21):
this on some scale, whether it's a bunch of narrow AI implementations, a general AI implementation, or superintelligent AGI? You gotta have a plan in place. I've kind of made a joke about EMPs, but I think that's gonna be a big thing. I really do think that

(53:42):
that technology would be an important element, because, if nothing else, you could shut stuff down, anything that's electronic. You could shut it down long enough for you to have a different plan go into place to potentially take care of things. Obviously, if you get to a point where it's a distributed intelligence, that's a lot

(54:04):
harder, because there's no place to aim at. It's everywhere. And at that point, we're having a totally different conversation. Maybe that's the point where humans and machines merge, and we truly do become transhumanist, and Zoltan rules over us all. Well said. Yeah, and I know we had a lot of fun with this, but

(54:24):
I mean, it is one of those things where you've got to kind of stretch your mind to think about, because obviously we're talking about situations that aren't in the realm of possibility right now. But we can see kind of where we're headed, so it's easy to imagine it, even if it's not something that is pressing right now.

(54:45):
I'm kind of curious what you guys think. If you have any thoughts on the subject of the AI arms race, you should write in and let us know. Our email address is FWThinking at HowStuffWorks dot com, or drop us a line on Facebook or Twitter. You can go to Facebook, search FW Thinking, and our profile will pop up. You can leave us a message or send us a tweet over on the Twitters. Our handle is FW

(55:06):
Thinking, and we will talk to you again really soon. For more on this topic and the future of technology, visit forwardthinking dot com. Brought to you by Toyota,

(55:32):
Let's Go Places.


Hosts And Creators

Jonathan Strickland

Joe McCormick

Lauren Vogelbaum
