Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
In films like Terminator, autonomous weapons become an existential threat
to humanity. In reality, experts from around the world are
urging countries to back away from developing smart weapons. I'm
Jonathan Strickland, and this is TechStuff. It's a
terrifying thought. A weapon controlled by an artificially intelligent program
(00:25):
identifies targets on its own and unleashes deadly violence against them.
There's no human guiding the machine. It's effectively a gun
that chooses whom it will shoot, all on its own.
It's not a far-fetched idea. In an era where
computer programs can defeat humans at games like chess or Go,
or one where cars are getting closer to taking over
(00:47):
the wheel and driving on their own, it's not difficult
to imagine a robot designed specifically to perform the role
of soldiers. Militaries around the world already depend upon systems
that have semi autonomous capabilities, from tracking technologies to drones,
and many experts warn that the consequences of developing and deploying
this technology are dire. One potential scenario is that such
(01:09):
devices could and likely will occasionally fire upon people who
aren't actually targets. During a gathering at the United Nations
on November four, a group of concerned AI experts presented
a film that includes such a scenario. In the movie,
a fleet of drones flies down and fires upon a
classroom filled with students. It's a horrifying thought and one
(01:32):
that could become reality should the development of this technology continue,
warn these experts. Even if we assume everything always works
as intended, there are still huge ethical problems. For example,
if a nation has a sizeable army of robot soldiers,
whether in drone form or otherwise, would that nation be
more likely to enter into armed conflict with others? After all,
(01:55):
the stakes are lower for that country. Its soldiers are
made of metal and plastic, and while expensive, they can
be replaced. From the perspective of loss of life, the
consequences seem less dire with such an army. This isn't a new fear.
AI experts have been urging the United Nations to adopt
a ban on autonomous weapons for a few years now.
(02:15):
Back when the first serious discussions began, many people
may have felt the whole conversation was premature, but as
the fields of machine intelligence, machine learning, and robotics have
all advanced over the years, it's apparent that this future
isn't as far off as we may have thought just
a few years ago. There are more than sixty non
governmental organizations that have banded together to call for an
(02:38):
outright ban on autonomous weapon development. Unfortunately, it's not as
simple as taking the floor at the UN and
convincing everyone that this technology poses far more threats than
benefits to humanity. For now, the forum for these discussions
is a subcommittee called the Convention on Certain Conventional Weapons,
a terrible name. It's a consensus based forum, and each
(03:00):
nation has the power to veto any proposed ban. As
you might imagine, it's pretty difficult to convince more than
a hundred twenty countries to all agree on a single topic.
Earlier, a group of entrepreneurs, researchers, computer scientists, and
others signed an open letter to the UN on
the subject. In that letter, rather than asking for a
(03:21):
ban, the experts applauded the UN's decision to create
a new Group of Governmental Experts, or GGE,
in the field of lethal autonomous weapons systems. The group
also extended an invitation to the GGE to
ask for any sort of technical guidance it might need.
The letter stated, we entreat the high contracting parties participating
(03:41):
in the GGE to work hard at finding
means to prevent an arms race in these weapons, to
protect civilians from their misuse, and to avoid the destabilizing
effects of these technologies. The development of AI in general
has been likened to an arms race. You don't have
to attach artificial intelligence to a weapon to make it dangerous.
Plenty of people have expressed concerns about the possibility of
(04:04):
AI causing harm to humans in general, whether intentionally or otherwise.
Elon Musk, the entrepreneur behind Tesla Motors and SpaceX, has
expressed on numerous occasions that we should be cautious with
artificial intelligence. Meanwhile, Russian President Vladimir Putin has said that
the nations that advance AI will become the dominant powers
of the future world. Specifically, he said, whoever becomes the
(04:27):
leader in this sphere will become the ruler of the world.
It seems unlikely that the U N will outright ban
the development of autonomous weaponry. The chair of the first
Meeting of the Convention on Conventional Weapons is Ambassador
Amandeep Singh Gill of India, who has said it
would be very easy to just legislate a ban. Whatever
(04:47):
it is, let's just ban it. But I think that we,
as responsible actors in the international domain, we have to
be clear what it is that we are legislating on.
It's more likely that we will see the UN
try to create a framework in which they will outline
generally agreed-upon boundaries for what is legal and ethical
in the realm of autonomous weapons. To some, including myself,
(05:09):
it may seem a little weird to talk about what
is an ethical versus an unethical means to end another human life,
But that's an entirely different discussion. Maybe someday wars will
be fought by robots on both sides and the winner
will be whichever side has the most working units by
the end of the conflict. But between now and then,
we'll likely have many discussions about the cold, calculating, and
(05:30):
potentially terrifying future of warfare. To learn more about artificial
intelligence and robots, subscribe to the TechStuff podcast. We
explore more topics like these in much greater detail. New
episodes go live every Wednesday and Friday. I'll see you
again.