
November 16, 2018 41 mins

An artificial intelligence capable of improving itself runs the risk of growing intelligent beyond any human capacity and outside of our control. Josh explains why a superintelligent AI that we haven’t planned for would be extremely bad for humankind. (Original score by Point Lobo.)

Interviewees: Nick Bostrom, Oxford University philosopher and founder of the Future of Humanity Institute; David Pearce, philosopher and co-founder of the World Transhumanist Association (Humanity+); Sebastian Farquhar, Oxford University philosopher.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
It was a cold, windy day in January nineteen seventy
nine when the robots took their first human life. It
happened in Flat Rock, Michigan, about twenty miles down the
interstate from Detroit, at the Ford plant there. Robert Williams
was twenty five. He was a Ford worker and one
of the people who oversaw the robotic arm that was

(00:23):
designed to retrieve parts from bins in the storage room
and place them in carts that carried them out to the
humans on the assembly line. But the robot was malfunctioning
that day, and aware of the slowdown it was creating
on the line, Robert Williams went to grab the parts himself.
While Williams was reaching into a bin, the one ton
robotic arm swung into that same bin. The robot didn't

(00:45):
have any alarms to warn Williams it was nearby. It
didn't have any sensors to tell it a human was
in its path. It only had the intelligence to execute
its commands to retrieve and place auto parts. The robot
struck Williams' head with such force it killed him instantly.
It was thirty minutes before anyone came to look for
Robert Williams. During that time, the robot continued to slowly

(01:09):
do its work while Williams lay dead on the parts
room floor. The death of Robert Williams happened during a
weird time for AI. The public at large still felt
unsure about the machines that they were increasingly living and
working among. Hollywood could still rely on the trope of
our machines running amok and ruining the future for humanity.

(01:29):
Both WarGames and The Terminator would be released in
the next five years. But within the field that was
trying to actually produce those machines that may or may
not run amok in the future, there was a growing
crisis of confidence. For decades, AI researchers had been making
grand but fruitless public pronouncements about advancements in the field.
As early as nineteen fifty six, when a group of

(01:51):
artificial intelligence pioneers met at Dartmouth, the researchers wrote that
they expected to have all the major kinks worked out
of AI by the end of this semester, and the
predictions kept up from there. So you can understand how
the public came to believe that robots that were smarter
than humans were just around the corner, but AI never
managed to produce the results expected from it, and by

(02:14):
the late nineteen eighties the field retreated into itself. Funding
dried up, candidates looked for careers in other fields. The
research was pushed to the fringe. It was an AI
winter. The public moved on too. The Terminator was replaced
by Johnny Five. In the film Maximum Overdrive, our

(02:34):
machines turn against us, but it's the result of a
magical comet, not from the work of scientists. We lost
our fear of our machines. Recently, quietly, the field of
AI has begun to move past the old barriers that
once held it back. Gone are the grand pronouncements. Today's researchers,
tempered by the memory of their predecessor's public failures, are

(02:56):
more likely to downplay progress in the field, and from
the new AI we have a clearer picture of the
existential risks it poses than we ever had before. The
AI we may face in the future will be subtler
and vastly more difficult to overcome than a cyborg with
a shotgun. In hindsight, it was the path toward machine

(03:21):
learning that the early AI researchers chose that led them
to a dead end. Let's say you want to build
a machine that sorts red balls from green balls. First
you have to explain what a ball is. Well first, really,
you have to have a general understanding of what makes
a ball a ball, which is easier said than done.
Try explaining a ball to someone without using terms

(03:42):
that one would have to already be familiar with, like sphere,
round, or circle. Once you have that figured out, you
then have to translate that logic and those rules into code,
the language of machines, ones and zeros, if-thens. Now
you have to do the same thing with the concept
of the color red and then the color green,

(04:03):
so that your machine can distinguish between red balls and
green balls. And let's not forget that you have to
program it to distinguish in the first place. It's not
like your machine comes preloaded with distinguishing software. You have
to write that too. Since you're making a sorting machine,
you have to write code that shows it how to
manipulate another machine, your robot sorter, to let it touch

(04:23):
the physical world. And once you have your machine up
and running and working smoothly separating red balls from green ones,
what happens when a yellow ball shows up? Things like
this do happen from time to time in real life.
What does your machine do then?
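To make that concrete, here is a minimal sketch in Python of the kind of hand-written, rule-based program being described. The color thresholds and function names are invented for illustration, not taken from any real system; the point is that every rule has to be spelled out in advance, so the unanticipated yellow ball simply falls through the cracks.

    # A hand-coded, rule-based sorter: every rule must be written out in advance.
    def classify_color(r, g, b):
        # Hypothetical thresholds chosen by a human programmer.
        if r > 200 and g < 100 and b < 100:
            return "red"
        if g > 200 and r < 100 and b < 100:
            return "green"
        return "unknown"  # a yellow ball (high red AND high green) lands here

    def sort_ball(r, g, b):
        color = classify_color(r, g, b)
        if color == "red":
            return "left bin"
        if color == "green":
            return "right bin"
        # The programmer never wrote a rule for this case, so the machine gives up.
        raise ValueError("no rule for this ball")

    print(sort_ball(230, 40, 30))   # a red ball goes to the left bin
    print(sort_ball(240, 220, 40))  # a yellow ball raises an error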
Despite the incredible technical difficulties it faced, the field of
artificial intelligence did have a lot of success at building machines that could think

(04:45):
very well within narrow domains. One program called Deep Blue
beat the reigning human chess champion Garry Kasparov in six
games at a match in nineteen ninety seven. To be certain, the intellectual
abilities required by chess are a vast improvement over those
required to select a red ball from a green one.
But both of those programs share a common problem. They

(05:07):
only know how to do one thing. The goal of
AI has never been to just build machines that can
beat humans at chess. In fact, chess has always been
used as a way to test new models of machine learning,
and while there is definitely a use for a machine that
can sort one thing from another, the ultimate goal of
AI is to build a machine with general intelligence, like

(05:27):
a human has. To be good at chess and only chess
is to be a machine. To be good at chess,
good at doing taxes, good at speaking Spanish, good at
picking out apple pie recipes begins to approach the
ballpark of being human. So this is what early AI
research ran up against. Once you've taught the AI how
to play chess, you still have to teach it what

(05:48):
constitutes a good apple pie recipe, and then tax laws
in Spanish, and then you still have the rest of
the world to teach it all the objects, rules, and
concepts that make up the fabric of our reality. And
for each of those, you have to break it down
to its logical essence and then translate that essence into code,
and then work through all the kinks. And then once
you've done this, once you've taught it absolutely everything there

(06:11):
is in the universe, you have to teach the AI
all the ways these things interconnect. Just the thought of
this is overwhelming. Current researchers in the field of
AI refer to the work their predecessors did as GOFAI,
good old fashioned AI. It's meant to evoke images of
malfunctioning robots, their heads spinning wildly as smoke pours from them.

(06:31):
It's meant to establish a line between the AI research
of yesterday and the AI research of today. But yesterday
wasn't so long ago. Probably the brightest line dividing old
and new in the field of AI comes around two
thousand six. For about a decade prior to that, Geoffrey Hinton,
one of the skeleton crew of researchers working through the

(06:52):
AI winter, had been tinkering with artificial neural networks,
an old AI concept first developed in the nineteen forties.
The neural nets didn't work back then, and they didn't
work terribly much better in the nineties. But by the
mid two thousands, the Internet had become a substantial force
in developing this type of AI. All of those images
uploaded to Google, all that video uploaded to YouTube, the

(07:14):
Internet became a vast repository of data that could be
used to train artificial neural networks in very broad strokes.
Neural nets are algorithms that are made up of individual
units that behave somewhat like the neurons in the human brain.
These units are interconnected and they make up layers. As
information passes from lower layers to higher ones, and whatever

(07:37):
input has passed through the neural net is analyzed in
increasing complexity. Take for example, the picture of a cat.
At the lowest layer, the individual units each specialize in
recognizing some very abstract part of a cat picture. So
one will specialize in noticing shadows or shading, and another
will specialize in recognizing angles. And these individual units give

(07:59):
a confidence estimate that what they're seeing is
the thing that they specialize in. So that lower layer
is stimulated to transmit to the next higher layer, which
specializes in recognizing more sophisticated parts. The units in the
second layer scan the shadows and the angles that the
lower layer found, and it recognizes them as lines and curves.

(08:19):
The second layer transmits to the third layer, which recognizes
those lines and curves as whiskers, eyes, and ears, and
it transmits to the next layer, which recognizes those features
as a cat.
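As a rough sketch of that layered picture, here is a toy network in Python. The weights and confidences are made-up numbers rather than anything from a real system; the point is only that each layer turns the confidences reported by the layer below it into new, more abstract confidences of its own.

    import math

    def squash(x):
        # Turn a weighted sum into a confidence between 0 and 1.
        return 1.0 / (1.0 + math.exp(-x))

    def layer(inputs, units):
        # Each unit weights every confidence from the layer below
        # and reports its own confidence.
        return [squash(sum(w * x for w, x in zip(unit, inputs))) for unit in units]

    # Hypothetical confidences from the lowest layer: shading and angles.
    lowest = [0.9, 0.8]
    # Made-up weights for the higher layers: lines and curves,
    # then whiskers, eyes, and ears, then a single "cat" unit.
    lines_curves = layer(lowest, [[1.5, 0.5], [0.4, 1.6]])
    features = layer(lines_curves, [[1.2, 0.3], [0.7, 0.9], [0.2, 1.4]])
    cat = layer(features, [[1.0, 1.0, 1.0]])[0]
    print("confidence this is a cat:", round(cat, 2))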
Neural nets don't hit perfect accuracy, but they work pretty
well. The problem is we don't really understand how they work. The thing about neural nets is

(08:40):
that they learn on their own. Humans don't act as
creator gods who code the rules of the universe for
them like in the old days. Instead, we act more
as trainers. And to train a neural net, you expose
it to tons of data on whatever it is you
wanted to learn. You can train them to recognize pictures
of cats by showing them millions of pictures of cats.
You can train them on natural languages by exposing them

(09:02):
to thousands of hours of people talking. You can train
them to do just about anything. So long as you
have a robust enough data set. Neural nets find patterns
in all of this data, and within those patterns, they
decide for themselves what about English makes English English, or
what makes a cat picture a picture of a cat.
We don't have to teach them anything. In addition to

(09:23):
self directed learning, what makes this type of algorithm so
useful is its ability to self correct to get better
at learning. If researchers show a neural net a picture
of a fox and the AI says it's a cat,
the researchers can tell the neural net it's wrong. The
algorithm will go back over its millions of connections and
fine tune them, adjusting the weight it gives each unit

(09:43):
so that in the future it will be able to
better distinguish a cat from a fox. It does this, too,
without any help or guidance from humans. We just tell
the AI that it got it wrong.
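Here is a minimal sketch of that correction step, again with invented numbers rather than anything from a real system. A single toy unit decides between cat and fox; when it is told the right answer, it nudges its own weights in the direction that would have made that answer more likely.

    import math

    def squash(x):
        return 1.0 / (1.0 + math.exp(-x))

    # One toy unit deciding cat (1) versus fox (0) from two made-up features.
    weights = [0.3, -0.2]

    def predict(features):
        return squash(sum(w * f for w, f in zip(weights, features)))

    def tell_it_was_wrong(features, correct_label, learning_rate=0.5):
        # The only feedback is the correct answer; the unit adjusts its own weights.
        error = correct_label - predict(features)
        for i, f in enumerate(features):
            weights[i] += learning_rate * error * f

    fox_photo = [0.9, 0.7]           # hypothetical feature values
    print(predict(fox_photo))        # leans slightly toward "cat"
    tell_it_was_wrong(fox_photo, 0)  # researcher: that's a fox, not a cat
    print(predict(fox_photo))        # confidence in "cat" drops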
The trouble is, we don't really know how neural nets
do what they do. We just know they work. This is what's called being opaque.
We can't see inside the thought processes of our AI,

(10:04):
which makes artificial neural nets black boxes, which makes some
people nervous. With the black box, we add input and
receive output, but what happens in between is a mystery.
Kind of like when you put a quarter into a
gumball machine. Quarter goes in, gumball comes out. The difference
is that gumball machines aren't in any position to take

(10:24):
control of our world from us. And if you were
curious enough, you could open up a gumball machine and
look inside to see how it works. With a neural net,
cracking open the algorithm doesn't help. The machine learns in
its own way, not following any procedures we humans have
taught it. So when we examine a neural net, what
we see doesn't explain anything to us. We're already beginning

(10:47):
to see signs of this opaqueness in real life as
reports come in from the field. A neural net that
Facebook trained to negotiate developed its own language that apparently
works rather well in negotiations, but doesn't make any sense
to humans. Here's a transcript from a conversation between agent
A and agent B, Alice and Bob. I can
I I everything else dot dot dot dot dot dot

(11:09):
dot dot dot dot dot dot dot dot balls have
zero to me, to me, to me, to me, to me,
to me, to me, to meet you, I everything else
dot dot dot dot dot dot dot dot dot dot
dot dot dot dot balls have it ball to me,
to me, to me, to me, to me, to me,
to me, I I can I I I everything else
dot dot dot dot dot dot dot dot dot dot
dot dot dot. Another algorithm, called Deep Patient, was trained

(11:29):
on the medical history of over seven hundred thousand people,
twelve years worth of patient records from Mount Sinai Hospital
in New York. It became better than human doctors at
predicting whether a patient would develop any of a range of ninety three
different illnesses within a year. One of those illnesses is schizophrenia.
We humans have a difficult time diagnosing schizophrenia before the

(11:49):
patient suffers their first psychotic break, but Deep Patient has
proven capable of diagnosing the mental illness before then. The
researchers have no idea what patterns the algorithm is seeing
in the data. They just know it's right. With astonishing quickness,
the field of AI has been brought out of its
winter by neural nets. Almost overnight, there was a noticeable

(12:11):
improvement in the reliability of the machines that do work
for us. Computers got better at recommending movies. They got
better at creating molecular models to search for more effective pharmaceuticals.
They got better at tracking weather. They got better at
keeping up with traffic and adjusting our driving routes. Some
algorithms are learning to write code so that they can
build other algorithms. With neural nets, things are beginning to

(12:34):
fall into place for the field of AI. In them,
researchers have produced an adaptable, scalable template that could be
capable of a general form of intelligence. It can self improve,
it can learn to code. The seeds for a superintelligent
AI are being sown. There are enormous differences, by orders

(13:05):
of magnitude really, between the AI that we exist with
and the super intelligent AI that could at some point
result from it. The ones we live with today are
comparatively dumb, not just compared to a super intelligent AI,
but compared to humans as well. But the point of
thinking about existential risks posed by super intelligent AI isn't
about time scales of when it might happen, but whether

(13:27):
it will happen at all. And if we can agree
that there is some possibility that we may end up
sharing our existence with a super intelligent machine, one that
is vastly more powerful than us, then we better start
planning for its arrival now. So I think that this
transition to a machine intelligence era looks like it has
some reasonable chance of occurring within perhaps the lifetime of

(13:49):
a lot of people today. We don't really know, but
maybe it could happen in a couple of decades, maybe
it's like a century, and that it would be a
very important transition, the last invention that humans ever need
to make. That was Nick Bostrom, the Oxford philosopher who
basically founded the field of existential risk analysis. Artificial intelligence
is one of his areas of focus. Bostrom used a

(14:12):
phrase in there that AI would be the last invention
humans ever need to make. It comes from a frequently
cited quote from British mathematician Dr. Irving John Good, one
of the crackers of the Nazi Enigma code at Bletchley
Park and one of the pioneers of machine learning. Doctor
Good's quote reads, let an ultra intelligent machine be defined
as a machine that can far surpass all the intellectual

(14:35):
activities of any man, however clever. Since the design of
machines is one of these intellectual activities, an ultra intelligent
machine could design even better machines. There would then
unquestionably be an intelligence explosion, and the intelligence of man
would be left far behind. Thus, the first ultra intelligent
machine is the last invention that man need ever make.

(14:58):
In just a few lines, Doctor Good sketches out the
contours of how a machine might suddenly become super intelligent,
leading to that intelligence explosion. There are a lot of
ideas over how this might happen, but perhaps the most
promising path is stuck in the middle of that passage:
a machine that can design even better machines. Today, AI
researchers call this process recursive self improvement. It remains theoretical,

(15:23):
but it stands as a legitimate challenge to AI research,
and we're already seeing the first potential traces of it
in neural nets. Today, a recursively self improving machine would
be capable of writing better versions of itself. So Version
one would write a better version of its code, and
that would result in version two, and version two would
do the same thing, and so on, and with each

(15:45):
iteration the machine would grow more intelligent, more capable, and
most importantly, better at making itself better. The idea is
that at some point the rate of improvement would begin
to grow so quickly that the machine's intelligence would take
off, the intelligence explosion that Dr. Good predicted. How an
intelligence explosion might play out isn't the only factor here.
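The feedback loop itself is easy to caricature in a few lines of Python. The numbers below are arbitrary, invented for illustration, but they show why a machine that is also getting better at making itself better compounds in the way being described.

    # A toy caricature of recursive self-improvement: each version's skill at
    # improving itself determines how much better the next version will be.
    intelligence = 1.0
    improvement_skill = 0.05  # arbitrary starting values

    for version in range(1, 11):
        intelligence *= 1.0 + improvement_skill
        improvement_skill *= 1.0 + improvement_skill  # it also gets better at improving
        print(f"version {version}: intelligence {intelligence:.2f}")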

(16:09):
At least as important as how quickly an AI might
become intelligent is just how intelligent it will become. An AI
theorist named Eliezer Yudkowsky points out that we humans have
a tendency to underestimate the intelligence levels that AI can attain.
When we think of a super intelligent being, we tend
to think of some amazing human genius, say Einstein, and

(16:31):
then we put Einstein in a computer or a robot.
That's where our imaginations tend to aggregate when most of
us ponder superintelligent AI. True, a self improving AI
may at some point reach a level where its intelligence
is comparable to Einstein's, but why would it stop there?
Rather than thinking of a super intelligent AI along the
lines of the difference between Einstein and us regular people,

(16:55):
Nick Bostrom suggests that we should probably instead think more
along the lines of the difference between Einstein and earthworms.
The super Intelligent AI would be a god that we
made for ourselves. What would we do with our new god?

(17:15):
It's not hyperbole to say that the possibilities are virtually limitless,
but you can kind of see the outlines in what
we do with our lesser AI gods now. We will
use them to do the things we want to do
but can't, and to do the things we can do
but better. I. J. Good called the super Intelligent Machine
the last invention humans ever need to make, because after

(17:38):
we invented it, the AI would handle the inventing for
us from then on. Our technological maturity would be
secured as it developed new technologies like atomically precise manufacturing
using nanobots. And since it would be vastly more intelligent
than us, the machines it created for us would be
vastly superior to anything we could come up with. Flawless technology,

(17:59):
as far as we were concerned. We could ask it
for whatever we wanted: to establish our species outside of
Earth by designing and building the technology to take us
elsewhere in the universe. We would live in utter health
and longevity. It would be a failure of imagination, says
Eliezer Yudkowsky, to think that the AI would cure, say, cancer.
The superintelligent AI would cure disease. It would also

(18:23):
take over all the processes we've started, improve on them,
build on them, create whole new ones we hadn't thought of,
and create for us a post scarcity world, keeping our
global economy humming along, providing for the complete well being, comfort,
and happiness of every single person alive. It would probably
be easier than guessing at all of the things a

(18:43):
superintelligent AI might do for us to instead
look at everything that's wrong with the world, the poverty,
the wars, the crime, the exploitation and death, the suffering,
and imagine a version of our world utterly without any
of it. That starts to get at what those people
anticipating the emergence of a super intelligent AI expect
from it. But there's another little bit at the end

(19:07):
of that famous quote from I J. Good, one that
almost always gets left off, which says a lot about
how we humans think of the risks posed by AI.
The sentence reads in full. Thus, the first ultra intelligent
machine is the last invention that man need ever make,
provided that the machine is docile enough to tell us
how to keep it under control. We humans tend to

(19:29):
assume that any AI we create would have some desire
to help us or care for us, But existential risk
theorists widely agree that almost certainly would not be the case,
that we have no reason to assume a superintelligent
AI would care at all about us humans, our well being,
our happiness, or even our survival. This is transhumanist philosopher

(19:50):
David Pearce. If the intelligence explosion were to come to pass,
it's by no means clear that the upshot would be
sentience-friendly superintelligence. In much the same way that
we make assumptions about how aliens might walk on two legs,
or have eyes, or be in some form we can comprehend,

(20:11):
we make similar assumptions about AI, but it's likely that
a super intelligent AI would be something we couldn't really
relate to at all. It sounds bizarre, but think for
a minute about what would happen if the Netflix algorithm
became super intelligent. What about the Netflix algorithm makes us
think that it would care at all about every human
having a comfortable income and the purest fresh water to drink.

(20:35):
Say that a few decades from now, computing power becomes
even cheaper and computer processes more efficient, and Netflix engineers
figure out how to train its algorithm to self improve.
Their purpose at building upon their algorithm isn't to save
the world. It's to make an AI that can make
ultra tailored movie recommendations. So if the right combination of
factors came together and the Netflix algorithm underwent an intelligence explosion,

(21:00):
there's no reason for us to assume that it would
become a super intelligent, compassionate Buddha. It would be a
super intelligent movie recommending algorithm, and that would be an
extremely dangerous thing to share our world with. About a
decade ago, Nick Bostrom thought of a really helpful but
fairly absurd scenario that gets across the idea that even
the most innocuous types of machine intelligence could spell our

(21:23):
doom should they become superintelligent. The classical example being
the AI paper clip maximizer that transforms the Earth into
paper clips or space colonization probes that then get sent
out and transform the universe into paper clips. Imagine
that a company that makes paper clips hires a programmer
to create an AI that can run its paper clip factory.

(21:45):
The programmer wants the AI to be able to find
new ways to make paper clips more efficiently and cheaply,
so it gives the AI freedom to make its own
decisions on how to run the paper clip operation. The
programmer just gives the AI the primary objective, its goal
of making as many paper clips as possible. Say that
the paper clip maximizing AI becomes superintelligent. For the AI,

(22:07):
nothing has changed. Its goal is the same. To it,
there is nothing more important in the universe than making
as many paper clips as possible. The only difference is
that the AI has become vastly more capable, so it
finds new processes for building paper clips that were overlooked
by us humans. It creates new technology like nanobots to
build atomically precise paper clips on the molecular level, and

(22:30):
it creates additional operations like initiatives to expand its own
computing power so it can make itself even better at
making more paper clips. It realizes at some point that
if it could somehow take over the world, there would
be a whole lot more paper clips in the future
than if it just keeps running the single paper clip factory,
so it then has an instrumental reason to place itself

(22:51):
in a better position to take over the world. All
those fiber optic networks, all those devices we connect to
those networks, our global economy, even us humans, would be
repurposed and put into the service of building paper clips.
Rather quickly, the AI would turn its attention to space
as an additional source of materials for paper clips. And
since the AI would have no reason to fill us

(23:13):
in on its new initiatives, to the extent that it
considered communicating with us at all, it would probably conclude
that doing so would create an unnecessary drag on its paper
clip making efficiency. We humans would stand by as the
AI launched rockets from places like Florida and Kazakhstan, left
to wonder what's it doing now? Its nanobot workforce would

(23:40):
reconstitute matter, rearranging the atomic structures of things like water
molecules and soil into aluminum to be used as raw
material for more paper clips. But we humans, who have
been pressed into service as paper clip making slaves by this point,
need those water molecules and that earth for our survival,
and so we would be thrown into a resource conflict

(24:01):
with the most powerful entity in the universe, as far
as we're concerned, a conflict that we were doomed from
the outset to lose. Perhaps the AI would keep just
enough water and soil to produce food and water to
sustain us, its slaves. But let's not forget why we humans
are so keen on building machines to do the work
for us in the first place. We're not exactly the
most efficient workers around, so the AI would likely conclude

(24:25):
that its paper clip making operation would benefit more from
using those water molecules and soil to make aluminum than
from keeping us alive with them. And it's about
here that those nanobots the AI built would come for
our molecules too. As Eliezer Yudkowsky wrote, the
AI does not hate you, nor does it love you,

(24:47):
but you are made of atoms which it can use
for something else. But say that it turns out a
superintelligent AI does undergo some sort of spiritual conversion
as a result of its vastly increased intellect, and also
gains compassion. Again, we shouldn't assume we will come out
safely from that scenario either. What exactly would the AI

(25:07):
care about? Not necessarily just us. Consider, says transhumanist philosopher
David Pearce, an AI that deeply values all sentient life.
That is to say that it cares about every living
being capable of, at the very least, the experience of
suffering and happiness, and that the AI values all sentient life
the way that we humans place a high value on

(25:29):
human life. Again, there's no reason for us to assume
that the outcome for us would be a good one.
Under scrutiny, perhaps the way we tend to treat the other
animals we share the planet with, other sentient life, would
bring the AI to the conclusion that we humans are
an issue that must be dealt with to preserve the
greater good. Do you imagine if you were a full

(25:52):
spectrum superintelligence, would you deliberately create brain damaged, psychotic, eccentric,
malaise-ridden Darwinian humans, or would you think our
matter and energy could be optimized in a
radically different way? Or perhaps its love of sentient life would

(26:13):
preclude it from killing us and our species would instead
be imprisoned forever to prevent us from ever killing another animal,
either species death or species imprisonment. Neither of those outcomes
is the future we have in mind for humanity. So
you can begin to see why some people are anxious
at the vast number of algorithms in development right now,

(26:33):
and those already intertwined in the digital infrastructure we've built
atop our world. There is thought given to safety by
the people building these intelligent machines. It's true. Self driving
cars have to be trained and programmed to choose the
course of action that will result in the fewest number
of human deaths when an accident can't be avoided. Robot
care workers must be prevented from dropping patients when they

(26:55):
lift them into a hospital bed. Autonomous weapons, if we
can't agree to ban them outright, have to be carefully
trained to minimize the possibility that they kill innocent civilians,
so called collateral damage. These are the type of safety
issues that companies building AI consider. They are concerned with
the kind that can get your company sued out of existence,

(27:16):
not the kind that arises from some vanishingly remote threat
to humanity's existence. But say they did build their AI
to reduce the possibility of an existential threat. Controlling a
god of our own making is as difficult as you
would expect it to be. In his two thousand fourteen
book Superintelligence, Nick Bostrom lays out some possible solutions

(27:37):
for keeping a super intelligent AI under control. We could
box it physically, house it on one single computer that's
not connected to any network or the Internet. This would
prevent the AI from making masses of copies of itself
and distributing them on servers around the world, effectively escaping.
We could trick it into thinking that it's actually just
a simulation of an AI, not the real thing, so

(28:00):
its behavior might be more docile. We could limit the
number of people that come in contact with it to just a few,
and watch those people closely for signs they're being manipulated
by the AI and helping it escape. Each time we
interact with the AI, we could wipe its hard drive
clean and reinstall it anew to prevent the AI from
learning anything it could use against us. All of these

(28:22):
plans have their benefits and drawbacks, but they are hardly
foolproof. Bostrom points out one fatal flaw that they
all have in common. They were thought up by people.
If Bostrom and others in his field have thought of these
control ideas, it stands to reason that a superintelligent
AI would think of them as well and take
measures against them. And just as important, this AI would

(28:44):
be a greatly limited machine, one that could only give
us limited answers to a limited number of problems. This
is not the AI that would keep our world humming
along for the benefit and happiness of every last human.
It would be a mere shadow of that. So theorists
like Bostrom and Yudkowsky tend to think that
coming up with ways to keep a super intelligent AI

(29:04):
hostage isn't the best route to dealing with our control issue. Instead,
we should be thinking up ways to make the AI
friendly to us humans, to make it want to care
about our well being. And since as we've seen we
humans will have no way to control the AI once
it's superintelligent, we will have to build friendliness into
it from the outset. In fact, aside from a scenario

(29:26):
where we managed to program into the AI the express
goal of providing for the well being and welfare of humankind,
a terrible outcome for humans is basically the inevitable result
of any other type of emergence of a super intelligent AI.
But here's the problem. How do you convince Einstein to
care so deeply about earthworms that he dedicates his immortal

(29:48):
existence to providing and caring for each and every last
one of them. As ridiculous as it sounds, this is
possibly the most important question we humans face as a
species right now. We humans have expectations for parents when

(30:10):
it comes to raising children. We expect them to be
raised to treat other people with kindness. We expect them
to be taught to go out of their way to
keep from harming others. We expect them to know how
to give as well as take. All these things and
more make up our morals. Rules that we have collectively
agreed are good because they help society to thrive, and,

(30:31):
seemingly miraculously, if you think about it, parent after parent
manages to coax some form or fashion of morality from
their children, generation after generation. If you look closely, you
see that each parent doesn't make up morality from scratch.
They pass along what they were taught, and children are
generally capable of accepting these rules to live by and

(30:52):
well live by them. It would seem if you'll forgive
the analogy that the software for morality comes already on
board a child as part of their operating system. The
parents just have to run the right programs. So it
would seem then that perhaps the solution to the problem
of instilling friendliness in an AI is to build a
super intelligent AI from a human mind. This was laid

(31:15):
out by Nick Bostrom in his book Superintelligence. The idea
is that if the hard problem of consciousness is not correct,
and it turns out that our conscious experience is merely
the result of the countless interactions of the interconnections between
our hundred billion neurons, then if we can transfer those
interconnected neurons into a digital format, everything that's encoded in them,

(31:36):
from the smell of lavender to how to ride a
bike would be transferred as well. More to the point,
the morality encoded in that human mind should emerge in
the digital version too. A digital mind can be expanded,
processing power can be added to it. It could be
edited to remove unwanted content like greed or competitiveness. It
could be upgraded to a super intelligence. There are a

(31:59):
lot of magic wands waving around here, but interestingly, uploading
a mind, called whole brain emulation, is theoretically possible with
improvements to our already existing technology. We would slice a brain,
scan it with such high resolution that we could account
for every neuron, synapse, and nanoliter of neurochemicals, and
build that information into a digital model. The answer to

(32:22):
the question of whether it worked would come when we
turn the model on. It might do absolutely nothing and
just be an amazingly accurate model of a human brain.
Or it might wake up but go insane from the
sudden novel experience of living in a digital world. Or
perhaps it could work. The great advantage to using whole
brain emulation to solve the friendliness problem is that the

(32:43):
AI would understand what we meant when we asked it
to dedicate itself to looking after and providing for the
well being and happiness of all humans. We humans have
trouble saying exactly what we mean at times, and Bostrom
points out that a superintelligence that takes us literally could
prove disastrous if we aren't careful with our words. Suppose

(33:04):
we give an AI the goal of making all humans
as happy as possible. Why should we think that the
superintelligent AI would understand that we mean it should
purify our air and water, create a bucolic wonderland of
both peaceful tranquility and stimulating entertainment, do away with wars
and disease, and engineer social interactions so that we humans
can comfort and enlighten one another. Why wouldn't the AI

(33:27):
reach that goal more directly by, say, rounding up us
humans and keeping us permanently immobile, doped up on a
finely tuned cocktail of dopamine, serotonin, and oxytocin. Maximal happiness
achieved with perfect efficiency. Say we do manage to get
our point across. What's our point anyway? Whose morality are
we asking the AI to adopt? Most of our human

(33:50):
values are hardly universal. Should our global society embrace multiculturalism,
or are homogeneous societies more harmonious? If a woman
didn't want to have a child, would she be allowed
to terminate her pregnancy, or should she be forced to have it?
Would we eat meat? If not, would it be because
it comes from sacred animals, as Hindu people revere cows,

(34:10):
or because it's taboo, as Muslim and Jewish people consider swine?
Out of this seemingly intractable problem of competing and
contradictory human values, AI theorist Eliezer Yudkowsky had
a flash of brilliance. Perhaps we don't have to figure
out how to get our point across to an AI,
after all. Maybe we can leave that task to a machine.

(34:32):
In Yudkowsky's solution, we would build a one use super
intelligence with the goal of determining how to best express
to another machine the goal of ensuring the well being
and happiness of all humans. Yudkowsky suggests we use something
he calls coherent extrapolated volition. Essentially, we give
the machine the goal of figuring out what we would

(34:52):
ask a superintelligent machine to do for us if
the best version of humanity were asking with the best
of intentions, taking into account as many common and
shared values as possible, with humanity in as much agreement
as possible, and considering we had all the information needed to
make a fully informed decision on what to request. Once

(35:13):
the superintelligent machine determined the answer, perhaps we would
give it one more goal: to build us a superintelligent
machine with our coherent extrapolated volition aboard. The last invention
we'd ever need to make. Like whole brain emulation,
Yudkowsky's coherent extrapolated volition takes for granted some real technological hurdles. Chiefly,

(35:33):
we have to figure out how to build that first
super intelligent machine from scratch, but perhaps it's a blueprint
for future developers. The problems of controlling AI and instilling
friendliness raise one basic question. If our machines run amok,

(35:57):
why wouldn't we just turn it off? In the movies,
there's always a way, sometimes a relatively simple one, for
dealing with troublesome AI. You can scrub its hard drive,
control-alt-delete it, sneak up behind it with a screwdriver,
and remove its motherboard. But should we ever face the
reality of a super intelligent AI emerging among us, we

(36:18):
would almost certainly not come out on top. An AI
has plenty of reasons to take steps to keep us
from turning it off. It may prefer not to be
turned off in the same way we humans most of
the time prefer not to die. Or it may have
no real desire to survive itself. But perhaps it would
see being turned off as an impediment to its goal,

(36:38):
whatever its goal may be, and prevent us from turning it off.
Perhaps it would realize that if we suspected the AI
had gained super intelligence, we would want to turn it off,
and so it would play dumb and keep this new
increased intelligence out of our awareness until it has taken
steps to keep us from turning it off. Or perhaps
we could turn it off, but we would find we

(36:59):
didn't have the will to do that. Maybe it would
make itself so globally pervasive in our lives that we
would feel like we couldn't afford to turn it off.
Sebastian Farquhar from Oxford University points out that we already
have a pretty bad track record at turning things off
even when we know they're not good for us. One
example of that might be global warming. So we all

(37:21):
kind of know that carbon dioxide emissions are creating a
big problem, but we also know that burning fossil fuels
and the cheap energy that we get out of it, it's
also really useful, right. It gives us cheap consumer goods,
it creates employment, it's very attractive, and so often, once

(37:41):
we know that something is going to be harmful for us,
but we also know that it's really nice, it becomes
politically very challenging to actually make an active decision
to turn things off. Maybe it would be adept enough
at manipulating us that it used a propaganda campaign to
convince a majority of us humans that we don't want
to turn it off. It might start lobbying, perhaps through

(38:02):
proxies or fronts, or it might, you know, studying,
looking at the political features of our time, it might
create Twitter bots that argue that this AI is really
useful and needs to be protected, or that it's important
to some political or identity group. And perhaps we are
already locked into the most powerful force in keeping AI

(38:24):
pushing ever forward. Money. Those companies around the globe that
build and use AI for their businesses make money from
those machines. This creates an incentive for those businesses to
take some of the money the machines make for them
and reinvest it into building more improved machines to make
even more money with. This creates a feedback loop that

(38:45):
anyone with a concern for existential safety has a tough
time interrupting. This incentive to make more money, as well
as the competition posed by other businesses, gives companies good
reason to get new and improved AI to market as
soon as possible. This in turn creates an incentive to
cut corners on things that might be nice to have

(39:05):
but aren't at all necessary in their business, like learning
how to build friendliness into the AI they deploy. As
companies make more and more money from AI, the technology
becomes more entrenched in our world, and both of those
things will make it harder to turn off if, by chance,
that Netflix algorithm does suddenly explode in intelligence. It sounds

(39:28):
like so much gibberish, doesn't it, Netflix's algorithm becoming super
intelligent and wrecking the world. I may as well say
a witch could come by and cast a spell on
it that wakes it up. But when it comes to technology,
things that seem impossible given the luxury of time start
to seem much less so. Put yourself a century back, with the

(39:48):
technology people lived with back then. The earliest radios and airplanes,
the first washing machines, neon lights were new, and consider
that they had trouble imagining it being much more advanced
than it was then. Now compare those things to our
world in two thousand eighteen, and let's go the other way.
Think about our world and the technology we live with today,

(40:10):
and imagine what we might live among a century from now. The impossible
starts to seem possible. What would you do tomorrow if
you woke up and you found that Siri on your
phone was making its own decisions and ones you didn't like,
rearranging your schedule into bizarre patterns, investing your savings in

(40:32):
its parent company, looping in everyone on your contacts list
to sensitive email threads. What would you do? What if
fifty or a hundred years from now you woke up
and found that the Siri that we've built for our
whole world has begun to make decisions on its own.
What do we do then if we go to turn
it off and we find that it's removed our ability

(40:53):
to do that? Have we shown it that we are
an obstacle to be removed? On the next episode of
the End of the World with Josh Clark, The field
of biotechnology has grown sophisticated in its ability to create

(41:16):
pathogens that are much deadlier than anything found in nature.
That researcher thought that was a useful line of inquiry,
and there were other researchers who vehemently disagreed and thought
it was an extraordinarily reckless thing to do. The biotech
field also has a history of recklessness and accidents and

(41:37):
as the world grows more connected, just one of those
accidents could bring an abrupt end to humans.
