
November 28, 2018 46 mins

We humans are our own worst enemies when it comes to what it will take to deal with existential risks. We are loaded with cognitive biases, can’t coordinate on a global scale, and see future generations as freeloaders. Seriously, are we going to survive? (Original score by Point Lobo.)

Interviewees: Nick Bostrom, Oxford University philosopher and founder of the Future of Humanity Institute; Toby Ord, Oxford University philosopher; Anders Sandberg, Oxford University philosopher; Sebastian Farquhar, Oxford University philosopher; Eric Johnson, University of Oklahoma professor of law




Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Ill-conceived physics experiments, reckless experiments with viruses, unfriendly superintelligent AI.
Dealing with each one of these will be a quagmire
unto itself. But remember, existential risks are like nothing we've
ever encountered before. We humans haven't been prepared by millennia

(00:25):
of evolution like we have for other disasters. We're not
equipped out of the box to deal with the existential
risks that loom ahead in our near future. In fact,
it's almost as if we're wired not to be able
to deal with them properly. And the more we look
into it, it turns out the question of whether we'll
be able to navigate our existential risks to a safe

(00:48):
future is actually the same question as whether we'll be
able to overcome ourselves. But for the moment, let's leave
all that and go inside your body instead. You have
tiny invisible robots moving through your bloodstream. I should

(01:13):
say we're in the future. Let's say, for the
sake of keeping numbers nice and round, that pretty much
everyone has tiny invisible robots moving through their bloodstream.
It's a good thing, actually, because these tiny robots act
as a human-designed backup force for your immune system.
You took them in a pill a while back, and the

(01:35):
moment you pop that capsule into your mouth, your healthy
life expectancy increased by a hundred years. As the enzymes
in your gut began to dissolve the capsule, your digestive
fluids poured into it, and the sudden change in temperature
and pH activated the first generation of nanobots inside. They

(01:56):
came online, connected to their shared WiFi network, activated their
propulsion systems, and passed through your gut wall into your
bloodstream, fanning out through your body. Over the years,
each of those first-gen nanobots assembled copies of itself,
and those copies made copies, and now, three years on,

(02:17):
you have a stable colony of tiny, invisible robots living
inside of you. They search for pathogens to destroy. They
prune cells that show signs of growing into tumors and
repair the DNA inside to make sure they won't turn
cancerous again. They clear plaque from the interiors of your
blood vessels. They assist insulin in removing sugars, fats, and

(02:40):
proteins from your bloodstream after you eat, for storage
later on. They assist in clearing neurotransmitters from your synapses
after you've had a thought. They target fats to burn
in areas of your body that you select through their
app. Everything your body did before, or should have done,
to keep itself in harmony, it does remarkably better now

(03:02):
since you took that capsule. The nanoscale is
the scale of atoms. It's the smallest scale that we're
able to manipulate, and we've only recently become able to
do that. I should say we're back in the present
time now. Depending on the species, a female mosquito will

(03:27):
drink about five microliters of your blood, five millionths
of a liter, before flying off. Inside those five microliters,
sloshing around in that mosquito's tiny stomach, are around
twenty-five million red blood cells. Just one of those
red blood cells is made up of around one hundred and

(03:47):
twenty trillion atoms, and just one single hydrogen atom is
a tenth of a nanometer in size. That's the scale
of the world where nanobots will dwell. On this tiny level,
nanobots are expected to eventually be able to do amazing things,
magical things in the Arthur C. Clarke sense of the term.

(04:11):
There are so many promises with nanotechnology that perhaps no
other emerging field has such a wide scope of applications
ready and waiting to be applied. Because they are the
size of atoms, nanobots will be able to rearrange atoms,
and so the materials they make will be manufactured to
atomic precision. To us up here on the human scale,

(04:34):
the things nanobots make will be flawless, since virtually any
material could be turned into any other material. Anything will
qualify as raw material for anything else, which means that
our current global waste problem will vanish, a happy byproduct
of the global increase in material wealth that nanobots will provide.

(04:55):
This will also mean the end of scarcity, since anyone
with a nano factory at home, which will eventually be
everyone as the technology spreads, will be able to make
whatever they like. But despite all of the golden promises
nanotechnology potentially holds in store, as we saw in the
chapter on artificial intelligence, it poses an existential threat to

(05:18):
us as well. Like every other technology that poses an
existential threat, it is dual use. It can be used
to create both positive and negative outcomes for us, and
really you can make the same case for basically any
technology we humans have ever come up with. Just to
take one example, you can use paper towels as a

(05:39):
handy way to clean up a spill or to start
a house fire. But as with all other existential risks,
the potential negative outcomes associated with nanotechnology have a vastly
wider scope than a house fire started with a roll
of paper towels. There is, of course, that unpleasant outcome
where, in the way of our arch enemy, the paper

(06:01):
clip maximizer, they disassemble us for use in some other form.
But even now, in the era before nanobots, we've already
identified hazards from the current nanotechnology we have today. Because
of their minute size, nanoparticles can irritate the lung tissue
of humans who breathe them in much the same way
that asbestos and silica can, possibly leading to cancer if

(06:25):
the scar tissue that results isn't repaired in the body properly.
Today's nanoparticles are also concerning because they are inorganic and
there's no mechanism for them to degrade, which means they
may persist in the environment forever as far as we
can tell right now. Ironically, both of these current problems
with nanoparticles, their potential to cause cancer and the possibility that they'll

(06:49):
persist forever, can be solved by the nanobots of the future.
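The episode earlier described first-generation nanobots assembling copies of themselves, and those copies making copies. That doubling dynamic is worth sketching, because it is also what makes the runaway scenarios below conceivable. This is a minimal illustration; the starting count and number of cycles are invented for the example, not anything from the episode.

```python
# Minimal sketch of exponential self-replication: if each replicator
# builds one copy of itself per cycle, the population doubles every
# cycle. Starting count and cycle count are illustration values only.

def population(initial: int, cycles: int) -> int:
    """Population after the given number of doubling cycles."""
    return initial * 2 ** cycles

print(population(1, 10))   # 1024 after ten cycles
print(population(1, 40))   # over a trillion after forty
```

The takeaway is how fast the curve turns vertical: a single replicator that doubles reliably passes a trillion copies in just forty cycles, which is why uncontrolled replication worries nanotech designers.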
There are other speculative ways that nanotechnology could turn out poorly.
The most famous of them all is called the
gray goo hypothesis, which was first put into words back
in 1986 by MIT-trained engineer and Future
of Humanity Institute member Eric Drexler in his book Engines

(07:13):
of Creation. Gray goo, as Drexler pointed out, is a
possible outcome from a poorly considered nanotech design, where nanobots
capable of replicating themselves and able to sustain themselves using
energy harvested from the environment, say from plant material, would
be able to exist outside of our control. At a

(07:34):
certain point, they might enter a runaway exponential population explosion
where their numbers grow so massive that they collectively become
visible to us as what would seem like a fluid,
gooey substance actually made up of untold numbers of nanobots,
all feeding on our environment, eventually overwhelming Earth and ruining

(07:54):
the global biosphere, not to mention resulting in the eventual
extinction of humanity. Drexler has since publicly disavowed his
gray goo hypothesis, pointing out that it could only arise
from an obvious and foreseeable flaw in design, not some
sort of trait that's inherent in nanobots. Drexler believes his

(08:14):
hypothesis planted a seed in the media, which grew into
a sensational thicket of vines covering the real work of
nanotechnologists and choking the life from the field of research
that he helped establish. A future event like gray goo

(08:40):
eating the world from around us would qualify as what
existential risk philosopher Nick Bostrom calls subsequent ruination. Things start
out for us just fine with the technology that we've
built and mastered, but it somehow takes an unexpected left turn,
and things ultimately end up rotten for us because of it,
resulting in our eventual extinction. Bostrom wasn't the first philosopher

(09:03):
to think about existential risks, but he innovated how we
see them. Here's Bostrom's colleague, philosopher Toby Ord. Thinking about
existential risks really started in the twentieth century with the
advent of nuclear weapons and the threat of major nuclear war.
In the nineteen sixties, Bertrand Russell wrote about the threat

(09:25):
of human extinction due to nuclear weapons, and in the
nineteen eighties there were a few people, almost simultaneously, who
really got the bigger picture about extinction. Those people were
Jonathan Schell, Carl Sagan, and Derek Parfit. Then in the
nineteen nineties, John Leslie wrote a fantastic book on extinction

(09:48):
called The End of the World. And then in
the two thousands, Nick Bostrom, my colleague, he really made
a large number of major breakthroughs in this area. He
was the one who expanded it out from extinction
to existential risk, including a large number of other possibilities.
What they all have in common is that
they would be the permanent loss of humanity's potential. Nick

(10:11):
Bostrom realized that there are other possible outcomes of existential
catastrophes beyond just the extinction of our species. There are,
he realized, some fates for humanity that are even worse
than death. There is, of course, subsequent ruination like gray goo,
where our technology takes a bad turn and extinction follows later.

(10:32):
But Bostrom also realized that we humans don't actually need
to go extinct to undergo an existential catastrophe. There are
some scenarios where we could be broken as a species,
left to limp along indefinitely, without the possibility of
ever regaining the place in history from which we fell.
Perhaps a virus killed most of humanity, and the genetic

(10:54):
bottleneck that resulted led to humans who were no longer
capable of solving extremely complex problems, or who lost the
ability to coordinate with one another in large groups. Our
species would be alive, sure, but our existence would be
a shadow of what it was before, and the potential
sunny future that may have been in store for humanity

(11:15):
would be lost forever. Eventually, this loss of potential would
be made permanent when a natural existential risk like an
asteroid or supervolcano came along tens or hundreds of thousands
of years down the road and drove us to extinction
once and for all. Bostrom calls this kind of scenario
permanent stagnation, where the existential catastrophe comes long before extinction.

(11:40):
It's a kind of catastrophe from which we could not
recover, a possible end to the human story: some way to
permanently lock ourselves into some radically suboptimal state. There's also
flawed realization; an algorithm we create growing superintelligent beyond

(12:01):
our expectations and running amok is an example of that.
It's akin to subsequent ruination, but without giving us even
the brief period where we get to enjoy the full
benefits of the technology before things go badly for us
because of it. As much as Eric Drexler wishes that

(12:23):
he had never written the words gray goo, he may
have very well saved the world when he did, for
better or worse. For the people currently working in the
field of nanotechnology, he identified a potential catastrophic outcome that
we can plan to design against. One of the great
benefits of thinking about the existential risk nanotechnology poses is

(12:46):
that it's still in its infancy as a field and
can be guided in safe ways, so that when we
do live in a world of nanobots, we can be
assured that they won't pose a threat then or down
the road. But how do we get there? How do we
plan for a technology that doesn't actually exist yet in
a field of research that only a minute fraction of

(13:08):
humans actually understand and feel qualified to talk about. How
do we manage the media's understanding of the issues surrounding
the field so that it doesn't cause unjustified panic among
the public, which could turn against it and choke the
life from it once and for all. And just as important,
how do we ensure that the corporate and academic labs

(13:30):
working on nanotechnology don't pursue dangerous lines of research and design.
If you've been asking yourself questions like these about the
other existential risks I've talked about so far in this series,
then you may have already hit upon the idea that
we might need a singleton to guide us through the
coming years to technological maturity. A singleton, in the sense

(13:52):
that Nick Bostrom has applied it to existential risks, is
a body capable of making the final decision for everyone
on the planet. I'll let him explain. A
singleton is, uh, just a world order where,
at the highest level of decision making, there is only
one decision making process. So, in other words, a world

(14:14):
where global coordination problems, or at least the most
terrible coordination problems, have been solved. So no more wars
or arms races or technology races, or pollution and destruction
of the global commons. One of the biggest challenges we
will face in the coming decades and centuries is coordinating

(14:36):
on a global level, in other words, getting everyone to
agree on the best way to move forward in addressing
existential risks. We will need to study the issues, funnel
some of the world's brightest minds towards identifying future existential risks,
throw lots and lots of money at the problems, and
figure out the best, safest way forward towards technological maturity.

(15:01):
But all the study, bright ideas, and intricately mapped ways forward
don't amount to anything if one person can undermine everything
with a single accident. So we will need every single
country on Earth to buy into this process. Right now,
the geopolitical arrangement on Earth is based on the sovereignty

(15:22):
of nations. Each country has its own borders and citizenry,
and it's up to the country's government to make its
own decisions. There are lots of exceptions to this. Some
governments make agreements that stitch their nations together to some degree,
as seen in the European Union or the North American
Free Trade Agreement, and sometimes one nation will invade another nation,

(15:45):
resorting to force to influence the other government's decisions. For
the most part, though the nations of the world leave
it to the other nations of the world to make
their own choices about how they function. This won't really
work in tackling existential risks. We will need to
all agree to abide by whatever we decide is the

(16:05):
best way to proceed. But getting to this level of
consensus can be messy, and you could see just how
it could get that way with existential risks by taking
a look at how humans have dealt with climate change.
The Intergovernmental Panel on Climate Change, the IPCC,

(16:26):
is an offshoot of the United Nations that
was set up back in 1988 to study climate change and provide
the world's governments with the best science about the issue
and how to tackle it. An issue like climate change
requires international cooperation because climate change affects everyone. It crosses
the borders of the world, and so not only does

(16:48):
it affect everyone, it also requires action from everyone. To
combat climate change, we need the cooperation of all nations
for the collective public good, and it's exactly the same
with existential risks. The IPCC was chartered
during a time when global geopolitics respected the sovereignty of nations,

(17:10):
and that has proven a problem for it. To take
one example, back in two thousand seven, when the
IPCC issued its Fourth Assessment Report on climate change,
the body's report on the current cutting-edge scientific understanding
of the issue, word spread to the media that the report
had been watered down by diplomacy. Saudi Arabia, one of

(17:31):
the world's leading producers of fossil fuels, and China and
the United States, two of the world's leading producers of
emissions from burning those fossil fuels, used their influence to
temper the report's findings on how fossil fuel use contributes
to climate change, to make fossil fuel's role seem less
scientifically certain. As a result, the public was presented with

(17:54):
findings that seemed much more doubtful about the role of
fossil fuel emissions in climate change, a doubt that's still
alive today. This could not be allowed to happen with
existential risks. Climate change is one of the most important
issues facing humanity today. Existential risks are the most important.

(18:20):
So how do we create a body that's immune to
diplomatic and economic pressures of countries as strong as the
U S, Saudi Arabia, and China? The answer is a singleton.
Our hypothetical singleton could arise from an international body organized
to study and deal with existential risks. Let's call it
our Existential Risks Commission. Just out of necessity, as the

(18:43):
world wakes up to the real scope and severity of
these risks, we may give that commission an enormous amount
of power to override any nation's opposition to its findings
and guidelines, which would mean an enormous change for global geopolitics,
but one we would likely feel was necessary. Our
Existential Risks Commission would need to have teeth. One way

(19:05):
it might ensure compliance among all nations is through a
global surveillance network. We would need to keep tabs on
all the scientists who work in fields that pose an
existential threat, to make sure that they weren't secretly working
on designs or experiments the Commission deemed too risky to pursue.
The same goes for corporations that make products that use

(19:25):
risky technology. Our Commission would need to keep tabs on
everyone, really, to monitor for signs of a black market
developing in banned technology. So each government would be required
to set up a surveillance network within its own borders,
and the Existential Risks Commission would have access to and ultimate
control over all of them. It should probably also monitor

(19:46):
each nation's government as well. And we would probably also
need to grant our Commission some sort of military
or policing power as a last resort, with a force
that is capable of overwhelming any nation in the world.
Or perhaps to make it easier, we would just allow
the Commission to disband the world's militaries and maintain its

(20:07):
own small force it could use to invade and easily
occupy any non-complying nation. With a single decision-making
body in charge of determining the best way forward towards
technological maturity, one equipped with unchecked authority, able to monitor
every person alive on the planet, and to use the
threat of violence to ensure that we all stay in

(20:28):
line on our march toward a safe future, we may
just make it through the next century or two and
arrive at a point where the future of humanity is assured.
But as you may have noticed as I was describing it,
a singleton can also pose an existential threat itself. That
same global body we create to manage our existential risks

(20:50):
could easily become totalitarian, forming a permanent global dictatorship that
no future generation could possibly overthrow. And it's about
here that you might start to feel like, no matter
what we do, humanity is doomed. In the early nineteen seventies,

(21:19):
the world started thinking about the environment. Everything we think
of as normal today, recycling not throwing your trash out
of your car window, using less energy, generally considering ourselves
as stewards of the global biosphere. All of that finds
its origin in the early seventies, and it's largely because
of two books that came out around then. In nineteen sixty-eight, Stanford University

(21:45):
entomology professor Paul Ehrlich and his wife Anne published a
book they co-wrote called The Population Bomb. It was
the culmination of years of Ehrlich's thoughts about the sustainability
of the massive increase in the population of humans and our
effects on the Earth's finite resources. He decided that the
outlook was not good. Ehrlich prophesied that by the middle

(22:08):
of the seventies the world would begin to see massive
die-offs of humans from starvation as we surpassed agriculture's
carrying capacity. People didn't pay attention to the Ehrlichs' book
until Dr. Ehrlich appeared on The Tonight Show with Johnny Carson
in nineteen seventy and spoke about the coming horror for
an hour. Then they really began to pay attention. Around

(22:30):
the time Paul Ehrlich was on The Tonight Show, a handful
of scientists from around the world had been assembled into
a group by a wealthy Italian industrialist. They were called
the Club of Rome. The scientists had devised computer models
to build forecasts of humanity's future based on trends like
resource use, pollution, and population growth. They saw pretty much

(22:52):
the same doom in their crystal ball that Ehrlich did:
mass starvation, collapsing society, widespread pollution, and the attendant negative
impacts on health that those carry. The only silver lining to
the Club of Rome's report, which they called The Limits
to Growth, was that we perhaps had some decades before we
saw the worst of it. Both books and the media's

(23:14):
coverage of them got the world's attention, but this was
not a new idea. The Club of Rome and Paul
Ehrlich followed in the tradition of Thomas Malthus, the eighteenth
century clergyman and demographer who was the first to write
about the limits of agriculture. Malthus pointed out that while
humans can multiply exponentially, the resources we get from the Earth,

(23:37):
what we call natural capital, do not, which means that
because of our propensity to place an emphasis on growing
our species, we humans are essentially doomed to outstrip Earth's
resources at some point, including, as Malthus pointed out, our
food supply. In the mid sixties, before Ehrlich's book was published,

(23:58):
there was widespread famine in India, and during the seventies
and eighties there were additional widespread famines in the Horn
of Africa. But if anything, the Population Bomb is the
story of a global catastrophe that was averted. As bad
as the famines around the world have been, things could
have been much, much worse. Unbeknownst to most of the world,

(24:23):
Thirty years before The Population Bomb and The Limits to
Growth were published, a few groups, like the Rockefeller Foundation
began working to figure out how to expand the carrying
capacity of agriculture, and they were successful, thanks in large
part to a man raised on a farm in Iowa
named Norman Borlaug. Borlaug had been hired by the Rockefeller

(24:44):
Foundation to oversee their research station in Mexico, which was
established with the Mexican government to find ways to improve wheat.
It's difficult to think of any kind of work that
sounds more boring than improving wheat, but Borlaug managed to
do just that. He improved wheat, and today he is
widely and frequently credited with saving the lives of a

(25:07):
billion people who would have otherwise starved to death without it.
Borlaug's high-yield wheat and an improved type of rice
that was developed at the same time at another research
station in the Philippines could triple the amount of grain
a single plant could produce, which meant that farmers could
suddenly get three times more grain from the same amount

(25:27):
of land. So both the food supply and the income
of much of the world's global poor increased dramatically in
a very short time, which means that the world was saved.
Between nineteen seventy and nineteen seventy five, the amount of
rice produced in Asia grew enormously; it doubled in

(25:50):
just a few years. The needle on Earth's carrying capacity
for agriculture moved from quivering worryingly at the red line
along the end of the dial to somewhere comfortably back
around the halfway mark. Norman Borlaug rightly won the
nineteen seventy Nobel Peace Prize for his work. Ever the
white knight, he used the attention to stress that he

(26:11):
only bought the world some breathing room while it figured
out how to deal with its population monster, as he
called it. The story of how Norman Borlaug defused Paul
Ehrlich's population bomb pretty well gets across the idea of
what's called techno-optimism. Techno-optimism is the full faith

(26:35):
some people place in technology to get us out of
any jam. Really, it's faith in human ingenuity. One proposal
for dealing with global warming brought on by climate change
is to add aerosols that reflect solar radiation into
the atmosphere. We would in effect be bolstering the atmosphere's
ability to already do this, lending a very important natural

(26:58):
process a technological hand. This would actually be a comparatively
easy task. We could do it with current technology, and
so a techno optimist would say, we probably don't need
to worry much about global temperatures from climate change, since
we already have a way of inventing a solution. But
what if a hundred years from now we find that

(27:19):
those aerosols we've added are working too well? Global temperatures
are actually starting to drop, and our crops are in
danger of failing worldwide. No problem. A hundred years from now,
we will almost certainly have mastered nanotechnology. We can just
deploy nanobots into the atmosphere to deal with the issue.
We could probably program them so they not only disintegrate

(27:40):
the aerosols, they could also rearrange them into a different
type of aerosol, like black soot or sulfates that absorb sunlight,
which would heat the globe back up more quickly, so
we could avoid those widespread crop failures back on Earth.
In fact, now that we think of it, we might
as well just leave the nanobots up there to keep
tabs on global temperatures and adjust the atmosphere's

(28:01):
ability to absorb or reflect solar radiation at any given moment,
kind of like how they'll eventually keep our bodies humming
along in an optimal state. But what if our nanobots
turn out to not have been designed perfectly and they
end up entering a runaway replication scenario like the gray
goo hypothesis or something worse. At this point, most techno

(28:24):
optimists would pinch the bridge of their nose and try
to muster more patience. Gray goo, they might answer, is
almost certainly not going to happen, and even if it did,
it would be so far off in the future that
you can rest assured we would find a way to
use some other type of technology we haven't even thought
of yet to handle it. The more you drill into

(28:44):
any given problem, the more it seems like technology can
get us out of it. It has so far, and
it probably can continue to in the future. But the
fatal flaw of techno-optimism is that it tends to
discourage planning and foresight, which I'm hoping that by now in
the series you've come to realize is of vital importance.
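The serial bet that techno-optimism makes can be put in rough numbers: if avoiding a crisis depends on a chain of future fixes (aerosols, then nanobots, then something not yet imagined), every link has to succeed, and the odds multiply. A minimal sketch, assuming the fixes are independent and using per-fix success probabilities invented purely for illustration:

```python
# If avoiding a crisis depends on a chain of future technological fixes,
# every link must arrive and work. Assuming independent links, the chance
# the whole chain holds is the product of the per-link probabilities.
# The probabilities below are invented for illustration only.

from math import prod

def chain_success(probabilities: list[float]) -> float:
    """Probability that every fix in the chain arrives and works."""
    return prod(probabilities)

# e.g. aerosols, then nanobots, then a fix not yet imagined
fixes = [0.9, 0.8, 0.7]
print(f"chance the chain holds: {chain_success(fixes):.2f}")  # 0.50
```

Even when each individual fix looks likely, three of them in a row leave roughly a coin flip's chance of avoiding the crisis, which is why relying on the chain instead of acting today is a fragile strategy.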

(29:06):
Rather than taking steps to head off the problem today,
like reducing carbon emissions, we can instead keep a fairly
sunny outlook that we will eventually handle it with aerosols,
then nanobots, then something else we haven't even thought of yet.
The problem, though, is that if any link in that
chain breaks down, if the innovation doesn't work or it
comes too late, then we've missed our chance to avoid

(29:29):
the crisis. And there's another issue with techno optimism. Sometimes
our solutions to the problem actually make things worse. You
would be hard-pressed to find a person alive who
faulted Norman Borlaug for his work. But the Green Revolution,
the expansion of agriculture's carrying capacity that he midwifed,
requires farmers to use enormous amounts of fertilizer and irrigation,

(29:53):
which tends to cause runoff into waterways that absorb the
nutrients themselves, harming the aquatic ecosystems. This intensive farming also
depletes the nutrients in the soil, which means that today,
decades after Borlaug's wheat made its debut, farmers put in
more fertilizer than ever before, while the amount of food
they harvest has plateaued. To put the icing on the cake,

(30:16):
the fertilizer requires large inputs of energy to produce, which
in two thousand and eleven amounted to emissions of about
six billion metric tons of greenhouse gases from the world's
farms, a significant share of the global total. Rather than taking
the breathing room Borlaug gave us to deal with the
underlying issues we have, the collective techno-optimism it brought

(30:38):
on encouraged us to just kick the can down the road.
Well, we've come upon the can again. Hopefully we'll figure
out a way to kick it. Those unwanted knock-on
effects of the Green Revolution, of techno-optimism, even of the
hypothetical singleton I talked about earlier, they form the basis
of an argument against taking steps to mitigate existential risks.

(31:01):
Doing something could possibly make things even worse. You could
call this the Gilligan effect: helpfulness results in calamity. It is,
perhaps unsurprisingly, not the only argument people make against taking
the existential risks we face seriously. It is to our great

(31:37):
misfortune that we are being presented with the responsibility of
dealing with existential risks at this point in human history.
It was only perhaps fifty to a hundred thousand years
ago that humans started being born with the full package
of behaviors and intelligence that make us uniquely human. Our
ability to reason and think abstractly, to imagine different futures,

(32:00):
our ability to organize. Imagine if we had had another
hundred thousand years to continue to evolve before the existential
risks we'll have to address appeared on our horizon. But
that's not how the chips have fallen. Instead, it has
come upon us while we are in what Carl Sagan
called our technological adolescence, the most dangerous phase on the

(32:21):
way to technological maturity. It is up to those of
us alive in the twenty-first century. We bear responsibility
for saving the future of the human race, and we
have come up with plenty of reasons why we shouldn't, or,
more to the point, why we won't. Probably first among
them is that the chance of one of these risks

(32:42):
befalling us is so small, so utterly remote, that they're not
even worth considering. It is true that the chance of
an existential catastrophe like an altered pathogen escaping a lab
and creating a pandemic, is extremely remote, But as more
labs conduct more risky experiments around the world, the probability

(33:02):
of that remote risk begins to compound. And the same
is true in other fields. As new particle colliders run
higher-energy experiments, as more companies deploy more self-improving
algorithms on the global networks, what was once a mere
remote possibility of existential catastrophe becomes decidedly less remote. I

(33:23):
think there are various forms of arguments against dealing with existential risks,
some of them much better than others. So I think
the head-in-the-sand objection, oh, it won't happen
or it hasn't ever happened before, that is
a really bad one, but it's of course psychologically rather
common because it fits with the cognitive biases. That was

(33:44):
Anders Sandberg, a philosopher from the Future of Humanity Institute. In
addition to all of the advanced behaviors that we humans
have evolved that have served us so well, we also
operate using some extremely ancient techniques, too: shortcuts that
allow us to deal with everyday life, but can break
down when we're faced with things that are out of

(34:05):
the ordinary. What results are called biases. Take, for example,
being presented with a fork in the road. Given that
both paths look equally inviting, we might have trouble choosing.
But say we've been presented with the same decision elsewhere,
with other forks and other roads before, and we've usually
taken the left path. Since nothing bad happened to us

(34:28):
all those other times we've chosen to go left, we
would feel pretty sure nothing will this time either, So
we head down the left path this time too, whistling
without a care in the world, totally unaware of the
family of hungry bears ahead. Our cognitive biases can make
us overconfident, suspicious of new things, optimistic, pessimistic, frozen with indecision.

(34:52):
We are, you could say, a little hamstrung by them.
But even when we manage to overcome our biases or set
them aside, which we will need to when we're dealing
with existential risks, there are still plenty of other reasons
we can come up with to avoid addressing them. For one,
even discussing this type of risk can be dangerous. Such

(35:15):
talk can have a chilling effect on a field that's
struggling to establish itself, as Eric Drexler found when he
let the gray goo genie out of the bottle in
Engines of Creation. Talking about things like AI becoming super
intelligent and taking control of our world can really go
a long way toward turning the public off from the
idea of scientists working on building self-improving thinking machines. Besides,

(35:40):
as most machine intelligence researchers will point out, at this
stage in its development, the field is capable of producing
a machine that's perhaps as smart as a three-year-old,
or, even if it is advanced, it's advanced at just
one thing, like finding patterns in medical charts or identifying
cat pictures. We don't need to worry about AI, in
other words. This argument seems shortsighted. If it is the

(36:05):
case that we're at a point where we can still
fully control our artificial intelligence, then now is the best
time to plan for the potential future outcomes they might bring,
so that we can ensure as best as we can
that they will continue to remain under our control. It's
probably not the best idea to wait until tomorrow simply
because they don't pose a threat today. Here is Oxford

(36:27):
philosopher Sebastian Farquhar. If we had started working on really
small nuclear explosives in the thirties and thought that we
could sort of maybe see a way to scale it up
to weapons with vastly more explosive power than we'd ever

(36:49):
imagined before, but it wasn't quite clear that it was
going to work or not. Um, I think it would
have been irresponsible at that point not to invest at
least some thought in what would happen if nuclear weapons
did reach the stage that they reached in the forties. And
so that's sort of where I see us now, is not,

(37:09):
you know, not confidently saying superintelligent AGI
is around the corner, but rather saying, you know, this
might be a turning point for
intelligence on this planet, and if that turning point is
around the corner, it would be useful for some people
to start laying the groundwork for making that turning point safe.

(37:31):
But it's difficult to fault people who work in fields
like AI and others, the very same people who witness
firsthand the extremely slow and frustrating progress in the state
of the technology that we on the outside don't see.
Not to mention, it's their careers that are on the
line if the public turns cold to their field. People

(37:51):
who work in AI, nanotechnology, particle physics, and other fields
that will eventually emerge have dedicated and will dedicate their
adult lives to this research. Currently, we rely on these
same people who are working on the science and technology
that may pose an existential risk to tell us whether
they're safe or not, which puts the whole world in

(38:14):
a very difficult position. Here's Eric Johnson, the law professor
who investigated the potential risks posed by particle colliders. I
don't think there's any particle physicist out there who is
a mad scientist bent on destroying the earth. They're good people,
and there's none of them who are, you know,
sociopaths who would knowingly put the earth at

(38:35):
a big risk of being destroyed. But there's
a question about when you're making these subjective
judgments about how to build this model. If you are
self-interested, if your employer, if all of your friends,
if your whole professional life is built around this community,
this project, are you likely to go a little easier

(38:57):
on the risk assessment than someone else might be. And
I think that that's an open question. I think it's
a fair question. In addition to careers, there's also money
at stake, not just public funding for research projects at universities,
but perhaps most intractable of all, corporate profits. Scaring the
public can cause these funds and these profits to dry up,

(39:19):
which has real-world effects, like people losing their
jobs and fields of research freezing over. It's happened before.
In October 1993, the US Congress effectively shuttered the American physics
community when it cut off funding for the Superconducting
Super Collider, a particle accelerator outside of Dallas. It would

(39:43):
have had a ring almost four times the circumference of
the Large Hadron Collider and would have been capable of
achieving particle collisions at three times the energy of the LHC.
It would have been a landmark particle collider, one with
power that we still haven't reached today. It would have been,
but it never had the chance to be, because Congress decided

(40:04):
that the project was too expensive and too difficult to understand.
So after having spent already two billion dollars on the project,
Congress withdrew any further support, and the Superconducting Super
Collider was never finished. Uh, six months ago, the Congress
voted to terminate the Superconducting Super Collider project. As

(40:24):
Eric Johnson explained in the previous chapter, the particle physics
community tends to be arranged around the most powerful collider
at any given time. So American physicists reeled from the
loss of their collider and the subsequent funding cuts that
followed in physics departments and universities across the country. In

(40:45):
the meantime, the LHC began to rise, and the seat
of physics moved through the Earth like a neutrino,
from beneath the plains of Texas to a hundred meters
below the countryside between Switzerland and France. This underscores an
extremely important point. The way to alleviate existential risks in

(41:09):
the future is to deal with them now. But dealing
with them may mean that those of us living today
would be asked to sacrifice our jobs or careers, money, comfort, health,
all for the sole benefit of people who we will
never meet, people whose great-great-grandparents' great-grandparents haven't

(41:30):
even been born yet. To put it in other terms,
what have future humans ever done for us? You could
actually make a pretty good case that if they were
able to figure out a way to pay us to
alter our behavior so that their safe future was guaranteed,
future humans would almost certainly give us whatever we wanted.

(41:50):
But they can't, So it's entirely up to our good
will to choose whether we will take steps to mitigate
the threats to the future of the human race, which
doesn't necessarily bode well for the future of the human race.
Like we talked about before with climate change, there are

(42:12):
things that everyone around the world shares: resources, air, water,
anything that everyone is affected by and benefits from. We
call those things global commons. There's a widely held viewpoint
that commons of any type must be collectively managed because
we humans have a propensity to take as much as

(42:32):
we can from them. If everyone has equal access to
the commons, and the commons is some limited resource, then,
speaking at a very basic level, every rational person has
an incentive to take as much as they can before
everyone else does. If this mentality is present among enough people,
then we quickly deplete whatever common resource was plentiful before. This

(42:57):
is what ecologist Garrett Hardin called the tragedy of the
commons, in a 1968 paper which he wrote amid the same
climate of doom that The Population Bomb was published in.
It too follows the reasoning of Thomas Malthus. The commons
works just fine until there are too many people taking
too much from it. Then it crosses a threshold and

(43:18):
it becomes spoiled for everyone. Hardin wrote that he used
the word tragedy not in the sense of unhappiness, but
rather the remorseless working of things. The tragedy of the
commons is an inevitability, he reasoned. Now it is true
that at any moment, there are plenty of people who
will take no more than their fair share from the commons,

(43:39):
and in some cases even less, and some will act
as stewards for the greater good. But the tragedy of
the commons does exist, and we see it in resistance
to things like caps on carbon dioxide emissions, which affect
the global commons of the atmosphere. We have a hard
enough time managing our current commons, but the future that

(44:01):
we're being asked to protect is also a commons,
one with an added twist: not only is the
future a commons for those of us alive today, we
also share it with those to come. If you think
about what existential risk mitigation is, it's a commons squared,
as it were, in that not only is it a
global public good, existential risk mitigation, but it's also a

(44:23):
transgenerational public good, in that most of the benefits would
be bestowed on these future generations that could come into existence.
They have no say whatsoever in what we are doing now. Unfortunately,
those people to come can't do anything to be good
stewards of our shared commons. It's entirely up to us
alive today to take whatever steps are necessary to protect

(44:46):
the global, pan-generational commons that is the future, which
makes people who haven't been born yet what economists call
free riders. They reap the benefits of the sacrifices others
make without contributing their fair share, in this case simply
because it's impossible for them to. But we humans tend

(45:07):
to resent all free riders, and not just us humans.
We've found behavior all over the animal kingdom that punishes
free riders, which means that resentment is deeply ingrained. Free
riders violate a basic sense of fairness that we hold dear.
The trouble is, when people sense free riders in their
midst, they tend to cut off their contributions, so everyone loses.
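The unraveling just described can be sketched as a toy repeated public goods game. Everything below, the payoff structure, the multiplier, and the rule that one contributor quits each round after spotting free riders, is an illustrative assumption, not a model from the episode.

```python
# Toy model (hypothetical): cooperation unravels when contributors
# withdraw in response to free riders, and average payoffs fall for everyone.

def public_goods_rounds(contributors: int, free_riders: int,
                        endowment: float = 10.0, multiplier: float = 1.6,
                        rounds: int = 5) -> list[float]:
    """Average per-person payoff in each round of a repeated public goods
    game where one contributor quits per round if free riders are present."""
    n = contributors + free_riders
    giving = contributors          # how many still pay in this round
    payoffs = []
    for _ in range(rounds):
        pot = giving * endowment * multiplier      # contributions, multiplied
        share = pot / n                            # split equally among all
        # Contributors earn the share; free riders and quitters also keep their endowment.
        payoffs.append(share + endowment * (n - giving) / n)
        if free_riders > 0 and giving > 0:
            giving -= 1            # one contributor, resenting free riders, drops out
    return payoffs

print(public_goods_rounds(4, 1))   # average payoff declines round by round
print(public_goods_rounds(5, 0))   # with no free riders, payoffs hold steady
```

With even one free rider present, the group's average payoff falls round after round as contributors withdraw; with none, it holds steady, which is the "everyone loses" dynamic in miniature.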

(45:36):
We have a lot to overcome if we're going to
take on our existential risks. On the next episode of
the End of the World with Josh Clark, it is
possible to take something which is not really part of
common sense morality, and then within a generation, children are

(45:59):
being raised everywhere with this as part of just a
background of beliefs about ethics that they live with.
So I really think that we could achieve that. There
is hope.
