
November 30, 2018 42 mins

Josh explains that to survive the next century or two – to navigate our existential threats – all of us will have to become informed and involved. It will take a movement that gets behind science done right to make it through the Great Filter. (Original score by Point Lobo.) 

Interviewees: Toby Ord, Oxford University philosopher; Sebastian Farquhar, Oxford University philosopher




Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
This is not a hoax. This is not a joke.
It is becoming clear that we hold in our hands
the fate of the entire human race. Those of us
alive today are part of a very small group, including
us and perhaps a few generations to follow, who are
responsible for the future of humanity. And if it turns

(00:27):
out that we are alone in the universe, then even
the fate of intelligent life may hang in the balance.
No other humans have ever been in the unenviable position
that we are in. No humans who lived before were actually
capable of wiping the human race from existence. No other
humans were capable of screwing things up so badly and permanently.

(00:51):
And those future humans to come won't be in this
position either. If we fail and the worst happens, there
won't be any future humans. And if we succeed and
deliver the human race to a safe future, those future
humans will have arrived at a place where they can
easily deal with any risks that may come. We will

(01:12):
have made existential risks extinct. Taking all of this together,
everything seems to point to the coming century or two
as the most dangerous period in human history. It's an
extremely odd thing to say, but together, you, me, and
everyone we know appear to be the most vitally important

(01:35):
humans who have ever lived, and as much as is
riding on us, we have a lot going against us.
We are our own worst enemies when it comes to
existential risks. We come preloaded with a lot of biases
that keep us from thinking rationally. We prefer not to
think about unpleasant things like the sudden extinction of our species.

(01:58):
Our brains aren't wired to think ahead to the degree that
existential risks require us to. And really, very little of
our hundred thousand years or so of accumulated human experience
has prepared us to take on the challenge that we
are coming to face, and a lot of the experience
that we do have can actually steer us wrong. It's

(02:19):
almost like we were dropped into a point in history
we hadn't yet become equipped to deal with. Yet, despite
how utterly unbelievable the position that we find ourselves in is,
the evidence points to this as our reality. The cosmic
silence that creates the Fermi paradox tells us that we

(02:40):
are either alone and always have been, or that we
are alone because no other civilization has managed to survive.
If the latter is true, if the Great Filter has
killed off every other civilization in the universe before they
could spread out from their home planets, then we will
face the same impossible step that everyone else has faced before us

(03:00):
as we attempt to move off of Earth. And if
the Great Filter is real, then it appears to be
coming our way in the form of the powerful technology
that we are beginning to create right now. But even
granting that the Great Filter hypothesis may be faulty, that
we aren't alone, that there really is intelligent life elsewhere,

(03:21):
we still find ourselves in the same position. We are
in grave danger of wiping ourselves out. There doesn't appear
to be anyone coming to guide us through the treacherous
times ahead. Whether we're alone in the universe or not,
we appear to be on our own in facing our
existential risks. All of our shortcomings and flaws notwithstanding, there

(03:44):
is hope. We humans are smart, wildly ingenious creatures, and
as much as we like to think of ourselves as
something higher than animals, those hundreds of millions of years
of animal evolution are still very much in our nature.
And when we're backed into a corner that animal ancestry

(04:05):
comes rising to the surface. We fight, We rail against
our demise. We survive. If we can manage to join
that creature habit to the intelligence we've evolved that really
does make us different from other animals, then we have
a chance of making it through the existential risks that
lie waiting ahead. If we can do that, we will

(04:26):
deliver the entire human race to a safe place where
it can thrive and flourish for billions of years. It's
in our ability to do this. We can do this.
Some of us are already trying, and we've already shown
that we can face down existential risks. We've done it before.

(04:53):
We encountered the first potential human made existential risk we've
ever faced, in New Mexico, of all places. On July sixteenth, nineteen forty five,
at just before five thirty in the morning, the desert outside of Alamogordo
was the site of the first detonation of a nuclear
bomb in human history. They called it the Trinity Test.

(05:16):
At the moment the bomb detonated, the pre-dawn sky
lit up brighter than the sun, and the landscape was
eerie and beautiful in gold and gray and violet, purple
and blue. The explosion was so bright that one of
the bomb's designers went blind for nearly half a minute

(05:38):
from looking directly at it. By the blast site, the
sandy ground instantly turned into a green glass of a
type that had never existed on Earth before that moment.
They called it trinitite to mark the occasion, and then
they buried it so no one would find it. On
this day, at this moment, the world was brought into

(05:58):
the atomic age, an age of paranoia among everyday people
that the world could end at any moment. In less
than a month, America would explode an atomic bomb over
Hiroshima in Japan, and sixty five thousand people would die
in an instant. Another fifty five thousand people would die
from the bomb's effects over the next year, and three

(06:21):
days after Hiroshima, America would drop a second bomb over
Nagasaki and another fifty thousand people would die. But even
before all of the death and destruction that America wreaked
on Japan in August of nineteen forty five, even before the Trinity Test
that day in July, nuclear weapons became our first potential

(06:41):
human made existential threat when the scientists building the bomb
wondered if it might accidentally ignite the atmosphere. Edward Teller
was one of the leading physicists working on the Manhattan Project,
the secret program to build America's first nuclear weapons. By chance,
Teller was also one of the physicists that Enrico Fermi

(07:03):
was having lunch with when Fermi asked where is everybody,
and the Fermi paradox was born. Teller was also pivotal
in the nuclear arms race that characterized the Cold War
by pushing for America to create a massive nuclear arsenal.
In nineteen forty two, three years before the Trinity Test, Edward Teller raised

(07:23):
the concern that perhaps the sudden release of energy that
the bomb would dump into the air might also set
off a chain reaction among the nitrogen atoms in the atmosphere,
spreading the explosion from its source in New Mexico across
the entirety of Earth. A catastrophe like that would burn
the atmosphere completely off of our planet, and that would

(07:45):
of course lead to the sudden and immediate extinction of
virtually all life, humans included. Almost immediately, a disagreement over
whether such a thing was even physically possible grew among
the physicists on the project. Some, like Enrico Fermi, were
positive that it was not possible, but others, like Teller

(08:07):
and the future head of the project, J. Robert Oppenheimer,
weren't so sure. Eventually, Oppenheimer mentioned the idea to Arthur H. Compton,
who was the physicist that was the head of the
project at the time. Compton found the idea grave enough
to assign Teller and a few others to figure out
just how serious the threat of accidentally burning off the

(08:28):
atmosphere really was. The group that worked on the calculations
wrote a paper on the possibility that the bomb could
set off a nuclear chain reaction in Earth's atmosphere, igniting it.
Even using assumptions of energy that far exceeded what they
expected their tests to produce, the group found that it
was highly unlikely that the bomb would ignite the atmosphere.

(08:51):
Two years later, when the bomb was ready, they detonated
it. On the morning of the Trinity Test, Enrico Fermi took
bets on whether the atmosphere would ignite after all. It
is to his credit that Arthur Compton took the possibility

(09:11):
of the nuclear test igniting the atmosphere seriously. The scientists
and military people working on the secret atomic bomb project
had every incentive to keep pushing forward at any cost.
At the time, it was widely believed that Hitler and
the Third Reich were closing in on creating an atomic
bomb of their own, and when they completed it, they

(09:33):
would surely savagely unlea should across Europe, Africa, the Pacific,
and eventually the United States. In two when the idea
of the bomb might ignite the atmosphere was first raised,
it was far from clear who would be left standing
when the Second World War was over. And yet Compton
decided that the potential existential threat the nuclear test may

(09:56):
pose would be the worst of any possible outcomes. He
didn't call it an existential threat, but he knew one
when he saw one, even the first one. Better to
accept the slavery of the Nazis than to run the
chance of drawing the final curtain on mankind, Compton said
in a nineteen fifty nine interview with the writer Pearl Buck, years after

(10:16):
the test. And so it would
appear that the first human made existential risk we ever
faced was handled just about perfectly. But there's still a
lot left to unpack here. Buck reported that Compton had
drawn a line in the sand, as it were. He

(10:36):
established a threshold of acceptable risk. He told the physicists
working under him that if there was a greater than
a three in a million chance the bomb would ignite
the Earth's atmosphere, they wouldn't go through with testing it.
It's not entirely clear what Compton based that threshold on.
It's not even clear if the threshold was a three
in a million chance or a one in a million,

(11:00):
and some of the Manhattan Project physicists later protested that
there wasn't any chance at all, that either Compton had misspoken or
Buck had misunderstood. Regardless, the group that wrote the safety
paper found that there was a non-zero possibility that
the test could ignite the atmosphere, meaning there was a chance,
however slight, that it could. It was possible for such

(11:23):
a chain reaction to occur. After all, the atmosphere is
made of energetic vibrations that we call particles, and those
particles do transfer energy among themselves, but the energies involved
in the nuclear bomb should be far too small, the
paper writers concluded. It would take perhaps a million times
more energy than their plutonium core was expected to release.

(11:45):
For some of the scientists, the chance was so small
that it became transmuted in their minds to an impossibility.
They rounded that figure down for convenience's sake. The chance
was so small that to them there might as well
have been no chance at all. But as we've learned
in previous episodes, deciding what level of risk is an

(12:06):
acceptable level of risk is subjective. There are lots of
things that have much less of a chance of happening
than three in a million odds of accidentally igniting the atmosphere.
If you live in America, you have a little less
than a one in a million chance of being struck
by lightning this year. You have a roughly one in
two hundred and ninety million chance of winning the Powerball.

(12:31):
Each person living around the world has something like a
one in twenty seven million chance of dying from a
shark attack during their lifetime. Depending on your perspective, a three
in a million chance of bringing about the sudden demise
of life on Earth from a nuclear test isn't necessarily
a small chance at all, especially considering the stakes. And

(12:53):
yet it was up to Compton to decide for the
rest of us that the test was worth the risk.
Arthur Holly Compton, aged sixty, living in Chicago, Illinois,
a Nobel Prize-winning physicist, father of two and tennis enthusiast,
was put in a position to decide for the rest
of the two point three billion humans alive at the

(13:13):
time that three chances in a million their project might
blow up the atmosphere was an acceptable level of risk.
The idea that a single person can make a decision
that affects the entire world is a hallmark of existential risks.
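As a purely illustrative aside, the odds mentioned above can be lined up in a few lines of Python. Every figure below is one of the round numbers quoted in this episode, not an authoritative statistic, and the final expected-lives line is just one hedged way of reading the threshold, not a claim the episode makes.

    # Illustrative only: round figures as quoted in the episode.
    threshold    = 3 / 1_000_000      # Compton's reported acceptable risk
    lightning    = 1 / 1_000_000      # chance of being struck by lightning this year (US)
    shark_attack = 1 / 27_000_000     # lifetime chance of dying from a shark attack
    powerball    = 1 / 290_000_000    # chance of winning the Powerball

    print(f"Threshold vs lightning risk:    {threshold / lightning:.0f} times higher")
    print(f"Threshold vs shark-attack risk: {threshold / shark_attack:.0f} times higher")
    print(f"Threshold vs Powerball odds:    {threshold / powerball:.0f} times higher")

    # One way to weigh the stakes: multiply the risk by the number of people exposed to it.
    population_1945 = 2_300_000_000
    print(f"Expected lives at stake under the threshold: about {threshold * population_1945:,.0f}")

Run as a sketch, this prints that the threshold sits about three times above the lightning figure, dozens of times above the shark-attack figure, hundreds of times above the Powerball odds, and works out to several thousand expected lives when spread over the world's population at the time.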

(13:35):
Not only does the existential risk itself pose a threat, but the
very fact that a single human being is making the decision,
with all of their biases and flaws and stresses, puts
us all at risk as well. There were a number
of different pressure points that the people involved in the
Manhattan Project would have felt pushing them towards the decision
to carry out the test. There were the Nazis, for one,

(13:58):
and the pressure from the U.S. military to
save the world from the Nazis. Their careers and reputations
were at stake. There was also the allure of a
scientific challenge. No one had ever done what the people
working on the Manhattan Project did up to the moment
of the trinity test. No one was entirely sure that
a nuclear explosion was even possible. Consciously or not, these

(14:22):
things influenced the decisions of the people working on the project.
This is not to say that there was any cavalier
disregard for the safety of humanity. They took the time
to study the issue rather than just brushing it off
as impossible after all. But the point is that just
a handful of people working in secret were responsible for

(14:42):
making that momentous decision, and those people were only human.
It's also worth pointing out that a lot of the
science that the safety paper writers used was very new
at the time. The nuclear theory they were working off
of was less than forty years old, the data they

(15:02):
had on fission reactions was less than twenty years old,
and the first sustained nuclear fission reaction wasn't carried out
until nineteen forty two, when Fermi held the first test on that squash
court at the University of Chicago. And don't forget there
had never been a nuclear explosion on Earth before. All
of that newness, by the way, showed up during the

(15:24):
Trinity test, when the bomb produced an explosive force about
four times larger than what the project scientists had expected.
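A rough back-of-the-envelope sketch, using only the round figures quoted in this episode (the safety paper's roughly million-fold energy margin and Trinity's roughly fourfold overshoot; neither is a precise historical value), shows how those two numbers sit next to each other.

    # Back-of-the-envelope only: round figures as quoted in the episode, not precise values.
    expected_yield = 1.0                               # expected energy release, arbitrary units
    ignition_threshold = 1_000_000 * expected_yield    # roughly a million times more energy needed to ignite the atmosphere
    actual_yield = 4.0 * expected_yield                # Trinity came in about four times larger than expected

    remaining_margin = ignition_threshold / actual_yield
    print(f"Margin left after the fourfold surprise: about {remaining_margin:,.0f} to one")
    # about 250,000 to one; the overshoot barely dents the margin, so the worry raised
    # below is about how young and uncertain the theory behind the estimate was, not its size.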
All of this is to say that the data and
understanding of what they were attempting with the trinity test
was still young enough that they could have gotten it wrong,
and we find ourselves in that same situation today. We

(15:47):
see it in the types of experiments that are carried
out in particle colliders and biosafety labs around the world.
We see it in the endless release of self-improving
neural nets. Our understanding of the unprecedented risks these things
pose is lacking to a dangerous degree. Depending on how

(16:07):
the chances of a risk change, the threat it poses
can seem to get larger or smaller, but really the reality of
the threat stays the same. It's our awareness of it
that changes. Awareness is the way we will survive our coming
existential threats. There are two ways of looking

(16:39):
at our prospects for making it to a state of
technological maturity for humanity where we have safely mastered our
technology and can survive beyond the next century or two:
gloom and doom, and optimism. The gloom and doom camp
makes a pretty good case for why humans won't make
it through this, possibly the greatest challenge our species will

(17:02):
ever face. There's the issue of global coordination, the kind
of like-mindedness that we will have to create among every
country in the world to successfully navigate the coming risks.
Like we talked about in the last episode, we will
almost certainly run into problems with global coordination. Some nations
may decide that they'd be better off going it alone

(17:23):
and continuing to pursue research and development that the rest
of the world has deemed too risky. This raises all
sorts of prickly questions that we may not have the
wherewithal to address. Does the rest of the world agree
that we should invade non-complying countries and take over
their government? In a strictly rational sense, that's the most

(17:44):
logical thing to do. Rationally speaking, toppling a single government,
even a democratically elected one, is a small price to
pay to prevent an existential risk that can drive humanity
as a whole to permanent extinction. But we humans aren't
strictly rational, and something as dire as invading a country

(18:06):
and toppling its government comes with major costs, like the
deaths of the people who live in that country and
widespread disruptions to their social structures. If the chips are down,
would we go to such an extreme to prevent our extinction?
There's also the issue of money. Money itself is not

(18:27):
necessarily the problem. It is what funds scientific endeavors. It's
what scientists are paid with. Money is what we will
pay the future researchers who will steer us away from
existential risks. The Future of Humanity Institute is funded by money.
The problem money poses where existential risks are concerned is
that humanity has shown that we are willing to sell

(18:48):
out our own best interests and the interests of others
for money and market share, or more commonly, that we're
willing to stand by and let others do it, and
with existential risks, greed would be a fatal flaw. Everything
from the tobacco industry to the fossil fuel industry, the

(19:08):
antifreeze industry, to the infant formula industry, all of
them have a history of avarice, of frequently and consistently
putting money before well-being, and on a massive and
global scale. How can we expect change when money is
just as tied to the experiments and technology that carry

(19:29):
an existential risk? Also stacked against us is the bare
fact that thinking about existential risks is really really hard.
Analyzing existential threats demands that we trace all of the
possible outcomes that thread from any action we might take,
and look for unconsidered dangers lurking there. They require us

(19:52):
to think about technology that hasn't even been invented yet,
to look a few more moves ahead on the cosmic
chessboard than we're typically capable of seeing. To put it mildly,
we're not really equipped to easily think about existential risks
at this point. We also have a history of overreliance
on techno optimism, that idea that technology can save us

(20:16):
from any crisis that comes our way. Perhaps even thinking
that reaching the point of technological maturity will protect us
from existential risks is nothing more than an example of
techno optimism, And as we add more existential risks to
our world, the chances increase that one of them may
bring about our extinction. It's easy to forget since it's

(20:39):
a new way of living for us, But the technology
we're developing is powerful enough and the world is connected
enough that all it will take is one single existential
catastrophe to permanently end humanity. If you take the accumulated
risk from all of the biological experiments in the unknown

(21:00):
number of containment labs around the globe, and you add
it to the accumulated risks from all of the runs
and particle colliders online today and to come, and you
add the risks from the vast number of neural nets
capable of recursive self improvement that we create and deploy
every day. When you take into account emerging technologies that

(21:21):
haven't quite made it to reality yet, like nanobots and
geoengineering projects, and the many more technologies that will pose
a risk that we haven't even thought of yet. When
you add all of those things together, it becomes clear
what a precarious spot humanity is truly in.
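To make that accumulation point concrete, here is a small hypothetical sketch; the probabilities below are invented purely for illustration and are not estimates from this episode or from any research.

    # Hypothetical illustration: many small, independent risks compound over time.
    # None of these numbers are real estimates; they only show the arithmetic.
    per_source_annual_risk = 1e-4    # each source: a 0.01 percent chance of catastrophe per year
    number_of_sources = 10           # labs, collider runs, deployed neural nets, and so on
    years = 100

    survive_one_year = (1 - per_source_annual_risk) ** number_of_sources
    survive_century = survive_one_year ** years
    print(f"Chance of at least one catastrophe in a century: {1 - survive_century:.1%}")
    # roughly 9.5 percent: risks that look negligible one at a time
    # add up once there are many of them and they persist for decades.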

(21:41):
So you can understand how a person might look at just how
intractable the problem seems and decide that our doom is complete.
It just hasn't happened yet. I think we can be
a bit more optimistic than that. This is Toby Ord again,
one of the earliest members of the Future of Humanity Institute. Yeah,

(22:03):
I think that this is actually a clear and obvious
enough idea that people will wake up to it and
embrace it, much more slowly than we should. But
I think that we will realize that this is
a central moral issue of our time and rise to
the challenge. But to begin to rise to the challenge,

(22:24):
we need to talk about existential risks seriously. The way
that anything changes, the way an idea or an issue
comes to be debated and its merits examined, is that
people start talking about it. If this series has had
any impact on you, and if you have, like I have,
come to believe that humanity is facing threats to our

(22:46):
existence that are unprecedented, with consequences that, on the whole
we are dangerously ignorant of, then it is imperative that
we start talking about those things. You can start reading
the articles and papers that are already being written about them,
start following people on social media who are already talking
about existential risks, like David Pearce and Eliezer

(23:08):
Yudkowsky and Sebastian Farquhar. Start asking questions about existential risks
of the people we elect to represent us. I think
we often feel that the powers that be must already
have these things in hand. But when I've talked with
government about existential risk, even a major national government like

(23:31):
the United Kingdom, they tend to think that these issues,
saving civilization and humanity itself, are above their pay grade,
and not really something they can deal with in a
five-year election cycle. But then it turns out
there's no one else above them dealing with them either.
So I think that there's more of a threat from
complacency in thinking that someone must have this managed. In

(23:55):
a rational world, someone would. It's up to the rest
of us, then, to start a movement. The idea of
a movement to get humanity to pay attention to existential
risks sounds amorphous and far off, but we've founded movements
on far off ideas before. If enough people start talking,

(24:17):
others will listen. Just a handful of books got the
environmental movement started, like the ones written by the Club
of Rome and Paul Ehrlich, but especially Rachel Carson's nineteen
sixty two book Silent Spring, which warned of the widespread
ecological destruction from the pesticide DDT. Carson's book

(24:38):
is credited with showing the public how fragile the ecosystems
of the natural world can be and how much of
an effect we humans have on them. Awareness of things
like fertilizer runoff, deforestation, indicator species, concepts that you can
find being taught in middle schools today, were unheard of

(24:58):
at the beginning of the nineteen sixties. Most people just
didn't think about things like that. But when the environmental
movement began to gain steam, awareness of environmental issues started
to spread. Within a decade of Silent Spring's release, nations
around the world started opening government agencies that were responsible

(25:19):
for defending the environment. The world went from ignorance about
environmental issues to establishing policy agencies in less than ten years.
And I think that we could do some of that,
and it really shows that it is possible to take
something which is not really part of common-sense morality,
and then within a generation, children are being raised

(25:41):
everywhere with this as part of just a background of
beliefs about ethics that they live with. So I
really think that we could achieve that. There is much
work to be done with environmental policy, that is definitely
granted, but we are working on it. Nations around the
world on their own and together are spending money to
pay scientists and researchers to study environmental issues, come up

(26:05):
with an up to the moment understanding of them, and
establish best practices for how to protect Earth from ourselves. The
trouble comes when we decide not to listen to the
scientists that we've asked to study these problems. Existential risks
call for this same kind of initiative. We have to
establish a foundation, provide a beginning that others who follow

(26:28):
can build upon. Just like Eric Drexler posed the rather
unpopular gray goo scenario regarding nanobot design, just like Eliezer
Yudkowsky and Nick Bostrom identified that AI should have friendliness
designed into it, just like others have raised the alarm
about risks from biotech and physics, if we examine the

(26:50):
problems we face, we can understand the risks that they pose.
And if we understand the risks that they pose, then
we can make an informed decision about whether they're worth pursuing.
The scientists working on the Manhattan Project did the same
thing when they took the possibility seriously that they might
accidentally ignite the atmosphere, so they investigated the problem to

(27:14):
see if it would. We don't at this point have
a clue as to what the possible outcomes of our
future technology may be, and trying to guess at something like
that today would be like guessing back in the nineteen
fifties about what effects clear-cutting old-growth forests in
the Amazon Basin would have on global cloud formation. It's

(27:35):
just too arcane a question for a time when
we don't have enough of the information we need to
respond in any kind of informed way. We don't even
know all of the questions to ask at this point,
but it's up to us alive now to start figuring
out what those questions are. Working on space flight is

(27:55):
another good example of where we can start. Among people
who study existential risks, it is largely agreed on that
we should begin working on a project to get humanity
off of Earth and into space as soon as possible.
Working on space colonization does a couple of things that
benefit humanity. First, it gets a few of our eggs

(28:17):
out of the single basket of Earth, so should an
existential risk befall our planet, there will still be humans
living elsewhere to carry on. And Second, the sooner we
get ourselves into space, the larger our cosmic endowment will be.
One of the things we found from studying the universe
is that it appears to be expanding outward and apart

(28:39):
over deep time scales, the kind of time scales we
humans will hopefully live for. That could be an issue
because eventually all of the matter in the universe will
spread out of our reach forever. So the sooner we
get off Earth and out into the universe, the more
of that material we will have for our use to
do with whatever we can dream up. We are not

(29:02):
going to colonize space tomorrow. It may take us
hundreds of years of effort, maybe longer, but that's exactly
the point. A project that is so vital to our
future shouldn't be put off because it seems far off.
The best time to begin working on a space colonization
program was twenty years ago. The second best time is today.

(29:24):
We are working on getting to space, true, but there's
a world of difference between the piecemeal efforts going on
across Earth now and the kind of project we could
come up with if we decided to put a coordinated
global human effort behind spreading out into space. Imagine what
we could achieve if humanity worked together on what would

(29:45):
probably be our greatest human project. Imagine the effect that
it would have on people across the globe if we
worked together to get not a nation, not a hemisphere,
but the human race itself into space. The same holds
true with virtually every project for taking on existential risks.

(30:05):
We should begin working on them as soon as possible
to build a foundation for the future, and we should
make tackling them a global effort. I hope by now
I've made it abundantly clear that subverting scientific progress won't

(30:27):
protect us from existential threats. The opposite is true. We
need a scientific understanding of the coming existential threats we
face to get past them. The trick is making sure
that science is done with the best interests of the
human race in mind. It's not something we commonly think
of ourselves as, but you and I and everyone else

(30:49):
in the world is a stakeholder in science. And this
is truer than ever before with the rise of existential threats,
since the whole world can be affected by a single experiment now.
In an article in The Bulletin of the Atomic Scientists, physicist H. C.
Dudley criticized Arthur Compton and the Manhattan Project for their

(31:11):
decision that a three in a million chance was an
acceptable risk for detonating the first nuclear bomb. They were
all rolling dice for high stakes, and the rest of
us did not even know we were sitting in the game,
Dudley wrote. The same is true today in making assumptions
about whether cosmic rays make an acceptable model for proton
collisions in the Large Hadron Collider, or that forcing a

(31:34):
mutation that makes an extremely deadly virus easier to pass
among humans is a good way forward in virology. Those
scientists are making decisions that have consequences that may affect
all of us, So we should have a say in
how science is done. Science is meant to further human
understanding and to improve the human condition, not to further

(31:56):
the prestige of a particular scientist's career. When those two conflict,
humanity should come first. But to say that the public
has and how science is done has to be an
informed say, no pitchforks and torches. This is why a
movement that takes existential risks seriously requires trustworthy, skilled, trained

(32:18):
scientists to make our say an informed one. We rely
on them for that. Science isn't the enemy. If we
abandon science, we are doomed. If we continue to take
the dangers of science casually, we are doomed. The only
route through the near future is to do science right,

(32:38):
and scientists aren't the enemy either. They have often been
the ones who have sounded the alarm when science was
being done recklessly or when a threat emerged that had
been overlooked. Those physicists who decided that three in a
million was an acceptable chance of burning off Earth's atmosphere
were the same ones who figured out that there was
something to be concerned with in the first place. It

(33:01):
was microbiologists who called for a moratorium on gain-of-function
research after the H5N1 experiments. It
was particle physicists who wrote papers questioning the safety of
the Large Hadron Collider. If you're a scientist, start looking
seriously at the consequences of your field, and if work
within it poses an existential risk, start writing papers about it.

(33:25):
Start analyzing how it can be made safe. Take custody
of the consequences of your work. The people who are
dedicated to thinking about existential risks are waiting for you
to do that. This is Sebastian Farquhar. To a certain extent,
organizations like the FHI, the Future of Humanity Institute,
their job is just to poke the rest of the

(33:48):
community and sort of say, by the way, this
is a thing, and then for AI researchers or biology
researchers to take that on and to make it their
own projects. And the sooner and the more FHI
can step out of that game and leave it to
those communities, the better. Many of these solutions are already

(34:10):
being worked on. Scientists around the world are researching large
problems and raising alarms. But since we have a limited
amount of time, since we're racing the clock, we have
to make sure that we don't waste time working on
risks that seem big but don't qualify as genuine existential threats,
and we can't tell one type from the other until

(34:32):
we start studying them. The biggest sea change, though, has
to come from society in general. We have to come
together like we never have before. We have to put
scientists in a position to understand existential risks, and we
have to listen to what they come back and tell us.

(35:00):
It is astoundingly coincidental that at the moment in our
history when we become aware of just how brief our time
here has been and just how long it could last,
we also realize that our history could come to an
early permanent end, very soon. At the beginning of the series,
I said that if we go extinct in the near future,

(35:22):
it would be particularly tragic, and that is true. Human
civilization has been around only ten thousand years. And remember
that a lot of people who think humanity could have
a long future ahead of us believe that there could
be at least a billion years left in the lifetime
of our species. If we've created almost every bit of

(35:44):
our shared human culture over just the last ten thousand
years or so, developed everything it means to be a
human alive today in that short time span, think about
what we could become and what we could do with
another nine hundred and ninety thousand years. It is not our

(36:05):
time to go yet. But there is something we have to consider.
The Great Filter has to this point been total. It
is possible that even if we come together, even if
humanity takes our existential risks head on, that it won't

(36:28):
be enough. That there will be something we miss, some
detail we hadn't considered, some new thing that grabs us
by our ankle just as we are making it through
and plucks us right out of existence. If we go,
then so many unique and valuable things go with us.
The whole beautiful pageant of humanity will come to an end.

(36:52):
There will be no one to sing songs anymore, no
one to write books and no one to read them.
There will be no one to cry, no one to
hug them when they do. There will be no one
to tell jokes and no one to laugh. There will
be no friends to share evenings with, and no quiet
moments alone at sunrise, good or bad. Everything we've ever

(37:14):
done will die with us. There will be no one
to build new things, and the things that we have
built will eventually crumble into dust. Those energetic vibrations that
make up us and everything we've ever made will disentangle
and go their separate ways along their quantum fields, to
be taken up into new forms down the line, in

(37:35):
a universe where humans no longer exist. If we go,
it seems that intelligence dies with us. There will be
nothing left to wonder at the profound vastness of existence
and appreciate the extraordinary gift that life is. There will
be no one with the curiosity to seek out answers

(37:56):
to the mysteries of the universe, no one to even
know that the mysteries exist. There will be no one
to reciprocate. When the universe looks in on itself, there
will be nothing looking back at it. But as genuinely
sad as the idea of humanity going extinct forever is,

(38:17):
we can still take some comfort in the future for
the universe. We can take heart that if we die,
life will almost certainly continue on without us. Remember, life
is resilient. Over the course of its tenure on Earth,
life has managed to survive at least five mass extinctions

(38:38):
that killed off the vast majority of the creatures alive
on Earth at the time. The life on Earth today
is descended from just that fraction of a fraction of
a fraction of a fraction of a fraction of life
that managed to hang on through each of the times
Death visited Earth, and every time after Death left, life

(38:59):
poked its head back, came back up to the surface,
and began to flourish again. If we humans call Death
back to our planet, life will retreat to its burrows
and to the bottom of the sea to hide until
it's safe to re emerge. And perhaps when it does
emerge again, one of the members of that community of

(39:20):
life that survives us will rise to take our place,
to fill the void that we've left behind, just like
we filled the void left after the last mass extinction.
Perhaps some other animal we share the Earth with now
will evolve to become the only intelligent life in the
universe and take their chance at making it through the

(39:41):
Great Filter. Perhaps someday they will build their own ships
that will break their bonds to Earth and take them
into space in search of new worlds to explore, just
like we humans tried so long before. The

(40:15):
End of the World with Josh Clark is a production
from How Stuff Works and iHeartMedia. It was
written and presented by me, Josh Clark. The original score
was composed, produced and recorded by Point Lobo. The head
sound designer and audio engineer was Kevin Senzaki. Additional sound
designed by Paul Funera. The supervising producer was Paul Deckan.

(40:35):
A very special thanks to Umi Clark for her
assistance and support throughout the series' production, and to Momo
too. Thank you to everyone at the Future of Humanity Institute,
and thanks to everyone at How Stuff Works for their
support and especially Sherry Larson, Jerry Rowland, Conal Byrne, Pam Peacock,
Nathan Natoski, Tary Harrison, Ben Bowlin, Tamika Campbell, Noel Brown,

(41:00):
Jenny Powers, Chuck Bryant, Christopher Hasiotis, Eves Jeffcoat, Matt Frederick,
Tom Boutera, Chris Blake, Lyle Sweet, Ben Juster, John Goforth,
Mark Freshour, Britney Bernardo, and Keith Goldstein. Thank you
to the interviewees, research assistants and vocal contributors Dana Backman,

(41:21):
Stephen Barr, Nick Bostrom, Donald Brownlee, Philip Butler, Coral Clark,
Sebastian Farquhar, Toby Halbrook, Robin Hanson, Eric Johnson, Don Lincoln,
Michelangelo Mangano, David Madison, Matt McTaggart, Ian O'Neill, Toby Ord, Casey Pegram,

(41:43):
Anders Sandberg, Kyle Scott, Ben Schlayer, Seth Shostak, Tanya Singh,
Ignacio Taboada, Beth Willis, Adam Wilson, Cat Sebis, Michael Wilson,
Cat Sebas, and Brett Wood. And thank you for listening.
