
December 21, 2018 44 mins

How long do humans have left on Earth? As species go, humanity has had a brief, incredibly transformative run here on Earth. We've mined resources, farmed food, hunted animals, built cities and polluted ecosystems across the globe. There's no denying we've also made tremendous technological breakthroughs -- but could some of those same innovations ultimately become the agents of our collective demise? Join the guys as they interview Stuff You Should Know cohost Josh Clark about the science behind his newest podcast, The End Of The World.


They don't want you to read our book: https://static.macmillan.com/static/fib/stuff-you-should-read/



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
From UFOs to psychic powers and government conspiracies, history is riddled with unexplained events. You can turn back now, or learn the stuff they don't want you to know. Hello,

(00:24):
welcome back to the show. My name is Matt. My name is... they call me Ben. We are joined with our super producer, Paul, Mission Control, Decant. Most importantly, you are you. You are here. That makes this Stuff They Don't Want You To Know. Gentlemen, we are at the end of the year, and today we are talking about the end of the world. Yes, how humanity shall, perhaps... Yes,

(00:49):
and we, you know, do a better job and fix things. Yeah. Yeah, sure, we're gonna be able to fix it. Now, we are not confronting, uh, these existential threats alone. Today we are joined with an expert and a friend of the show, a longtime friend of ours both on and off the air. He's kind of like our big brother, in a way, friends and neighbors. Uh, Josh Clark. Hey, guys. Hey man,

(01:13):
thank you for having me on. I've always thought of you as a bit of a big brother figure. Kind of feel like you're always watching. Like, that's a little more my style, for sure. Like, hey, you can do it. Kind of... well, in my mind, it's more of watching you walk and then trying to find a way to fit in those footsteps, at least in some small way, like matching gait. Like, kind of like... yeah, yeah, really

(01:36):
just try. And that explains so much. My gait is more of, like, a, um, it's kind of like a speedwalk slash Jazzercise cross thing, so it's tough to replicate. Matt, I've seen you try, and I think it's adorable. So, so hopefully, hopefully, Matt, you will have some time to perfect that gait. And that leads us to the big question, Josh.

(02:00):
You recently created a podcast called The End of the World,
and one of the questions that a lot of people
have when we talk about the end of the world
is really a question of timeline. You know, tick tick
tick tick. How much time does Matt Frederick have to
practice his gait? Matt, I would give you one to

(02:22):
two centuries, tops. Whoa. Which, which sounds like a lot of time. It does. You're, you're like, well, I'll be long in the grave, most likely. Although, I don't know, you might live through a substantial portion of that. But if you step back and think, what about the children, what about the grandchildren, what about the future humans who will come down the line over the next hundred,

(02:44):
two hundred years. And then if you look beyond that
and really take a step back and look at just
how long some people expect humanity to continue on into
the billions of years, all of a sudden, the idea
of going extinct in the next hundred to two hundred
years suddenly becomes really, um, terrifying and scary, if you can kind of step outside of yourself. Well, quickly, let's

(03:07):
just get kind of an idea of what happens if humanity just does great. We don't kill ourselves off, some giant external thing doesn't wipe humanity out. That billion years that you talked about for humanity, is that the cutoff date, when the Sun essentially creates, um, all death everywhere in our solar system? Right. Yeah, yeah, that's

(03:30):
a... that's a thing, for sure. Um, a lot of people say, if humanity didn't do anything, just kept plodding along, and rather than doing everything right, we just kind of got lucky at every possible break, um, a billion years is about how long we would last, because that's
about how long the Earth will last in its place
in the Solar system. The Sun is gonna grow and

(03:50):
it will eventually, um, basically swallow the Earth, just totally subsume it, frying everything. Yes, it's gonna... it'll be bad news for the Earth. But we've got a good billion years, and we can have a lot of fun and do a lot of cool stuff in a billion years. That's the low end, right? That's if we do nothing but just hang around on Earth. But... people are kind of dumb. You can make that case. It's true.

(04:13):
But there's also a lot of hope for future ingenuity. And then also, I think, um, as far as dumbness goes, contemporarily, um, it's more complacency than being dumb, you know what I'm saying. It's almost like there's some weird death cult mentality among a lot of people on Earth, where it's just like, if humanity goes extinct, that's just what happens, maybe we deserve it. Which drives me

(04:35):
bananas, that sentiment, that idea that maybe humanity deserves to go extinct, we've screwed it up for so long, maybe this is what we have coming. And I disagree with that tremendously. Um, but some experts say, okay, uh, if we continue on, like, being humans, and the kind of humans that we are at this point, um,

(04:57):
we're probably going to do some pretty interesting things, like get off of Earth in the near future. Actually, maybe a hundred years, maybe two hundred years, maybe five hundred years; even if it's a thousand years, that's plenty of time for us to just basically be like, adios, Earth, that was nice. Or if you're starting to look into, like, the millions of years and all the things we could possibly do, maybe we can move Earth, or maybe we

(05:18):
can prevent the Sun from growing, because actually what it's doing is burning the last of its fuel. There's plenty of ways we could make the Sun burn more efficiently and extend its life by billions of years, and then save Earth in the process. There's a lot of stuff we could be doing. So when you start to look into that, then you begin to run into the top limit for humanity's, um, lifespan, going into the billions

(05:42):
and billions and billions of years, possibly to the heat death of the universe, which is untold billions of years away in the future. That's the final end of the franchise, right? Not necessarily, not necessarily, because one thing I came across, and it kind of comes in at the end of the physics episode, is we are starting to figure out how we could theoretically create lab-grown universes. So who

(06:06):
knows? Maybe we can learn to grow universes, and when this one is starting to wind down, we can be like, oh, well, we're gonna move over here into this fresh new universe, and that extends humanity's lifespan indefinitely. So when you take all of this into account, all of the possibility that we have laying out ahead of us, it's really unnerving to think that

(06:27):
those of us alive today have the weight of all that on our shoulders. It's up to us to save that future for the rest of humanity to come. And those are the stakes right now. So, we've got the context in terms of timeline here; let's look at the basic, the bare-bones definitions. The end of the

(06:49):
world deals with the phrase that we mentioned earlier in the show, existential risk. What, for our audience, what is an existential risk? And are some, let's see, more imminent than others? So, um, great question. Um, existential risks are something that have been around for a very long time.

(07:11):
We've lived under, like, natural existential risks, like the Sun growing and overwhelming Earth and burning it to a crisp. That's a natural existential risk. An existential risk is, uh, anything that can wipe out humanity forever, like, we're just gone, there's no more humans left, or the humans that are left can never get back to whatever place in history

(07:33):
that we fell from. That's an existential risk. And there's this guy over in Oxford, uh, at Oxford University, who spends his time thinking about these things and has assembled this basically, like, super team of thinkers, who are all thinking about what to do about existential risks, what existential risks we're not thinking about, and then what to do

(07:55):
if we do manage to pass these existential risks that are coming our way, um, and off into the billions of years, all the amazing things we could do. So, um, the guy's name is Nick Bostrom, and the center that he founded is called the Future of Humanity Institute. And they're not the only group thinking about this, but, um, they're kind of like the OG group who started thinking about these things. And Nick Bostrom is kind of

(08:17):
widely viewed as basically the father of the field of existential risk mitigation, um, and he really kind of took some disparate thoughts that guys like Derek Parfit and, um, Carl Sagan had back in about the eighties and put them together into, like, a genuine, refined philosophy that's basically

(08:37):
based on: we need to do something pretty soon, or we probably aren't going to make it. Tick tock. Yeah, that's true. That's my question, though. Like, are existential risks inherently something that's external to our actions as humans on the planet, or things that we do that could potentially,

(08:57):
you know, cause us to not exist as a species, like, you know, climate change and things like that, like, impacts that we have on the planet? Or is it all about, like, things that are beyond our control, that are bigger than us? It's both, for sure, um. And so there's natural existential risks, like the Sun growing, or an asteroid like the kind that took out the dinosaurs wiping us out. Um,

(09:20):
those we can't do a whole lot about. Now, we can all imagine a time, like, maybe a few decades from now, where we could redirect the course of, like, an asteroid that could be colliding with Earth, right? But right now we can't do anything about the natural risks. The ones we can do something about are anthropogenic existential risks, which are human-made existential risks. And, um, we have

(09:43):
a few of those coming down the pike, coming our way. Um, artificial intelligence is a big one. Another one is physics. Uh, surprisingly, high-energy physics experiments may or may not pose an existential risk. But if some of the, um, some of the theories of quantum gravity that seek to marry the Standard Model with relativity,

(10:05):
if some of those are right, then actually, yes, these physics experiments that we're doing right now are quite dangerous. Um.
And then biotechnology is another big one. And there's some
other ones too, like nanotech could conceivably be an existential
risk to us, and all of these things. As we're
starting to wake up to the concept of existential risks,
as our field of vision is kind of coming into focus, um,

(10:29):
we're suddenly recognizing, like, oh, there's one. Oh, there's another one. Oh, there's one. Oh, jeez, there's another one right there. And they're starting to come our way, and very few people are paying attention to them. That's really the alarming part of the whole thing, isn't it? You also mentioned earlier that it's a situation, um, rooted in apathy for a lot of people. But one of the questions people will

(10:53):
have when they hear this is, if we're talking about a global existential risk, if we're talking about not just a risk, but a threat that's bigger than an individual's action, how would people mitigate something global? Like, how... Let's say my name is, um, Dave Everyman, and I'm listening

(11:14):
here in Manitoba for some reason, and I say, man, the wild animals are dying at this incredibly cartoonish rate, which will topple the ecosystem, right, and the food chain. But what can I do? I'm just Dave Everyman. So, first of all, I was wondering when it was going to get crazy, and now that Dave Everyman has

(11:36):
made an appearance, it's quite obvious it just happened. Um, so that kind of ties into something you said earlier, what you were asking about, like, climate change. You know, climate change actually doesn't necessarily count as an existential risk, as bad as it would be. Now, it would for plenty of species that will be affected and wiped out. Like, it's starting to look like... coral.

(11:57):
Um, climate change is going to be an existential threat to coral, um, or it is already. But to humans... and that's the thing, like, I really want to focus this in: I'm talking about humans when I'm talking about existential risks. In the entire series, it's all about things that can wipe out humanity. But existential risks exist for basically any living thing on Earth. Um,

(12:21):
and it turns out, from what I saw, something that kind of came up is, humans are actually an existential threat to just about everything else on Earth. Right. And I think kind of part and parcel to us saving ourselves and saving the world, right, um, is at the same time, simultaneously, learning that, um, kind of being at the

(12:41):
top of the food chain, being Dave Everyman who has the ability to think and act about this kind of stuff, that makes us stewards for the rest of the planet. So, even if climate change is not an existential threat to humans, which it seems like it's not, in taking on existential risks, um, taking on existential threats, we should, in my opinion,

(13:05):
kind of change our mentality. Whether we like it or not, whether we're trying to or not, our outlook would change, and I think that things like climate change would be mitigated. And this idea that Dave Everyman can't do anything to help, that sense of hopelessness that just kind of presses all of us, you know, down into our couches and into this funk, that kind

(13:25):
of thing will go away. And the reason why it will go away, the reason why we can do anything, why Dave Everyman can do anything at all, um, is because it turns out no one at the top is doing anything. I talked to this philosopher named Toby Ord, who's one of the guys at the Future of Humanity Institute, and he has spoken to people in the highest echelons of government about this. One of the things

(13:47):
they do is just try to, like, warn people, including government, and say, hey, you're a policymaker. Um, they're not designing AI very well right now, and one of them could get out of control and take over the world. What do you think about that? What are you guys doing about that? Well, you know, that's really kind of above my pay grade, I'm sure someone else is handling this. And Toby Ord is like, there's nobody above your pay grade.

(14:08):
Like, it's up to you guys. If you're not doing anything, then that means no one's doing anything. Well, yeah, they're stuck in, like, a cycle of elections, right? That's, yeah, that's a big part of the problem as far as leadership goes, you know, not just with existential risk, but basically any large project, any long-term thing. That's one of the things that

(14:30):
climate change has run up against. But it's politicized, too, so it's like, literally, you're appealing to a particular base by choosing to say something is not really a problem, or by ignoring it. That's almost like a power move: I say this isn't really happening, I know because I'm in charge, or I'm the smartest guy in the room, and I ignore all these other people that are saying that it is. You know what I mean? Like,

(14:51):
it's not... it's almost ignorance as, like, a move, kind of, you know? And it's a disdain for expertise, too. I think it's a really popular thing right now, and that kind of ties into that whole death cult thing that bothers me so much. You know, it's like, you're a scientist? I don't care. You know, get out of my face, egghead. I don't care about

(15:11):
the climate. Um, that's just kind of a sentiment, just a feeling. That's not the entire zeitgeist, but it's definitely a part of the zeitgeist right now, for sure.
It's almost like, we don't have that much time anyway, so let's just get the most out of it that we can in the short time we have, and not really
worry about the next part. You know, it's basically like
the disco era took over the entire world. You know,

(15:35):
that's kind of what it feels like. So we'll pause here and continue after a word from our sponsor. Here's the question, here's the turn for our show, because now we're talking about willful ignorance or anti-intellectualism and

(15:57):
all these other, um, all these other flavors of governance and society, right, or the problematic flavors. Is there, to your knowledge, any sort of cover-up or conspiratorial events surrounding any of these existential risks? You knew, you knew we were going to ask this. I was hoping,

(16:19):
but I think I prematurely called the "here's where it gets crazy" with the Dave Everyman thing. Okay, but I really appreciate you doubling down on Dave Every... Well, I think I quadrupled down on Dave Everyman. Um, and it's pronounced "Every-mahn," the British pronunciation. Ah, okay, gotcha. Now I know exactly who you're talking about. Um,

(16:41):
so, uh, I am quite sure that there have been. I don't think that there are, necessarily, in fields like AI. I know that the physics community is actually quite the opposite of that. The people at CERN, in particular, have been working overtime, while also simultaneously bending over backwards, to show that the Large

(17:04):
Hadron Collider is safe. But the thing that made that episode, the physics episode, the most interesting to me is, um, a lot of the suggestions that it might not be safe are coming from physicists. It's not just from external, you know, guys who, you know, declared their own patch of land in Nevada country or anything like that.

(17:25):
It's, like, actual physicists who work with particle colliders, who work in theory, um. And also, I didn't mean to denigrate a lot of that demographic just now, um. They are the people who are kind of raising the alarm on particle colliders possibly creating microscopic black holes and that kind of thing. So

(17:46):
if I had to zero in on one group where there was, if not necessarily, like, conspiratorial cover-ups, at the very least a lot of kind of brushing stuff under the rug, it would be the biotech field, for sure. For sure, the level of recklessness and, um, accident-proneness,

(18:08):
I guess, is a terrible way to put it, that comes out of the biotech field... and certainly not the entire biotech field. There's plenty of people in the biotech field, and I would say the vast majority of the biotech field is very, very careful. But the problem with biotech is that even if you are careful, accidents still happen. And if you go back and you look at the statistics, um, not even statistics, like, actual numbers of, like, accidental

(18:32):
releases of deadly pathogens from labs into the great wide world, just over a very short time, we're talking hundreds and hundreds of them. And the thing that bothers me the most about biotech... when I researched this, I started to get, like, kind of mad. Like, this angers me, that there's this field and no one... I

(18:53):
can't say no one, but very few people, and certainly not enough people, are paying attention to and regulating the biotech field. There are some really reckless experiments that are being carried out, and just a small fraction of labs in the world are required to even report accidents, let alone what they're doing or what kind of experiments

(19:15):
they're carrying out. Accidents, like, meaning a deadly pathogen made it out on the skin of a lab worker who didn't realize it, and it started to kind of spread, or whatever. You don't... like, if you were a, um, a private corporate lab working in the biotech field and an accident happens, you don't have to tell anybody about it.

(19:36):
You don't have to tell a single soul. You have to be funded by the National Institutes of Health in the United States, or be affiliated with a lab in the United States funded by the National Institutes of Health, to be required to report accidents. So, of the, like, six hundred and fifty accidental releases between, I think, two thousand four and two thousand-something, maybe, those were specifically accidents

(20:02):
reported by labs that are funded by the NIH. That's it. There are so many more labs, BSL-3 and -4 labs. Those are the most, um, the most secure labs, but that means that they're also the ones who are working with the deadliest pathogens.
There are so many of those in the world, and it's become such a, like, a cool thing, to be

(20:24):
a corporation or, like, a university, uh, to have a BSL-3 or -4 lab. No one has any idea how many there are in the world. Not a single person on planet Earth knows exactly how many BSL-3 and -4 labs there are on Earth right now. So where is the oversight coming from for those labs? I mean, basically nowhere. I think the

(20:46):
USDA has some jurisdiction. Um, like, OSHA or something, don't they have, like, work-site hazard, you know, inspections and things like that? As far as, um... I'm sure that they do. As far as, like, OSHA goes, yeah, I think that would be extended to everything, including those labs. But as far as, like, reporting it to, like, the CDC or the NIH, there's no requirements unless you are NIH-funded. That's terrifying.

(21:10):
It is extremely terrible. We're just talking about the United States, right? Exactly. So if you are, you know, working in a lab in Korea... Korea, South Korea, I'm sure, has, um, like, legislation or laws that say, you know, you have to do this, or you have to follow these rules, or whatever. But there's certainly no universal oversight agency. The UN is toothless in

(21:32):
this respect, and almost every respect, but certainly in this respect, right, like, toothless. Um, the World Health Organization has, like, zero say in this. It's just the Wild West. And unfortunately, um, there are microbiologists, and plenty of them in the field, who are like, whoa, whoa, whoa... what, what,

(21:53):
what experiment did you just run? Like, you just escalated, um, the contagiousness of this deadly flu virus so that it can be passed more easily among mammals. Why did you just do that? Uh, we should have a two- or three- or four-year moratorium on those kinds of experiments. And that kind of thing does get, um,

(22:14):
observed in the field, but it takes people in the field to do that, to raise the alarm, and they don't always do it, and it's not necessarily having this sweeping effect. And then after they raise the alarm and study the problem, there's nobody stepping in and saying, yeah, don't do that anymore. Really quickly, I just want to step in here, because something that I kind of knew

(22:37):
about, but I learned a lot more about through listening to your podcast, was gain-of-function research, and what exactly that is. Can you just tell us about that and why it's so dangerous? Sure. So I talked to a few experts, like, legitimate experts, on this, and, um, gain-of-function research is where you take a wild virus, I guess a natural virus, and you force a mutation

(22:58):
in it so that it becomes deadlier, or it becomes more contagious, or it becomes less susceptible to treatments or drugs or something like that. And when you force mutations in this virus, or a pathogen of any sort, um, and it becomes deadlier or more contagious or less susceptible, it has gained function. That's what they call it,

(23:20):
so gain-of-function research is basically taking evolution and speeding it up. And they'll do things like, um, they'll force a bunch of mutations and kind of selectively breed a virus until they think it has the kind of, um, the kind of, like, uh, maybe, contagiousness they're looking for, and then they'll introduce it into, like, a ferret's nose, and, um, they'll take that and introduce it to another

(23:41):
ferret's nose, and they'll just basically speed up the process of an infection, of a pandemic, among, you know, lab animals.
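[For illustration only, a toy sketch in Python of the "select the best spreader each round" process Josh is describing. Every number and the mutation model here are invented; the real thing is selection in lab animals, not software.]

    import random

    # Toy serial-passage model: each round, generate hypothetical variants of the
    # current strain and keep whichever one transmits best, then repeat.
    transmissibility = 0.05                  # assumed starting chance a contact is infected
    for passage in range(10):                # ten host-to-host passages
        variants = [min(1.0, transmissibility * random.uniform(0.8, 1.5))
                    for _ in range(20)]      # assumed spread of mutation effects per round
        transmissibility = max(variants)     # selection: the best spreader seeds the next host
    print(f"transmissibility after 10 passages: {transmissibility:.2f}")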
And this was done with, I think, H5N1, uh, in a couple of different labs, simultaneously but independently. It's really weird. They took this really, really deadly virus, and they basically taught it to be transferred from

(24:07):
one ferret to another through, like, sneezes and coughs. The saving grace, the thing that's kept us all alive as far as H5N1 goes, except for a handful of unlucky people, um, is that it's really tough to spread. It's really, really deadly, I think it has, like, a seventy or eighty percent mortality rate, but it's really hard to catch. Well, these guys were forcing

(24:30):
gain-of-function to make that virus much easier to catch. And when they started publishing these studies, um, the field just went nuts. They were not having it. They were very upset about this. They said, this was very reckless, why are you doing this in the first place? And then some people have even said these experiments did not need to take place at all. You can actually

(24:52):
study the same kind of stuff just by studying proteins. You don't have to have a wild, or live, active, deadly virus that you're creating this in. You can just study the proteins. So if you kind of step back, or, I should say, if you dig a little deeper into it, you start to get the impression that there's a lot of egos that are driving experiments like this, just to kind of show that it

(25:14):
can be done. And the field has a really extensive history of that kind of, like, gunslinging recklessness. Like, look what I did. And the problem is, when you create a virus like that, it's alive, it lives on Earth with us, and that means that it is your responsibility for the rest of time to either eradicate it from Earth, or you have to keep up with

(25:35):
it, make sure it doesn't get out. And if it does get out, well, that's a big problem, because now you have an H5N1 virus that's extremely deadly and also extremely contagious. And that's the existential threat posed in the biotech field: there are really risky experiments being carried out, and they also have a long

(25:57):
history of being accident-prone as well. That's the one that gets my blood up the most, I don't know if you could tell. Yeah. It's, I mean, it's a terrifying proposition, because we hear, in the news cycle, right, we hear maybe once every year and a half a report of a virulent strain of something breaking out

(26:20):
in a specific part of the world, and the question is always, how far will it get this time? Right? So, are you saying that there is a real and substantial possibility that some sort of, um, aggressively modified virus or contagion really could, just through the slip of someone's hand,

(26:41):
spread across the planet? I mean, it's not one of those things where one virus being experimented on in one lab poses a significant, um, risk to humanity. But it is a risk; just that one virus in that one lab does pose a risk, by the very fact that it exists, and there are such things as

(27:02):
humans who are accident-prone experimenting on it, right? So, um, the problem comes when you have many, many people running the same or similar experiments, with the same or similar viruses, all over the world, in an unknown number of labs. Then that one remote risk starts to

(27:24):
compound and get a little more dangerous, a little more dicey.
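[A back-of-the-envelope sketch, in Python, of how that compounding works. Both figures below are hypothetical placeholders; as Josh notes, nobody knows the real per-lab accident rate or even how many BSL-3 and -4 labs exist.]

    # Hypothetical numbers only: neither value is a real estimate.
    p_release = 1e-4   # assumed chance of a serious accidental release, per lab, per year
    n_labs = 1000      # assumed number of labs running comparable experiments

    # Chance that at least one release happens somewhere in a given year:
    # the complement of every single lab having an accident-free year.
    p_at_least_one = 1 - (1 - p_release) ** n_labs
    print(f"{p_at_least_one:.1%}")  # roughly 9.5%: many small independent risks add up fast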
Um, I don't think that there's any virus right now that could conceivably wipe out a hundred percent of humanity, or so many humans that we could never possibly rebuild civilization ten thousand years down the road. I don't think so. I don't know. I'm not in

(27:46):
the biotech field, but from the outside looking in at the progression of the field over the last few decades in particular, um, it does seem like that's the direction it's going. From my view, just mine, and I'm not an expert, but from my view, the sentiment

(28:09):
that I've kind of tapped into is, if you are a microbiologist of the kind of cowboy ilk, coming up with a contagious virus with a high mortality rate would be, like, your crowning achievement, basically. And the whole premise of this is to study it so we can figure out how to treat them, and that is, like, that's

(28:31):
a legitimate avenue of research; that's the main avenue of research for virology, that's the point, largely. But do we need to force gain-of-function in viruses that don't exist like that in order to treat them? That's the question that I have, and to me, the answer is no. I think there are definitely other ways to do it, and we should be focusing our research on

(28:53):
figuring out how to treat viruses that could conceivably get like that, without creating them first. So, assuming that the world doesn't end, we'll be right back after a word from our sponsor. I'd like to take a

(29:17):
slight pivot here and ask a couple of biographical questions we probably should have asked in the beginning. This entire interview has made me very conscious of time as well; I hope we have enough time to finish. Um, one thing a lot of people want to know is whether there was some specific moment in your life

(29:38):
that inspired the End of the World series. Was it, uh, was it something related to biotech? Did you, like, get a nasty cold, and the doctor said, boy, uh, this is weird, Josh. Um, sit down. What happened? Have you been hanging out with ferrets lately? Um, I actually was sick while I wrote the biotech episode,

(30:01):
uh, yeah, which just really drove everything home that much more. Um, the thing that inspired me to do the series... and also, I want to just take this time right now to thank all three of you for your roles in helping me with the series. Like, over time, all three of you had a hand in it, and I appreciate it big time. So thank you. Hats off

(30:21):
to all three of you as well. We're excited to hear it. I was too, really, when it finally came out. Um, but the whole thing started, as you probably know, just from this kind of intellectual curiosity about it. Like, I ran across Nick Bostrom many years ago and read some of his papers, and, um, I just found it fascinating,

(30:43):
and I still find it fascinating. So the original point of the series was to say, hey, everybody, check this out, isn't that the coolest thing you've ever heard in your life? And as I dug into it more and more, and started to actually interview the people involved, like Nick Bostrom and Toby Ord and other people at the Future of Humanity Institute, I realized, oh wait, this isn't just an intellectual pursuit. These people are, like, actually trying

(31:06):
to warn the world. Like, this is real. Like, wait, wait, wait, what? This is real? And I underwent a conversion, and then so too did the series, because I was still working on the series at the time, and there was a huge tone shift in the series. It went from, basically, like, a very dry book report

(31:29):
to, okay, we need to do something, everybody. And, like, this kind of thread of, we need to form a movement, we need to start doing something, emerged in the series and almost became like a character in the series, or certainly a theme, a major theme. So it was originally intellectual interest that brought me to it, and then I kind of got struck by lightning on the

(31:52):
way to finishing, and it changed the tone big time. I'll tell you what made me want to put my phone down and join the movement. And it was in one of the episodes where you talk about a certain three-in-a-million chance that occurred in the nineteen forties. Isn't that fascinating? Can you tell us a little bit of that story? Yeah. So, what's widely seen as

(32:14):
the first human-made existential risk that we've ever faced was the first detonation of an atomic bomb, at the Trinity test on July 16, 1945, near Alamogordo, New Mexico, USA. Um, and it wasn't that they were saying, yes, this thing is going to be a deadly weapon, this is an existential risk. A lot of people make the case, and

(32:36):
I kind of subscribe to it, that the nuclear bomb has never been an existential threat to humanity. Like, nuclear war, I should say, has never actually been, because we probably couldn't wipe all of humanity out. And again, that's the thing that separates existential risks from all other types of risk. With everything else, we have a

(32:57):
chance to rebuild, we have a chance to learn from that mistake. With existential risks, there's no second chances, there's no do-over. One thing goes wrong, that's it for everybody, right? Um, and the first... so the nuclear bomb, just to say this, the nuclear bomb was not the existential risk. Again, officially, that part

(33:18):
wasn't the existential risk; it was the detonation that conceivably posed an existential risk. I should say, it was the first possible human-made existential risk. And the reason it was, is they were sitting around, the dudes at the Manhattan Project, um, and I think it was Edward Teller... this is, like, the first time they all formally met, and Edward Teller was like, hey, has anybody thought, might we accidentally ignite the atmosphere with this thing?

(33:41):
You know, we're about to dump a massive amount of energy into the atmosphere. It could set off a chain reaction, don't you think? I've read different accounts of it. Some people, um, say that it was immediately shouted down and they all realized, nah, it's fine, it's fine, whatever. And then somebody ended up telling Arthur Compton, and Arthur Compton was like, oh, fiddlesticks, like, we need to do something

(34:02):
about this. And he assigned Teller and a couple of other guys to go figure out whether it was possible. So, depending on what story you hear, either it was, like, this great "guys, we've got to get to the bottom of this," or it was kind of like a side project. But either way, they definitely assigned Teller and a small group to go figure out, you know, mathematically,

(34:24):
if that was possible. And just to be clear, we're talking about the entire atmosphere, not just a section of atmosphere; the Earth's entire atmosphere, a chain reaction, essentially. That was the fear, that when they set off this first atomic bomb, because no one had ever set off an atomic bomb before, it wasn't even known at the time whether it was possible. There was still

(34:44):
a chance that it was going to be an impossibility to create a nuclear explosion. But they were saying, if we do do this, I mean, like, if we started a chain reaction in the atmosphere, it could spread and burn off the entire atmosphere on Earth, and then life would just cease to exist in very short order. So they started to study this, and they came back and said,

(35:07):
there's almost no chance that this is going to happen, even accounting for energy way beyond what we're going to be doing with the bomb. Um, it's not gonna happen. But these guys are physicists, and physicists don't deal in certainty; they deal in probability. And so there was still that small, small

(35:31):
chance that they could accidentally ignite the atmosphere with this. So later on, um, they went ahead with the test. They decided that the possibility was small enough that it was worth taking the risk, because at the time the Nazis were still going strong, and they were like, this is worth it, you know, the small chance

(35:51):
that we're going to ignite the atmosphere is worth, you know, taking out the Nazis with this bomb that we're going to produce from this test eventually. Um, but years later, the guy who was running the Manhattan Project at the time, his name was Arthur Compton, he told the writer Pearl Buck about this story, and he said, I drew a line in the sand. I said that

(36:13):
if there was a three-in-a-million chance of igniting the atmosphere, we wouldn't go through with the test. And so over the years, some people have been like, was it one in a million? Did he misspeak? Like, was there any chance whatsoever? But that was supposedly the way that we handled the first existential risk, which in a lot of ways was, like, great, you know, hats off. Like, they took it seriously, they studied it, um,

(36:37):
they did the math, they carried the one and all that stuff, right? Um, but then if you step back and look at it another way, they decided, for the rest of the people alive on Earth at the time, that this three-in-a-million chance was worth it, um, it was worth the risk. And I'm sure there's plenty of people still alive today, and

(36:59):
who would have been alive back then, who would have said, no, no, it's actually not worth that risk. Three in a million is actually not that remote of a chance. Like, you have a one-in-a-million chance of being struck by lightning if you live in North America; I can't remember if it's in any given year or in your lifetime, but one in a million. This is a three-times-greater chance than being struck by lightning that they were

(37:20):
going to ignite the atmosphere. And so if you kind of look at the probability compared to other probabilities, um, it suddenly made it seem like maybe that wasn't such a good idea to carry out the test anyway.
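[The arithmetic behind that comparison, as a quick Python check. The one-in-a-million lightning figure is the rough folk number cited above, and the 1945 world population is an added approximation, used for the expected-value framing common in existential-risk writing.]

    # Compton's reported line in the sand vs. the rough lightning figure above.
    p_threshold = 3 / 1_000_000   # three in a million
    p_lightning = 1 / 1_000_000   # one in a million
    print(p_threshold / p_lightning)  # 3.0: three times the lightning chance

    # Expected-value framing: scale the tolerated risk by everyone exposed.
    world_pop_1945 = 2_300_000_000       # approximate world population in 1945
    print(p_threshold * world_pop_1945)  # 6900.0: "expected" lives staked on the test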
And so I kind of used that in, I think, episode nine, um, as kind of this teaching example, like, it gives us a model of how to approach

(37:42):
existential risks, but it also shows us what not to do, and that's not to have just a small cadre of, uh, people working in secret to decide for the rest of the world, without any input, um, whether something's worth the risk or not. And that's one of the reasons why we all need to be involved, why Dave Everyman from Manitoba needs to be involved, why you guys need

(38:04):
to be involved, why I need to be involved, because, again, there's no one at the top thinking about this stuff, and it's going to take a bottom-up process. But for us to do it correctly, for us not to just be like, I don't think it's worth the risk, we all need to understand the science behind all this stuff. We need to be informed. So, if we're gonna decide together, it has to be a smart decision,

(38:25):
you know, and we have to convert that death cult into a thinking cult to take on existential risks, because basically everybody needs to be on board. And at the very least, the people who aren't on board need to get out of the way and not work counter to the stuff that everyone else is trying to do. You know, I gotta be honest, that was incredibly

(38:47):
well said, and I kind of felt music swelling. I don't know if Paul's going to put it in when we do this in post, but it does lead us to some questions that naturally follow, one of which being: we're talking about becoming more educated, becoming more aware of the situation, becoming less apathetic, realizing

(39:09):
that there's no one truly in a room at the top, right? But what do those next steps look like? What are the specific, concrete things, other than, of course, checking out the show? Right? Thanks for that, dude. Um, so, uh, the first step that we have to take,

(39:30):
those of us alive today, is to start talking about this kind of stuff. Like, the more people talk about things... and I use the example of, like, the environmental movement. Like, the environmental movement today has a lot left to do, a lot of work left to do. And who knows, maybe this kind of twelve-year timeline that we've been given recently will get people going a lot more seriously

(39:51):
on climate change. But the fact is, there is an environmental movement. There didn't used to be. Just in the sixties, the late sixties even, there was basically no such thing as the environmental movement. Now every, you know, eight-year-old can tell you tons of environmental facts, cares about the environment, knows what they can do

(40:11):
to make the Earth a better place. Um, we need to do the same thing with existential risks. So step one is for everybody to make this an issue. When all of us start talking about things like this, the people who we elect start paying attention. It's like, oh, this is what these guys want, okay, I'm on board. It's not like they're necessarily opposed to what we're doing

(40:32):
on principle; it's just that not enough of us are talking about this kind of thing, or that kind of thing, or whatever. So the more of us start talking about this, the more we're going to be able to get movement on it. We also need to basically take a lot of the, um, scientific mental energy that we have available, and a lot of the money that

(40:54):
we put towards science and a lot of other stuff, and divert it to thinking about existential risks, identifying existential risks, identifying best practices. And then the next step is for those of us alive today to say, okay, what did you guys come up with? And then listen to them, not fight them, not say, that sounds kind of hard, we can't do it. Because what they come back to

(41:16):
us with will be a roadmap for surviving the next hundred to two hundred years as a species. Right? If we can just kind of alter our brains just a little bit, in those relatively small ways, we will lay the groundwork for generations to come to build on. But that's the key: those of us alive

(41:37):
today have to start now, or else we're just going to hamstring the ones to come. And in that sense, it really kind of bears a strong resemblance to environmentalism as well. So it's basically environmentalism for the human species, is what we need to do. Start that. Wow. So, everyone listening here, I would describe The End of the World

(41:59):
as if Black Mirror and Cosmos had a baby, and then that baby made a podcast. And babies can't make... Yes. Oh yes I can. But if that interests you as it did me, go check it out right now. You can find The End of the World on the iHeartRadio app. Is that the right way to say it? I've never done this before. Somebody want to do...

(42:21):
You want to do it, Josh? You say where it is. You can find it on Apple Podcasts, the iHeartRadio app, everywhere you listen to podcasts. And, not to mention, it's all there, so you can binge the whole thing for your listening pleasure. All ten episodes. Yeah. And hats off to Paul, too. You know, he was the supervising producer on it, and did a top job. I try to call him Mission Control. Also, also, tell us how you

(42:46):
feel after checking out the show. Tell us what really piqued your interest. Tell us if you have responses; you can write to us. You can write to End of the World on, let's see, Instagram, Twitter, all the hits. That's right. Mostly, I'm... there's a hashtag. You've gotta make the two-finger hashtag symbol:

(43:07):
hashtag EOTWJoshClark is where you'll find that all over social, and then I'm at Josh, um, Clark on, like, Instagram and Twitter and on Facebook. Um, and then there's, like, some EOTWJoshClark stuff too. And tell us, tell us how you keep your optimism after you listen to the show. I don't want to spoil it too much, because you

(43:30):
should go check it out. So go ahead and find us again. We are Conspiracy Stuff on most socials, or Conspiracy Stuff Show on Instagram. Just, just tell us what you think. This is... this kind of stuff keeps me up at night, honestly. And I don't know how you got through creating all of this content, Josh, because, I mean, just listening to it and, uh, absorbing

(43:51):
it as a listener gives you a certain amount of dread and hope, and it's almost this simultaneous, like, scared-happy feeling. It's very strange. It's weird. It's very strange. And you've just been immersed in it, and you're still here, and you look to be fine. Okay. All right. Well, if you don't want to do any of that stuff, give us a call. We are 1-833-

(44:15):
STD-WYTK. Leave a message... I'm sorry, leave a message, you might get on the show. And if you don't want to do any of that stuff, please send us an old-fashioned email. We are conspiracy at howstuffworks dot com.
