
April 28, 2020 55 mins

An existential risk is a special kind of threat, different from other types of risks in that if one ever befalls us, it would spell the permanent end of humanity. It just so happens we seem to be headed for just such a catastrophe.




Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Hey, everybody, Josh here. Um, we wanted to include a note before this episode, which is about existential risks, um, threats that are big enough to actually wipe humanity out of existence. Well, we recorded this episode just before the pandemic, which explains the weird lack of mention of COVID when we're talking about viruses, and when this pandemic came along,

(00:22):
we thought perhaps a wait-and-see approach might be best before just willy-nilly releasing an episode about the end of the world. So we decided to release this now, still in the thick of things, not just because the world hasn't ended, but because one of the few good things that's come out of this terrible time is the way that we've all kind of come together and given

(00:43):
a lot of thought about how we can look out for each other. And that's exactly what thinking about existential risks is all about. So we thought there would be no better time than right now to talk about them. We hope this explains things, uh, and that you realize we're not releasing this glibly in any way. Instead, we hope that it makes you reflective about what it means to

(01:06):
be human and why humanity is worth fighting for. Welcome to Stuff You Should Know, a production of iHeartRadio's How Stuff Works. Hey, and welcome to the podcast. I'm Josh Clark, and there's Charles W. Chuck Bryant over there,

(01:26):
and there's guest producer Dave C. sitting in yet again, at least the second time, I believe. He's already picked up that he knows not to speak, he's nodding, the custom established by Jerry, um. But yeah, he did nod, didn't he? So yeah, I guess it is twice that Dave's been sitting in. What if he just heard, two times now, from the other side of the room, you're like,

(01:49):
didn't have the heart to tell him not to do that? Right?
I think he would be Um, he would catch the
drift from like the record scratching that just like materialized
out of nowhere. Many people know that we have someone
on permanent stand by by a record player just in
case we do something like that, and that person is
Tommy Chong. Hi, Tommy, do I smell bong water man?

(02:12):
Betty breaks of it? Yeah? Probably, so, I mean, hats
off to him for sticking to his bit. You know.
Cheech was like, hey, hey, I want a good long
spot on Nash Bridges, so I'll say whatever you want me to, like, I'm just into gummies now. Tommy Chong, like, tripled down. Yeah, and at least he sold the bongs, didn't he? And that, uh, pee test beaters... pee test beat... okay,
(02:38):
I was suddenly trying to think of how to say something like that, a way to, um, defeat urine tests. Oh well, listen to you, fancy. I would say... I don't know, I know that the street guys call it pee test beaters, but Pee Test Beaters is a band name about as good as, say, Diarrhea Planet. Actually, I think Diarrhea Planet's got

(03:02):
it beat. But still. All right, so, Chuck, um, we're talking today about a topic that is near and dear to my heart: existential risks. That's right. Which, I don't know if you've gathered that or not, but I really, really am into this topic all around. Um, as a matter of fact, I did a ten-part series
(03:24):
on it called The End of the World with Josh Clark, available everywhere you get podcasts right now. Um, I managed to smash that down, that's kind of what this is, it's a condensed version. And forever, like, I wanted to just SYSK-ify the topic of existential risks, like, do it with you. I wanted to do it with you. This is going to be

(03:45):
a live show at one point, it was. Um, I think even before that, I was like, hey, you want to do an episode on this? You're like, this is pretty dark stuff. We're doing it now, though. The only time I said that was when you actually sent me the document for the live show and I went, I don't know about a live version of this. So I guess that must have been before The End of the World then, huh? This was like eight

(04:06):
years ago. Well, I'm glad you turned down the live show because it may have lived and died there. So, um, one of the... You would have made all those End of the World big bucks, right? Exactly. Man, I'm rolling in it, my mattress is stuffed with them. Um, so, uh. And you know, bucks aren't always the only way of qualifying or

(04:27):
quantifying the success of something. You know, there's also Academy Awards, right, Oscars, and that's it. Peabodys. Big money or public awards ceremonies, okay, granted. Um, the other reason I wanted to do this episode is because one of the people who was a participant, an interviewee, in The End of the World with Josh Clark, a guy named Dr. Toby Ord, um, recently published a
(04:50):
book called The Precipice, and it is like a really in-depth look at existential risks and the ones we face and, you know, what's coming down the pike and what we can do about them, and why. Right, exactly. Cheers and jeers. Right, exactly. Um, and it's a really good book, and it's written just for, like, the average person to pick up and be like, I

(05:11):
hadn't heard about this, and then reach the end of it and say, I'm terrified, but I'm also hopeful. And one reason I wanted to do this episode, to let everybody know about Dr. Ord's book, or Toby's book, it's impossible to call him Dr. Ord, he's just a really likable guy, um, is because he actually turned the tone of The End of the World around almost single-handedly.

(05:31):
It was really grim, remember, before I interviewed him, really grim. And also, you remember, I started, like, listening to The Cure a lot. Um, it just got real dark there for a little while. Which is funny, that The Cure is my conception of, like, really dark. Anyway, um, death metal guys out there laughing, right. So, talking

(05:52):
to him, he just kind of steered the ship a little bit, and by the end of it, because of his influence, The End of the World actually is a pretty hopeful series. So my hat's off to the guy for doing that, but also for writing this book, The Precipice. Hats off, sir. So, um, we should probably kind of describe what existential risks are. Um,

(06:13):
I know that in this document it's described many, many times. But the reason it's described many, many times is because there's, like, a lot of nuance to it. And the reason there's a lot of nuance to it is because we kind of tend to walk around thinking that we understand existential risks based on our experience with previous risks. But the problem with existential risks is they're
(06:34):
actually new to us, and they're not like other risks, because they're just so big, and if one of these existential catastrophes befalls us, that's it. There's no second chance, there's no do-over, and we're not used to risks like that. That's right. Uh, nobody is,
because we are all people, right, and the thought of

(06:56):
all of human beings being gone, um, or at least, uh, not being able to live as regular humans live and enjoy life, and not live as Matrix batteries, because, you know, technically in the Matrix those are people. Yeah, but that's no way to live, the people in the pods. Yeah, that's what I'm saying, I wouldn't want to live that way.

(07:18):
But that's another version of existential risk, it's not necessarily that everyone's dead, but you could become just a Matrix battery and not flourish or move forward as a people,
right exactly, so um. But but with existential risks in general,
like that, the general idea of them is that like
if you are walking along and you suddenly get hit
by a car, like you no longer exist, but the

(07:40):
rest of humanity continues on existing, uh. Correct. With existential risks,
it's like the car that comes along and hits not
a human but all humans. So it's a risk to
humanity itself. And that's just kind of different because all
of the other risks that we've ever run across, um

(08:01):
either give us the luxury of time or proximity, meaning
that we have enough time to adapt our our behavior
to it, to survive it and continue on as a species.
Or there's not enough of us in one place to
be affected by this, this, um, risk that took out, say, one person or a billion people. Like, if all
(08:23):
of Europe went away, that is not an x-risk. No. And so people might say, um, it would be really sad, and, I mean, even if, you know, ninety-nine percent of the people alive on Earth all died somehow, it would still possibly not be an existential risk, because that one percent

(08:44):
living could conceivably rebuild civilization. That's right. We're talking about giving the world back to Mother Nature and just seeing what happens. Do you remember that, um, series? I think it was a book to start, The World Without Us. Oh, I think I know that one. It was a big deal when it came out, and then they

(09:04):
made, like, maybe a Science Channel or Nat Geo series about it, where this guy describes, like, how our infrastructure will start to crumble if humans just vanished tomorrow, how the Earth would reclaim, Nature would reclaim everything we've done and undo it, you know, after a month, after a year, after ten years. I've heard of that, it's really cool stuff. Yeah. There's a Bonnie Prince Billy, my

(09:28):
idol, has a song called It's Far from Over, and that's sort of a Bonnie Prince Billy look at the fact that, hey, even if all humans leave, it's not over. Yeah, like, new animals, new creatures are going to be born, the Earth continues. Yeah. Uh, and he also has a line, though, about, like, but you better teach your kids to swim. That's a great line. Yeah, it's good stuff.

(09:50):
Did I ever tell you I saw that guy do karaoke with his wife once? Oh really? You know, at our friend Toby's wedding. Yeah, I would have not been able
to be at that wedding because you would have just
been such a fanboy. I don't know what I would do.
I would, I would. It would have ruined my time,
They really would, because I would second guess everything I

(10:10):
did and talked about. I mean, I even talked to the guy once backstage, and that ruined my day. It really did, because you spent the rest of the time just thinking about it. No, it was actually fine. He was a very, very, very nice guy, and we talked about Athens and stuff. But that's who I just went to see in D.C., Philly, and New York. Nice. Went and, uh, followed him around the tour for a few days,

(10:33):
Did he sing that song about the world going on, or life going on? He did. So, um, let's just cover a couple of things that people might think are existential risks that actually aren't. Okay. Yeah, I mean, I think a lot of people might think, um, sure, some global pandemic that could wipe out humanity. There could

(10:54):
very well be a global pandemic that could kill a lot of people, but it's probably not going to kill every living human, right. It would be a catastrophe, sure, but not an x-risk. Yeah. I mean, because humans have antibodies that we develop, and so people who survive that flu have antibodies that they pass on to the next generation, and so that disease kind of dies out before
(11:15):
it kills everybody off. And the preppers, at the very least, they'll be fine, they'd be safe. Um, what about
calamities like a mudslide or something like that? You can't mudslide the Earth. You can't, and that's a really good point. This is what I figured out in researching this, after doing The End of the World, after talking to all these people, it took researching this article for

(11:37):
me to figure this out, that it's time and proximity that are the two things that we use to survive, and that if you take away time and proximity, we're in trouble. And so mudslides are a really good example of proximity, where a mudslide can come down a mountain and take out an entire village of people. Yes,
and it's really sad and really scary to think of.

(11:57):
I mean, we saw it with our own eyes. We stood in a field that was now, what, like eight or nine feet higher than it used to be. Yeah, and you could see the track. This is in Guatemala, when we went down to visit our friends at CoEd, um. The trees were much sparser, you could see the track of the mudslide. They were like, the people are still down there. It was

(12:17):
a horrible tragedy and it happened in a matter of seconds.
It just wiped out a village. But we all don't
live under one mountain, and so if a bunch of
people are taken out, the rest of us still go on.
So there's the time and there's the proximity. Yeah. I think a lot of people in the eighties might have thought, because of movies like WarGames and movies like The Day After, that global thermonuclear war would be an

(12:40):
x-risk, and as bad as that would be, it wouldn't kill every single human being. No, no, they don't think so. We started out thinking this, like, as a matter of fact, nuclear war was one of the first things that we identified as a possible existential risk. And if you kind of talk about the history of the field, for the first, like, several decades,

(13:02):
that was, like, the focus, the entire focus of existential risks. Like, Bertrand Russell and Einstein wrote a manifesto about how we really need to be careful with these nukes because we're gonna wipe ourselves out. Carl Sagan, you remember our amazing Nuclear Winter episode, that was from, you know, studying existential risks. And then in the nineties a guy named

(13:24):
John Leslie came along and said, Hey, there's way more
than just nuclear war that we could wipe ourselves out with.
And some of it is taking the form of this
technology that's coming down the pike. And that was taken
up by one of my personal heroes, a guy
named Nick Bostrom. Yeah, he's a philosopher out of Oxford,
and he is one of the founders of this field.

(13:45):
And he's the one that said, or one of the ones that said, you know, there's a lot of potential existential risks, and nuclear war's peanuts, bring it on. And I don't know if Bostrom specifically believes this, he probably does, that we would be able to recover from a nuclear war. That's the idea,
(14:07):
that you rebuild as a society after whatever zombie apocalypse or nuclear war happened. Yeah, and again, say it killed off people. To us, that would seem like an unimaginable
tragedy because we lived through it. But if you zoom
back out and look at the lifespan of humanity, not
just the humans life today, but all of humanity, like
it would be a very horrible period in human history,

(14:29):
but one we could rebuild from over, say, ten thousand years, to get back to the point where we were before the nuclear war. And so ultimately it's probably not an existential risk. Yeah, it's tough. This is a tough topic for people, because I think people have a hard time with that long a view of things. And then whenever you hear, uh, the big math comparisons of, you know,

(14:53):
how long people have been around and how old the Earth is and that stuff, it kind of hits home. But it's tough for people that live, you know, eighty years to think about, well, ten thousand years, we'll be fine. And even, like, um, when I was researching this, she brought this up a lot, like, where do we stop caring about people that are our descendants? You know, we

(15:13):
care about our children or our grandchildren, that's about it. I just care about my daughter, that's about it. That's where it is with the grandchildren. You don't have grandchildren yet. Yeah, but wait till they come along. Everything I've ever heard is that being a grandparent is even better than being a parent. And I know some grandparents. Okay, let's say I'm not dead before my daughter eventually has a kid, if she

(15:35):
wants to, I would care about that grandchild. But after that, forget it. Yeah, my kids' kids' kids, who cares? Granted, that's about where it would end. Like, I care about people and humanity as a whole. I think that's what you gotta do. You can't think about, like, your eventual descendants, you just got to think about people, right. Yeah,

(15:58):
that's really it, helping people you don't know. Now, it's kind of requisite, to start caring about existential risks, to start thinking about people, not just... Well, let's talk about it. So Toby Ord made a really good point in his book The Precipice, right, that you care about people on the other side of the world that you've never met. Yeah, that's what I'm saying, like, that happens every day, right.

(16:19):
So what's the difference between people who live on the
other side of the world that you will never meet
and people who live in a different time that you
will never meet. Why would you care any less about
these people human beings that you'll never meet, whether they
live on the other side of the world at some
time or in the same place you do, but at
a different time. I think a few I mean I'm
not speaking for me, but I think if I were

(16:40):
to step inside the brain of someone who thinks that,
they would think like a it's a little bit of
a self Um, it's a bit of an ego thing
because you know, like, oh, I'm helping someone else, so
that does something for you in the moment, Like someone
right now on the other side of the world that
maybe I've sponsored is doing good because of me. And

(17:02):
I get a little kick out of it, from Sally Struthers. Yeah, that does something. It helps put food on her plate. Is she still with us? I think so. I think so too. But I'd feel really bad if... I certainly haven't heard any news of her death, people would talk about that and the record scratch would have just happened. Uh. So I think that is something too.

(17:24):
And I think there are also sort of a certain
amount of people that are just um that just believe
you're worm dirt. There is no benefit to the afterlife
as far as good deeds and things, so like once
you're gone, it's just who cares because it doesn't matter.
There's no consciousness. Yeah, well that's I mean, if you

(17:45):
if you were at all, like, piqued by that stuff, I would say definitely read The Precipice, because, like, one of the best things that Toby does, and he does a lot of stuff really well, is describe why it matters, because, I mean, he's a philosopher after all. Um, so he says, like, this is why it matters, like, not only does it matter because you're keeping things going for the future generation, you're also continuing on what the
(18:08):
previous generations built. Like, who are you to just be like, oh, are you just gonna drop the ball? No, I agree, that's a very self-centered way to look
at things totally. But I think you're right. I think
there are a lot of people who look at it
that way. So you want to take a break, Yeah,
we can take a break now, and maybe we can
dive into Mr. Bostrom's, or Dr., I imagine, Bostrom's, uh,

(18:28):
five different types? Are there five? No, there's just a few. Okay, a few different types of existential risk. We can make up a couple and add 'em in. Nah, let's not. All right, Chuck. So, uh,

(19:07):
one of the things you said earlier is that with existential risks, the way we think of them typically is, um, that something happens and humanity is wiped out and we all die and there's no more humans forever and ever. That's an existential risk. That's one kind, really, and that's the easiest one to grasp, which is extinction. Yeah, and

(19:28):
that kind of speaks for itself. Just like dinosaurs are no longer here, that would be us. Yes. And I think that's one of those other things too, it's kind of like how people walk around like, yeah, I know I'm going to die someday. But if you sat them down and you were like, do you really understand that you're going to die someday? They might start to panic a little bit, you know, and realize, I
(19:51):
haven't actually confronted that, I just know that I'm going to die. Or if you knew the date, that'd be weird. It would be like a Justin Timberlake movie. Would that make things better or worse for humanity? I would say better, probably, right.
I think it'd be a mixed bag. I think some
people would be able to do nothing but focus on
that and think about all the time they're wasting, and

(20:11):
other people would be like, I'm gonna make the absolute
most out of this. Well, I guess there are a
couple of ways you can go, and it probably depends
on when your date is. If you found out your date was at a ripe old age, you might be like, well, I'm just going to try and lead the best life I can, that's great. If you find out you live fast and die hard at seven... die harder, uh, you
(20:32):
might die harder. You might be like, screw it, or you might really ramp up your good works. It depends on what kind of person you are, probably. And more and more I'm realizing, it depends on how you were raised, too. You know, like, we definitely are responsible
for carrying on ourselves as adults. Like you can't just say, well,

(20:53):
I wasn't raised very well or I was raised this way,
so whatever, Like you have a responsibility for yourself and
who you are as an adult. Sure, but I really
feel like the way that you're raised really sets the stage and puts you on a path that can be difficult to get off of because it's so
hard to see for sure, you know, because that's just
normal to you because that's what your family was. Yeah,
that's a good point. So anyway, extinction is just one

(21:16):
of the ways one of the types of existential risks
that we face, a bad one. Permanent stagnation is another one,
and that's the one we kind of mentioned, um, danced around a little bit. And that's, like, some people are around, not every human died in whatever happened, but, um, whatever is left is not enough to either repopulate the world

(21:39):
or to progress humanity in any meaningful way to rebuild
civilization back to where it was, and it would be
that way permanently, which is kind of in itself tough
to imagine too, just like the genuine extinction of humanity
is tough to imagine the idea of, well, there's still
plenty of humans running around, how are we never going
to get back to that place? And that may
(22:01):
be the most depressing one, I think. I think the next one is the most depressing, but that's pretty depressing. One example that's been given for that is, like, let's say we say, um, all right, this climate change,
we need to do something about that. So we undertake
a geo engineering project that isn't fully thought out, and
we end up causing like a runaway greenhouse gas effect

(22:23):
and there's just nothing we can do to reverse course,
and so we ultimately wreck the earth. That would be
a good example of permanent stagnation. That's right, This is
this next one. So yes, agreed, permanent stagnation is pretty bad.
I wouldn't want to live under that, But at least
you can run around and like, um, do what you want.
I think the total lack of personal liberty and the

(22:46):
flawed realization one is what gets me. Yeah, they all
get me. Uh. Flawed realization is the next one, and
that's Um, that's sort of like the matrix example, which
is that there's technology that we invented that eventually makes
us into little batteries in pods, right, basically. Or there's

(23:10):
just, um, someone is in charge, whether it's a group or some individual or something like that. It's basically a permanent dictatorship that we will never be able to get out from under, because this technology we've developed, yeah, is being used against us, and it's so good at keeping tabs on everybody and squashing dissent before it grows,
(23:34):
there's just nothing anybody could ever do to overthrow it. And so it's a permanent dictatorship where, um, we're not doing anything productive, we're not advancing. Say, um, say it's like a religious dictatorship or something like that, all anybody does is go to church and support the church or whatever, and that's that. And so what Dr. Bostrom

(23:55):
figured out is that there are fates as bad as death. There are possible outcomes for the human race that are as bad as extinction, that still leave people alive, even, like, in kind of a futuristic kind of thing, like the flawed realization one goes, um, but that you wouldn't want to live the lives that

(24:16):
those humans live, and so humanity has lost its chance
of ever achieving its true potential. That's right. And those qualify as existential risks as well, that's right. Do you want to live in the Matrix? No, not at all. Or in a post-apocalyptic, um, altered Earth. Yeah, the
(24:36):
Matrix, basically, like Thundarr the Barbarian, that's what I imagine with the permanent stagnation. So, uh, there are
a couple of big categories for existential risks, and they
are either nature made or man made um. The nature
ones we've uh you know there there's always been the

(24:56):
threat that big enough um object hitting planet Earth could
do it, right, Like that's always been around. It's not
like that's some sort of new realization, but it's just
a pretty rare It's so rare that it's not likely. Right,
All of the natural ones are pretty pretty rare compared
to the human made ones. Yeah, Like I don't think

(25:17):
science wakes up every day and worries about a comet
or an asteroid or a meteor. No, and it's definitely
worth saying that the better we get at scanning the heavens,
the safer we are eventually when we can do something
about it. If we see this coming, what do we do?
Just hit the gas and move the Earth over a bit?

(25:37):
Right. Um, and there's nothing we can do about any of these anyway, so maybe that's also
why science doesn't wake up worrying, right. Yeah, so you've
got near earth objects, you've got celestial stuff like collapsing
stars that produce gamma ray bursts. And then even back here on Earth, like, a supervolcanic eruption could conceivably put out enough soot that it blocks photosynthesis and, yeah,
(26:01):
sends us into essentially a nuclear winter too. That would
be bad. But like you're saying, there's these are very
rare and there's not a lot we can do about
them now. Instead, the focus of people who think about
existential risks, um, And there are like a pretty decent
handful of people who are dedicated to this now. Um,
they say that the anthropogenic or the human made ones,

(26:23):
these are the ones we really need to mitigate because
they're human-made, so they're under our control, and, um, that means we can do something about them more than, say, a comet. Yeah. Yeah, but that's
a it's a bit of a um double edged sword
because you think, oh, well, it's since we could stop

(26:44):
this stuff, that's really comforting to know. But we're not. Right, Like,
we're headed down a bad path in some of these
areas for sure. So because we are creating these risks
and not thinking about these things, in a lot of cases,
they're actually worse even though we could possibly control them.
It definitely makes it more ironic too, right. So, um,

(27:08):
there are a few that have been identified, and there's
probably more that we haven't figured out yet or haven't
been invented yet. But one of the big ones, just um,
I think almost across the board, the one that existential
risk analysts worry about the most is AI artificial intelligence. Yeah,
and this is the most frustrating one because it seems
like it would be the easiest one to uh not

(27:31):
stop it in its tracks, but to divert it along a safer path. Um, the problem with that is that people who have dedicated themselves to figuring out how to make that safer path are coming back and saying, this is way harder than we thought it was going to be, to make the safer path. Yeah. Really? Yeah. And so at the same time,

(27:54):
while people recognize that there needs to be a safe
path for AI to follow, this other path that it's on now, which is known as the unsafe path, that's the one that's making people money. So everybody's just going down the unsafe path while these other people are trying to figure out the safer one. Because, as the
(28:15):
computer in WarGames would say, maybe the best option is to not play the game. And if there is no safe option, then maybe AI should not happen, or we need to, and this is almost heresy to say, we need to put the brakes on AI development so that we can figure out the safer way and

(28:35):
then move forward. But we should probably explain what we're
talking about with safe in the first place, right. Yeah, I mean, we're talking about creating super intelligent AI that basically is so smart that it starts to self-learn, um, and is beyond our control, and it's not thinking, ah, wait a minute, one of the things
(28:56):
I'm programmed to do is make sure we take care of humans. And it doesn't necessarily mean that some AI is going to become super intelligent and say, I want to destroy all humans. That's actually probably not going to be the case. It will be that this super intelligent AI, in carrying out whatever it was programmed to do, would disregard humans, exactly. And so if our goal of

(29:19):
staying alive and thriving, um, comes into conflict with whatever this AI's goal is, whatever it was designed to do, we would lose, because it's smarter than us. By definition, it's smarter than us, it's out of our control. And probably one of the first things it would do when it became super intelligent is figure out how to prevent us from turning it off. Well, yeah, that's the fail-safe,
(29:43):
the all-important fail-safe, that the AI could just disable, exactly, right. You can't just, like, sneak up behind it with a screwdriver or something like that, and then you get shot by the robot, like, in a robot voice.
So that's called, um, designing friendly or aligned AI, and people who are, like, some of the smartest people in the field of AI research have stopped
(30:06):
figuring out how to build AI and have started to figure out how to build friendly AI. Yeah, aligned as in aligned with our goals and needs and desires. And Nick Bostrom actually has a really great, um, thought experiment about this called the paperclip problem. Yeah. Um, and you can hear it on The End of

(30:26):
the World. Nice, I like that, driving listeners. The next one is nanotech. Um, and nanotech is, I mean, it's something that's very much within the realm of possibility, as is AI, actually. That's not super far-fetched either, a super intelligent AI. Yeah, it's definitely possible. Yeah,

(30:47):
and that's the same with nanotechnology we're talking about. And
I've seen this everywhere, from um, little tiny robots that
will just be dispersed and clean your house, um, to
like the atomic level where they can like reprogram our
body from the inside, the little tiny robots that can
clean your car. Yeah. Those are those are the three.

(31:11):
Those are three things, so um, two of them are cool.
One of the one of the things about these nanobots
is that because they're so small, they'll be able to
manipulate matter on, like, the atomic level, and the usefulness of that is mind-bottling. To send them in... and they're gonna be networked, so we'll be able to program them to do whatever and control them. Right. Um,
(31:33):
the problem is, if they're networked and they're under our control... if they fall under the control of somebody else, or, say, a super intelligent AI, then we would
have a problem because they can rearrange matter on the
atomic level, so who knows what they would start rearranging
that we wouldn't want them to rearrange. It's like that

(31:53):
Gene Simmons sci-fi movie in the eighties. Uh, I want to say it was Looker? No. I always confuse those two, the other one... this is Runaway. Runaway, I think. One inevitably followed the other on HBO, they had to have been a double feature, because they could not be more linked in my mind. Same here, you know. I remember Albert
(32:15):
Finney was in one, I think he was in Looker. He was, and Gene Simmons was in Runaway, as the bad guy, of course, but he did a great job, and Tom Selleck was the good guy. Tom Selleck, yeah. But the idea in that movie was not nanobots. They were little, um, insect-like robots, they just weren't nano-sized, right. And so the reason

(32:37):
that these could be so dangerous is not their size, but that there's just so many of them. And while they're not big and they can't, like, punch you in the face or stick you in the neck with a needle or something like the Runaway robots, they can do all sorts of stuff to you molecularly, and you would not want that to happen. Yeah, this is pretty bad. There's an engineer out of MIT named Eric Drexler.

(33:00):
He is a big, big name in molecular nanotech. If he's listening right now, right up to when you said his name, he was just sitting there saying, please don't mention me. No, because he's tried to back off from his gray goo hypothesis. So, yeah, this is the idea: that there are so many of these nanobots, that they can harvest their own energy, that they can self-replicate
(33:23):
like little bunny rabbits, and that there would be a point where there's runaway growth such that the entire world would look like gray goo, because it's covered with nanobots. Yeah, and since they can harvest energy from the environment, they would eat the world, they'd wreck the world.
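(To get a feel for the runaway growth being described here, a rough back-of-the-envelope sketch in Python; the nanobot mass and the one-hour doubling time are purely assumed numbers for illustration, not figures from the episode or from Drexler.)

    import math

    # Illustrative only: how fast unchecked doubling gets out of hand.
    nanobot_mass_kg = 1e-18       # assumed mass of a single nanobot (~1 femtogram)
    earth_mass_kg = 5.97e24       # approximate mass of the Earth
    doubling_time_hours = 1.0     # assumed time for the swarm to double

    # Doublings needed for one nanobot's lineage to equal the Earth's mass
    doublings = math.log2(earth_mass_kg / nanobot_mass_kg)
    total_days = doublings * doubling_time_hours / 24

    print(f"doublings needed: {doublings:.0f}")                   # roughly 142
    print(f"time at one-hour doublings: {total_days:.1f} days")   # roughly 6 days

Under those made-up inputs, the point is simply that exponential self-replication closes the gap in days rather than centuries, which is why the design question raised here matters at all.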
Basically, that's... that's scary, you're right. So he took so much flak for saying this, because apparently

(33:45):
it scared people enough back in the eighties that nanotechnology was, like, kind of frozen for a little bit. Yeah, and so everybody went, Drexler! And so he's backed off from it, saying, like, this would be a design flaw, this wouldn't just naturally happen with nanobots. You'd have to design them to harvest energy themselves and to self-replicate,

(34:07):
and so just don't do that. And so the thing is, like, yes, he took a lot of flak for it, but it was also a contribution to the world. He pointed out two big flaws that could happen that now are just, like, a sci-fi trope, but when he thought about them, they weren't self-evident or obvious. Yeah. I mean, I feel bad we even said his name, but it's worth saying. Clyde Drexler, right?

(34:32):
that's right. Biotechnology is another pretty scary field. Um, there are great people doing great research with infectious disease. Um, part of that, though, involves developing new bacteria and new viruses, new strains that are even worse than the pre-existing ones, as part of the research. And that is, uh,
(34:55):
that can be a little scary too, because, I mean, it's not just the stuff of movies. There are accidents, events that happen, protocols that aren't followed, and this stuff can, or could, get out of a lab. Yeah, and it's
not one of those, like, could get out of a lab things, even. Things have gotten out of labs. It happens, I don't want to say routinely, but it's happened so many times
(35:16):
that when you look at the track record of the biotech industry, it's just like, how are we not all dead right now? It's crazy. Lost broken arrows, lost nuclear warheads, exactly, but with little, tiny, horrible viruses. And then
when you factor in that terrible track record with them
actually altering viruses and bacteria to make them more deadly, to do those two things: to reduce the time that
(35:39):
we have to get over them, right, so they make them more deadly, um, and then to reduce proximity, to make them more easily spread, more contagious, so they spread more quickly, and kill more quickly as well. Then you have potentially an existential risk on your hands. For sure.
We've talked in here a lot about the Large Hadron Collider.

(36:00):
We're talking about physics experiments as the I guess this
is the last example that we're going to talk about. Yeah,
and I should point out that physics experiments do not show up anywhere in Toby Ord's Precipice book. Okay. This one is kind of my pet. Yeah, I mean, there's plenty of people who agree that this is a possibility,
(36:20):
but a lot of existential risk theorists are like, I don't know. Well, you'll explain it better than me, but the idea is that we're doing all these experiments, uh, like the Large Hadron Collider, to try and figure stuff out we don't understand, which is great, but we
don't exactly know where that all could lead. Yeah, because

(36:41):
we don't understand it enough, you can't say this is totally safe. And so if you read some physics papers, and this isn't, like, Rupert Sheldrake morphic fields kind of stuff, right, it's actual physicists who have said, well, actually, using this version of string theory, it's possible that this
(37:01):
could be created in the Large Hadron Collider, or more likely a more powerful collider that's going to be built in the next fifty years or something like that. The Super Large. Sure, the Duper, I think, is the nickname for it. Oh man,
I hope that doesn't end up being the nickname the Duper, right, Yeah,
I guess so. But it also has a little kind

(37:23):
of you know, I don't know, I like it all right,
So, um, they're saying that a few things could be created accidentally within one of these colliders when they smash the particles together. A microscopic black hole. Uh, my favorite, the low-energy vacuum bubble, which is a little tiny version of our universe that's more stable, like a more
(37:45):
stable version, a lower-energy version, and so if it were allowed to grow, it would grow at the speed of light, it would overwhelm our universe and be the new version of the universe. Yeah. That's like when you buy the baby alligator or the baby constrictor python that you think is so cute, right, and then it grows up and eats the universe. We're screwed. The problem is, this

(38:07):
new version of the universe is set up in a way that's different than our version, and so all the matter, including us, that's arranged just so for this version of the universe would be disintegrated in the new version. So it's like the Snap. But can you imagine if, all of a sudden, a new universe just grew out of the Large Hadron Collider accidentally and at the speed of
(38:29):
light just ruined this universe forever? If we just accidentally did this with a physics experiment? I find that endlessly fascinating and also hilarious, just the idea. I think the world will end ironically somehow. It's entirely possible. So, uh,
maybe before we take a break, let's talk a little

(38:51):
bit about climate change, because a lot of people might
think climate change is an existential threat. Uh, you know,
it's terrible and we need to do all we can,
but even the worst case models probably don't mean an
end the humanity as a as a whole. Like it
means we're living much further inland than we thought we

(39:13):
ever would, and we maybe are much tighter quarters than
we ever thought we might be in a lot of
people might be gone, but it's probably not going to
wipe out every human being. Yeah, it'll probably end up
being akin to that same line of thinking, the same path as, um, a catastrophic nuclear war, which, I guess you could just say nuclear war, catastrophic is
(39:35):
kind of built into the idea, but we would be able to adapt and rebuild. Um, it's possible that our worst-case scenarios are actually better than what will actually happen. So, just like with a total nuclear war, it's possible that it could be bad enough that it could be an existential risk. It's possible climate change could end
(39:57):
up being bad enough that it's an existential risk. But from current understanding, they're probably not existential risks. Right. All right,
Well that's a hopeful place to leave for another break,
and we're gonna come back and finish up with why
all of this is important. It should be pretty obvious,
but we'll summarize it.

(40:42):
Okay, Chuck. Um. One thing about existential risks
that people like to say is well, let's just not
let's just not do anything. And it turns out from
people like Nick Bostrom and Toby Ord and other people
around the world who are thinking about this kind of stuff.
If we don't do anything, we probably are going to
accidentally wipe ourselves out. Like doing nothing is not a

(41:06):
safe option. Yeah. But, um, Bostrom is one who has developed a hypothetical concept called technological maturity, um, which would be great, and that is some time in the future where we have invented all these things, but we have done so safely and we have complete mastery over it all. There won't be those accidents, there won't
(41:27):
be the gray goo, there won't be the AI that's not aligned. Yeah, because we'll know how to use all this stuff safely, right. Like you said, we're not mature in that way right now. No. Actually, we're at
a place that Carl Sagan called our technological adolescence, where we're becoming powerful but we're also not wise. The point where we are right now, technological adolescence, where we're starting
(41:50):
to invent the stuff that actually can wipe humanity out of existence, but before we reach technological maturity, where we have safely mastered it and have that kind of wisdom to use all this stuff, that's probably the most dangerous period in the history of humanity. And we're entering it right now.
And if we don't figure out how to take on

(42:11):
these existential risks, we probably won't survive from technological adolescence all the way to technological maturity. We will wipe ourselves out one way or another. Because this is really important to remember: all it takes is one, one existential catastrophe. Not all of these have to take place, it doesn't have to be some combination, just one. Just one
(42:33):
um, bug with basically a hundred percent mortality has to get out of a lab. Just one accidental physics experiment has to slip up. Um, just one AI has to become super intelligent and take over the world. Like, just one of those things happening, and then that's it. And again,
the problem with existential risks that makes them different is

(42:53):
we don't get a second chance. If one of them befalls us, that's that. That's right. Uh, it depends on who you talk to, if you want to get into maybe just a projection on our chances as a whole as humans. Uh, Toby Ord right now is at, uh, what, a one in six chance over the next hundred years. Yeah,
(43:15):
he always follows that with Russian roulette. Other people say about ten percent. Um, there's some different cosmologists, there's one named Lord Martin Rees who puts it at... yeah.
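(To put those century-scale figures in more graspable terms, a small illustrative Python calculation; converting them to a constant per-year risk is our own arithmetic here, not something Ord or Rees states.)

    # Rough illustration: what a fixed risk over 100 years would mean per year,
    # assuming (purely for illustration) the risk is constant year to year.
    def implied_annual_risk(century_risk: float, years: int = 100) -> float:
        """Constant annual risk p such that 1 - (1 - p) ** years == century_risk."""
        return 1 - (1 - century_risk) ** (1 / years)

    for label, century_risk in [("1 in 6", 1 / 6), ("10 percent", 0.10)]:
        print(f"{label} per century -> about {implied_annual_risk(century_risk):.3%} per year")

Under that simplifying assumption, a one-in-six chance over a century works out to a bit under 0.2 percent per year, and ten percent over a century to about 0.1 percent per year.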
He actually is a member of the Centre for the Study of Existential Risk. And we didn't mention before, Bostrom founded something called the Future of Humanity Institute, which is pretty

(43:36):
great, FHI. And then there's one more place that I want to shout out, it's called the Future of Life Institute. It was founded by Max Tegmark and Jaan Tallinn, co-founder of, I think, Skype. Oh really? I think so. All right, well, you should probably also shout out the Church of Scientology. No, no, no,

(43:57):
no. Genius. Yeah, that's the one I was thinking of. Well, they get confused a lot. This is a pretty cool
little um thing you did here with how long because
I was kind of talking before about the long view
of things and how long humans have been around. So
I think your rope analogy is pretty spot on here.
So that's J. L. Schellenberg's rope analogy. Well, I didn't
(44:19):
write it, I think he wrote it. I wish it were mine, I'm just glad that you included it. So, what we were talking about, like you were saying, is, like, it's hard to take that long view. But if you step back and look at how long humans have been around...
So, Homo sapiens have been on Earth about two hundred thousand years, and it seems like a very long time. It does. And even modern humans, um, like us, have been around for

(44:39):
about fifty thousand years, and it seems like a very long time as well. But if you think about how much longer the human race, humanity, could continue to exist as a species, um, that's nothing, it's virtually insignificant. Um, and J. L. Schellenberg puts it like this: let's say humanity has a billion-year lifespan, and you translate
(45:02):
that billion years into a twenty-foot rope. To show up as just the first eighth-of-an-inch mark on that twenty-foot rope, our species would have to live another three hundred thousand years from the point where we've already lived. Yes, we would have to live five hundred thousand years in total just to show up as that first
(45:24):
eighth of an inch on that twenty-foot-long rope. That's how long humanity might have ahead of us.
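(As a quick check on those rope numbers, a minimal Python sketch, assuming the billion-year lifespan and the roughly two-hundred-thousand-year figure for Homo sapiens used above.)

    # J. L. Schellenberg's rope analogy, as described above: map an assumed
    # one-billion-year lifespan for humanity onto a twenty-foot rope.
    ROPE_FEET = 20
    TOTAL_YEARS = 1_000_000_000   # assumed potential lifespan of humanity
    YEARS_SO_FAR = 200_000        # rough age of Homo sapiens so far

    rope_inches = ROPE_FEET * 12                                 # 240 inches
    years_per_eighth_inch = TOTAL_YEARS / (rope_inches * 8)      # ~520,000 years
    inches_so_far = YEARS_SO_FAR / TOTAL_YEARS * rope_inches     # ~0.05 inches

    print(f"one eighth of an inch of rope is about {years_per_eighth_inch:,.0f} years")
    print(f"all of human history so far covers about {inches_so_far:.3f} inches of rope")

So that first eighth of an inch alone works out to roughly half a million years, which is where the "another three hundred thousand years" above comes from.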
And that's actually kind of a conservative estimate. Some people say, once we reach technological maturity, we're fine, we're not going to go extinct, because we'll be able to use all that technology, like having AI track all those near-Earth objects and say, well, this one is a
(45:46):
little close for comfort, I'm gonna send some nanobots out to disassemble it. We will remove ourselves from the risk of ever going extinct when we hit technological maturity. So a billion years is definitely doable for us. Yeah, and
it's, uh... why we care about it is because it's happening right now. I mean, there is already AI that is unaligned. Um, we've already talked about
(46:08):
the biotech in labs, accidents have already happened, they happen all the time, and there are experiments going on with physics where we think we know what we're doing, but accidents happen. And an accident that you can't recover from, you know, there's no whoops, let me try that again,

(46:28):
right, exactly, because we're all toast. So this is why you have to care about it. And luckily... um, I wish there were more people that cared about it. Well, it's becoming more of a thing, and if you talk to Toby Ord, he's like, so, just like, say, the environmental movement was, you know, the moral push, and we're starting to see some results from
(46:48):
that now, but that started back in the sixties and seventies, when nobody had ever heard of that. Yeah, I mean, it took decades. He's saying, like, that's about where we are now with existential risk. People are going to start
to realize, like, oh man, this is for real, and we should do something about it, because we could live a billion years if we manage to survive the next hundred,
(47:08):
which makes you and me, Chuck, like all of us alive right now, in one of the most unique positions any humans have ever been in. We have the entire future of the human race basically resting in our hands, because we're the ones who happened to be alive when humanity entered its technological adolescence. Yeah, and it's a tougher one than save the planet, because that's such a

(47:30):
tangible thing when you talk about pollution, and it's very
easy to put on a TV screen or in a classroom. Um,
and it's not so easily dismissed because you can see
it in front of your eyeballs and understand it. This
is a lot tougher, education-wise, because, um, people hear something about nanobots and gray goo or AI and just think,

(47:54):
come on, man, that's the stuff of movies. Yeah. And, I mean, it's sad that, like, we couldn't dig into it further, because when you really do start to break it all down and understand it, it's like, no, this totally is for real and it makes sense, like, this is entirely possible and maybe even likely. Yeah, and it's
not the hardest thing to understand. It's not like you

(48:15):
have to understand nanotechnology to understand its threat, right exactly.
That's well put. The other thing about all this is
that not everybody is on board with this, even people,
even people who hear about this kind of stuff are like, no,
you know, this is pie in the sky, it's overblown. Or the opposite of pie in the sky, it's a cake in the ground. Is that the opposite? Dark sky territory? It's
(48:39):
a turkey drumstick in the earth. Okay, that's kind of the opposite of a pie. Okay. I think I may have just come up with a colloquialism. I think so. Um, so some people aren't convinced. Some people say, no,
AI is nowhere near being even close to human level intelligent,
let alone super intelligent, like, why spend money, because
(49:01):
it's expensive, right. Well, and other people are like, yeah, if you start diverting, you know, research into figuring out how to make AI friendly, I can tell you China and India aren't going to do that, and so they're going to leapfrog ahead of us and we're going to be toast competitively. So there's a cost to it, an opportunity cost, there's an actual cost, um. So there's a lot of people.

(49:22):
It's basically the same arguments for people who argue against
mitigating climate change. Yeah, same thing, kind of. So the answer is, uh, terraforming. Terraforming? Well, that's not the answer. The answer is one of those, right? Study terraforming, that's right. The answer is to study this stuff,

(49:43):
figure out what to do about it. But it wouldn't
hurt to learn how to live on Mars, right or
just off of Earth. Because in the exact same way, like, um, that a whole village is at risk when it's under a mountain and a mudslide comes down, if we all live on Earth, if something happens to life on Earth, that's it for humanity.

(50:03):
But if there's, like, a thriving population of humans who don't live on Earth, who live off of Earth, if something happens on Earth, humanity continues on. So learning to live off of Earth is a good step in the right direction. That's plan A.1. Sure, it's tied for first, like, it's something we should be doing at
(50:25):
the same time as studying and learning to mitigate existential risks. Yeah,
and I think it's got to be multi pronged, because
the threats are multi pronged. Absolutely. And there's one other
thing that I really think you've got to get across.
Like we said that, if if say the US starts
to invest all of its resources into figuring out how
to make friendly AI, but India and China continue on

(50:48):
like the path, it's not gonna work. And the same
goes with if every country in the world said, no,
we're going to figure out friendly AI, but just one
dedicate it itself to continuing on this path, the ninety
not the rest of the countries in the world. Progress
would be totally negated by that one yeah, so we
gotta get the It's got to be a global effort.

(51:10):
It has to be a species wide effort, not just
with AI, but with all these understanding all of them
and mitigating them together. Yeah, that could be a problem. So, um,
thank you for very much for doing this episode with me.
I'll though you talking to Dave. No, well, Dave too.
We appreciate you to Dave, but but big ups to
you Charles, because Jerry was like, I'm not sitting in

(51:31):
that room. It's like, I'm not listening to Clark blather
on about existential risk for an hour. Um, so one
more time. Toby Ord's The Precipice is available everywhere you
buy books. You can get The End of the World
with Josh Clark wherever you get podcasts. If this kind
of thing floated your boat, check out the Future of
Humanity Institute the Future of Life Institute UM and they

(51:54):
have a podcast hosted by Aerial Kahn and UM. She
had me on back in December of two eighteen as
part of a group that was talking about existential hope.
So you can go listen to that too. If you're
like this is a downer, I want to think about
the bright side, there's that whole Future of Life Institute podcast.

(52:14):
So what about you? Are you, like, convinced of this whole thing, like, that this is an actual, like, thing we need to be worrying about and thinking of? Really? No, I mean, I think that, sure, there are people that should be thinking about this stuff, and that's great. As far as, like, me, what can I do? Well, and
(52:35):
then I ran into that, like, there's not a great answer for that. It's more like, start telling other people is the best thing that the average person can do. Hey, man, we just did that in a big way. We did, didn't we? We told people. Now we can go to sleep. Okay,
you got anything else? I got nothing else? All right? Well,
then since Chuck said he's got nothing else, that's time

(52:57):
for listener mail. Uh, yeah, this is the opposite of all the smart stuff we just talked about, I just realized. Hey, guys, I love you, love Stuff You Should Know. On a recent airplane flight, I listened to and really enjoyed the coyote episode, wherein Chuck mentioned wolf bait as a euphemism for farts. Coincidentally, on that same flight were

(53:22):
Bill Nye the Science Guy and Anthony Michael Hall, the actor. What is this star-studded airplane flight, he said. So naturally, when I arrived at my home, I felt compelled to rewatch the film Weird Science, in which Anthony Michael Hall stars, and I remember this now
(53:43):
that he mentions it. In that movie, Anthony Michael Hall uses the term wolf bait as a euphemism for pooping, dropping a wolf bait, which makes sense now, that it would be actual poop and not a fart. Did you say his name before, who wrote this? No. Your friend who used the word wolf bait? Eddie? Yeah, sure. Okay.
So is Eddie like a big Weird Science fan or

(54:03):
Anthony Michael Hall? I think he's just a Kelly LeBrock fan. Yeah, that must be it. Uh, it has been a full-circle day for me, and one that I hope you will appreciate hearing about. And that is Jake. Man, can you imagine being on a flight with Bill Nye and Anthony Michael Hall? Who do you talk to? Who do you hang with? I don't know. I'd just be worried that

(54:23):
somebody was gonna like take over control of the plane
and fly it somewhere to hold us all hostage and
make those two, like, perform. Or what if Bill Nye and Anthony Michael Hall are in cahoots, maybe, and they take the plane hostage? Yeah, it would be very suspicious if they didn't talk to one another, you know what I mean? I think so. Who was that? That was Jake. Thanks, Jake,

(54:44):
that was a great email and thank you for joining us.
If you want to get in touch with us, like
Jake did, you can go onto stuffyoushouldknow.com and get lost in the amazingness of it. And you can also just send us an email to stuffpodcast at iheartradio dot com. Stuff You
(55:05):
Should Know is a production of iHeartRadio's How Stuff Works. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.
