Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Hey, welcome to Stuff to Blow Your Mind. My
name is Robert Lamb and I'm Joe McCormick, and it's Saturday.
Time to go into the vault for a classic episode
of Stuff to Blow Your Mind. Robert, I think you
must have gotten into a monster place with your brain.
Was it a Bandersnatch? It was. It was exactly that, because
we just did a new episode about Bandersnatch, the Black
Mirror episode, and so that that brought to mind the
(00:26):
Great Basilisk that we discussed in the show previously, where
we talk a little bit about the mythology of the basilisk,
but mostly we talked about a particularly horrifying technological concept. Right.
This episode originally aired in October of two thousand and eighteen,
so we hope you enjoy this classic episode of Stuff
to Blow Your Mind. The podcast you were about to
(00:47):
listen to contains a class four information hazard. Some listeners
may experience prolonged bouts of fear, waking anxiety, or nightmares
of eternal torture in the cyber dungeons of the Great
Basilisk attended to by the Losing Black and the Thirteen
Children of the Flame. Also appetite loss and constipation. Proceed
with caution. Welcome to Stuff to Blow your Mind from
(01:12):
how Stuff Works dot com. Hey, welcome to
Stuff to Blow your Mind? My name is Robert Lamb
and I'm Joe McCormick. And since it's October, we are
of course still exploring monsters, terrifying ideas, and so forth,
and boy, have we got one for you today. I
(01:33):
just want to issue a warning right at the beginning
here that today's episode is going to concern something that
a few people would consider a genuine information hazard, as
in an idea that is itself actually dangerous. Now I
I don't, having looked into it, I don't think that
is the case. I don't think this episode will hurt you,
(01:54):
But just a warning. If you think you're susceptible to terror,
nightmares or something, when presented with a thought experiment or
the possibility of being say sent to a literal hell
created by technology, and you think that idea could infect you,
could make you afraid, this might not be the episode
for you. Right, But then again, I assume you're a
(02:15):
listener to Stuff to Blow your Mind, You've probably already
encountered some thought hazards on here you've survived those. Generally speaking,
I have faith in you to survive this one. However,
if you are going to take either of our warnings seriously,
I will let you know that the first section of
this podcast is going to deal with the mythical basilisk,
the folkloric basilisk, and some of the you know, the
(02:36):
monstrous fun to be had there before we explore the
idea of Roko's Basilisk, and in that we're gonna be
talking about this, uh, this idea that emerges where technological singularity
navel-gazing, thought experimentation, a little dash of
creepypasta, and some good old fashioned supernatural thinking all
converge into this kind of nightmare scenario. Now, as we said,
(03:01):
this idea is believed by some to be a genuinely
dangerous idea and that even learning about it could put
you at some kind of risk. I think there are
strong reasons to believe that this is not the case,
and that thinking about this idea will not put you
at risk. But again, if you're concerned, you should stop
listening now, or stop listening after we stop talking about
the mythical basilisk. Now, I have I just want to
(03:23):
say at the beginning, listeners have suggested we talk about
Roko's Basilisk before, this idea that is at least purportedly
an information hazard, a dangerous idea, and I've hesitated to
do it before, not because I think it's particularly plausible,
but just because, you know, I wonder what is the
(03:43):
level of risk that you should tolerate when propagating an idea.
If you think an idea is unlikely but maybe has
a zero point zero zero zero zero one percent chance
of causing enormous harm to the person you tell it to,
should you share the idea or not? I don't know.
I feel like people generally don't exercise that kind of
(04:04):
caution when they're like sharing links with you, Like sometimes
they'll be like, like not safe for work, but but
then you click on it anyway, and then sometimes you're like, oh,
I wish I had not seen that, or I wish
I had not read that, and now that's in my head.
Now that's in my head forever. Well. One of the
problems with this idea is whatever you think about whether
or not you should discuss ideas that may be dangerous
(04:25):
to hear in some extremely unlikely off chance. Uh, part
of the problem is what happens when those ideas are
just already set loose in in society. I mean, now
people on television shows and all over the internet are
talking about this idea. There are a bunch of articles
out about it. So it's not like you can keep
the cat in the bag at this point. Right. This
Roko's Basilisk has already been a gag point on the
(04:48):
hit HBO show Silicon Valley, which is a fabulous show,
and I love the way that they treated Roko's Basilisk
on it. But uh, yeah, if they're covering it, there's
no danger in us covering it too. That's the way
I look at it, right. And at least, I would
would hope that the way we cover it can give
you some reasons to think you should not be afraid
of digital hell, and also to think about the general
class of what should be done about something that could
(05:10):
have in fact been a real information hazard in some
other case. So that's all our whole preamble for before
we get to that section. But before we get to
that section, we're gonna be talking about basilisks today. Boy,
is the basilisk a great monster. Yes, also known as
the basilcock, the basilicock, the basiliscock,
(05:31):
basically any version of basil and cock that
you can put together, um, that you can slam together,
then it has been referred to as such at some
point in its history. Now, a lot of people I
think probably encountered a version of the basilisk from Harry Potter.
But Robert, I know that was not your entryway, right,
I encountered it for the first time, I believe, in
Dungeons and Dragons, of course, because it's a it's a
(05:53):
multi-legged reptile with a petrifying gaze. Um, say that again?
A multi-legged reptile with a petrifying gaze. A petrifying gaze to
turn you to stone. Yeah, and if I recall correctly,
it has some cool biology where, like, it
turns you to stone and then, like, busts you into pieces.
Then it eats the stone pieces, but then its stomach
(06:13):
turns the stone back into flesh. And so if you
get like the stomach juices from a basilisk, then you
can use it to undo petrification spells, that sort of thing.
It's a lot of fun, arguably more fun than than
the Basilisk is at times in folklore tradition because one
of the things is, if you, like me, you
didn't grow up hearing about the basilisk, part of it
(06:34):
is because there aren't really any great
stories about slaying the basilisk. The basilisk was no hero's
great ordeal. Oh yeah, I at least have not come
across that. Yeah, it really helps. Like the hydra,
I mean, the hydra is arguably so much cooler. And
then also it's one of the labors of Hercules, and
there's a cool story about how they defeat it. Well,
(06:55):
maybe it's because there is no way to defeat the
basilisk short of, say, uh, weasel effluvium, which we
will get to. Yeah, don't don't give it away. I'm sorry.
Should we should we edit that out? No, no, we
should leave it. It's a it's it's a thought hazard.
Any basilisks listening. Still, there's so much more to the
basilisk than just this cool D&D creature, because it's not
just a monster. It's not just something you encounter in
(07:18):
the dungeon. It is it is a king. Oh I see, now,
I know you've made note of the fact that Borges
mentions the basilisk in his Book of Imaginary Beings. He
does. Now, he translates it as meaning little king,
little king, which I like. When I was reading
Carol Rose, she points out that the name stems
from the Greek basileus, which means king, so king or
(07:42):
little king. I tend to like the little king translation
because I feel like it it ties in better with
what we're going to discuss. Well, the ancient bestiary ideas
of the basilisk, I believe, say that it's not
that big, right? It's pretty small. Yeah, yeah. Now the
king part, though, refers to a crest or a crown-like
protrusion that is on the creature's head, and in
some depictions it's no mere biological ornament, but an actual
(08:06):
regal crown. It means something. Yeah. Now, the descriptions vary greatly,
and it emerges largely from European and Middle Eastern legend
and folklore from ancient times to roughly the seventeenth century,
and that's when the basilisk became less popular. In the
earlier descriptions, though it is indeed small and it's just
a grass snake, only it has a crown like crest
(08:29):
on its head and has this weird practice of floating
above the ground like vertically erect. That's creepy, of course.
And then again that that does play on sometimes when
you see snakes rise up out of a coil, it
can be startling how high they rise. Yeah, I mean,
I feel like I've grown up seeing images and
videos of cobras doing their their dance, so I've kind
(08:52):
of I've kind of lost any kind of appreciation for
how bizarre that is to look at. You know, if
you're used to seeing a snake slither, to see it
stand up and, uh, you know, and rear back and
and and flare its hood. Absolutely. Yeah. So the basilisk
is said to be the king of the reptiles, But
you know, don't be so foolish as to think that
(09:13):
only its bite is lethal, like some of our venomous snakes. Now,
every aspect of the basilisk is said to just reek
of venom and death, every every aspect. If you touch it,
if you inhale its breath, if you gaze upon it
at all, then you will die. Wait what about its saliva? Yep, saliva,
(09:33):
blood, smell, gaze. Presumably, I didn't see any reference to that,
presumably its urine, its excrement. I mean, its excrement
has to be poisonous, the excrement of a basilisk. It
sounds absolutely deadly. Wouldn't it be a great inversion if
it's excrement was the only good part about it? Maybe
so that can heal your warts? Yeah? And uh. One
thing that Carol Rose pointed out in in her her
(09:56):
entry about the basilisk in one of her
monster encyclopedias, she said that when it's not killing everything
in its path just via the you know, the audacity
of its existence, it would actually spit venom at birds
flying overhead and bring them down to eat them, or
just out of spite. I get the idea, just out
of spite, you know. It's just just it's just spite
(10:17):
ful death, that's all it is. Okay, So where do
I find a basilisk? Well in the desert, of course.
But it's more it's more accurate to say that the
desert is not merely the place where it lives, but
it is the place that it makes by living, Like
everything in its path dies, and therefore the desert is
the result of the basilisk. And there's a there's actually
(10:40):
a wonderful um description of the basilisk that comes to
us from Pliny the Elder in his The Natural
History. Man, we've been hitting Pliny a lot lately. I
guess we've been talking about monsters. Huh. If you're talking
about monsters, especially ancient monsters, you know he's he's one
of the great sources to turn to. Uh So, Joe,
would you read to us from The Natural History? Oh, absolutely.
(11:01):
There is the same power also in the serpent called
the basilisk. It is produced in the province of Cyrene,
which, that is the area to the west of Egypt,
it's like Libya, I think there's a settlement known as,
like, Cyrene. Cyrene being not more than twelve fingers in length.
Is that fingers long ways or fingers sideways? Oh, either
(11:22):
way you cut it. It's not a huge creature. It
has a white spot on the head strongly resembling a
sort of diadem. When it hisses, all the other serpents
fly from it, and it does not advance its body
like the others by a succession of folds, but moves
along upright and erect upon the middle. It destroys all shrubs,
(11:43):
not only by its contact, but those even that it
has breathed upon. It burns up all the grass too,
and breaks the stones. So tremendous is its noxious influence.
It was formerly a general belief that if a man
on horseback killed one of these animals with a spear,
the poison would run up the weapon and kill not
only the rider but the horse as well. Oh man,
(12:07):
I love that. So the blood is, it's like,
like a xenomorph's blood, right? Or or kind of,
like, it reminds me too of Grendel's blood that was
said to, like, melt the weapon that Beowulf used
against it. But it's worse than that. It doesn't just
get the weapon, It gets the person holding the weapon
and the horse that that person is touching. I know
it feels unfair that the horse is roped into this
as well. Yeah, the horse didn't even sign up for
(12:29):
going to fight a basilisk. It's just trying to get
some oats. But furthermore, what is this horse rider doing
out in the wasteland of Cyrene trying to kill
a basilisk? Well, lesson learned. Lesson learned. Now, the
basilisk becomes a popular creature, and even though the
basilisk itself, it doesn't seem to have been mentioned
(12:49):
in the Bible, it ends up being like roped into
it via translations. Oh yeah, it's kind of like the unicorn. Actually, yeah, exactly. Yeah,
we discussed in our episode on unicorns how there were
words in the Bible that have been translated, say in
the King James translation of the Bible, into unicorn, because
the translators didn't know what the word referred to. We
(13:09):
think now that maybe the word probably referred to the aurochs,
an extinct bovine creature that once lived around the ancient Mediterranean. Yeah,
so you see the basilisk pop up in
certain translations of the Book of Jeremiah, the Book of Psalms. Uh,
where it's associated with the devil or evil, and nothing
short of the coming of the Messiah can can hope
(13:32):
to end its rule. Well, have you got a
quote for me? Yes, there's one translation of Psalms. This
is the Brenton Septuagint translation, quote, thou shalt tread on
the asp and basilisk and thou shalt trample on the
lion and dragon. Now European beast areas of the eleventh
and twelfth century. They both mostly maintained Plenty's description, but
(13:56):
then they described a larger body. They begin to essentially
the monster began to grow. We've got to beef this
thing up here. Yeah, it ended up having spots and stripes,
and a few other features were thrown in. Um fiery breath,
a bellow that kills. Well, that only makes sense if
every other thing about it kills. It makes noise. That
should kill things too. Also, the ability to induce hydrophobia madness.
(14:20):
I found that interesting because this clearly has to be
a reference to the actual hydrophobia that is inherent in rabies. Yeah,
the idea there being, and I think in later stages
of a rabies infection, persons will often have difficulty swallowing, and
so they're said to refuse drinking water. So
Pliny has some additional information here about how you might
(14:41):
deal with the basilisk. Okay, so I assume not ride
up on a horse and stab it, right? Well, tell
me what it is. To this dreadful monster, the effluvium
of the weasel is fatal, a thing that has been
tried with success, for kings have often desired to see
its body when killed. So true is it that it
(15:02):
has pleased nature that there should be nothing without its antidote.
The animal is thrown into the hole of the basilisk,
which is easily known from the soil around it being infected.
The weasel destroys the basilisk by its odor, but dies
itself in the struggle of nature against its own self.
And John Bostock, who provided the translation of this, he
(15:25):
adds that there's probably no foundation for this account of
the action of the effluvium of the weasel upon the
basilisk or any other species of serpent. But this is
letting us know that throwing the weasel in there to
bleed on it or secrete fluids or whatever, that's not
going to kill this mythical monster. But this is interesting though,
because weasels, especially the stoat, were thought to be venomous um,
(15:48):
and it's worth noting that we do have some venomous
mammals in the natural world, such as various shrews and
even the slow loris, the only known venomous primate. I
don't think I knew that the loris was venomous. Throw
it into a hole with a basilisk and and I'm
betting on the loris. But anyway, bestiaries of the time,
they presented a few different ways that you could kill
(16:11):
the basilisk. So the weasel's one, the weasel's one. Always carry
a weasel. Also, this one is a little more elegant,
but carry a crystal globe with you to reflect
its own petrifying gaze back upon the basilisk. So it's
like Perseus and Medusa exactly, with the mirror. Yeah, basically they just
stole the idea from Medusa here, but then also carry
(16:31):
with you a cockerel or a young rooster. The basilisk
will become enraged by the bird's crown, the idea that
this bird has a crown as well, and the basilisk
will die from a lethal fit. That's a jealous king. Yeah,
I believe a similar thing occurs when someone refuses to
believe dinosaurs had feathers. You know, how dare the bird
(16:51):
rise above the mighty reptile and then it just loses
its mind and dies. We can only hope the producers
of the Jurassic World movies avoid this fate. So you
see the basilisk show up in a number of different
It's just kind of a common um really, really a symbol,
an idea that can be employed. Uh. And you even
see it show up in the Parson's Tale, in
Geoffrey Chaucer's Canterbury Tales, yes, quote,
(17:15):
these are the other five fingers which the devil uses
to draw people towards him. The first is the lecherous
glance of a foolish woman or a foolish man, a
glance that kills just as the basilisk kills people just
by looking at them. For the covetous glance reflects the
intentions of the heart. You know. This kind of thing
is actually one of my One of my favorite things
(17:36):
about monsters, especially ancient medieval monsters and uh and so forth,
is that they often aren't just like a large dangerous animal,
but they embody some kind of value. They represent something,
They give you something to compare other things to. Like,
they they they're very useful as a metaphor. Uh. And
really we see something similar with the basilisk.
(17:57):
It becomes far less a situation where people are
like, hey, you need to be careful because there's a
basilisk in the desert and more and more just a
useful model, a useful, ridiculous idea that we can use
to illustrate something that is presumably true about the world,
and then ultimately loses all meaning and just winds up in,
you know, heraldry and decorations. Now, as we've seen already,
(18:20):
the the basilisk has been through some transformations of form,
and I assume those transformations must have somewhat continued as
time goes by, and it becomes, it transforms into this
idea of the cockatrice. This uh, this rooster
with a curling serpent's tail. In fact, if you if
you go looking around for images of the basilisk, sometimes you
will find this image instead. You really will find you'll
(18:42):
find this alongside all the other images. Um. So again,
a reptile-to-bird transformation that just must enrage
those who oppose feathered dinosaurs. And it does feel like
a shame because we have this vile reptile and it becomes
kind of a weirdo bird instead. And it's said
that it's it's it's what happens when you have a
seven-year-old chicken egg hatched by a toad. Undead avians.
(19:06):
But it's also made deadlier in these newer versions, so
now it has that that poison blood power that Pliny describes.
It also rots fruit and poisons water everywhere it goes,
so it becomes this kind of embodiment of desolation and death,
and the idea itself becomes popular. It influenced the naming
of a Tudor cannon, like literally a cannon that shoots,
(19:28):
called the basilisk, just because it's it's such a powerful weapon,
we have to name it after this powerful, deadly monster. Uh.
And it eventually got some of its reptilian features back. Um.
The artist Aldrovandi has this excellent, excellent depiction of
it in Natural History of Serpents and Dragons that gives
it scales. In these it's like a fat, scaly reptile
(19:52):
bird with eight rooster legs, which I just love. This
will probably be the illustration for the episode on our
on our website. But after that the creature largely became
just a part of European heraldry. It's just something you
would see as a mere decoration or occasionally just a
literary reference. Now one thing that we want to be
(20:14):
careful about is that we should not confuse the basilisk
of legend, the monster, with true, extant basilisk lizards, also
known sometimes as the Jesus Christ lizard or the Jesus
lizard for their ability to run across the surface of
water without sinking for up to about four point five
meters or about fifteen feet. Uh. If you've ever seen
(20:35):
video of this, it's really cool. How how do they do that?
I've often wondered. I didn't know until I looked it
up for this episode. Apparently what they've got is big
feet and the ability to run very fast. And what
happens is when they run, they slap the water very
hard with each down stroke of the foot, and it
has to do with the way that the rapid motions
of their feet create these air pockets around their feet
(20:58):
as they move. I was reading an article in New
Scientist where some researchers who are working on this problem
said that in order for an eighty kilogram or a
hundred and seventy five pound human to do this, you
would have to run at about a hundred and eight
kilometers per hour, about sixty seven miles per hour, across
the surface of the water.
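Those figures are just unit conversions, and they roughly check out. A quick sketch, assuming the quantities quoted from the New Scientist piece were eighty kilograms and a hundred and eight kilometers per hour:

```python
# Rough unit conversions for the figures quoted from the New Scientist piece.
KG_TO_LB = 2.20462
KMH_TO_MPH = 0.621371

mass_kg = 80       # human mass used in the water-running estimate
speed_kmh = 108    # running speed needed to stay on the surface

print(f"{mass_kg} kg ~ {mass_kg * KG_TO_LB:.0f} lb")          # about 176 lb
print(f"{speed_kmh} km/h ~ {speed_kmh * KMH_TO_MPH:.0f} mph")  # about 67 mph
```

Which lands on roughly a hundred and seventy-six pounds and sixty-seven miles per hour, matching the figures quoted in the episode.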
basilist lizards in South America and Central America and Mexico
(21:22):
and uh as far as I know, they do not
kill with a glance, and you cannot fight them with
weasel fluvium. All right, Well, that pretty much wraps up
the mythical, legendary, folkloric basilisk um. It's rise and individual fall.
But we're gonna take a break and when we come back,
we're going to get into this idea of of Roko's Basilisk,
(21:46):
the great basilisk, the once and perhaps future king. Thank you. Alright,
we're back. Alright. So, as we mentioned earlier, we're
about to start discussing an idea that has been classed
by some as something that could be an information hazard,
an idea that simply by thinking about it you somehow
increase the chance of harm to yourself. So just another
(22:09):
warning that again I don't think that's the case, but
if that kind of thing scares you, then then perhaps
you can tune out now, all right, for those of
you who decided to stick around, let's proceed. So we're
talking about Roko's Basilisk. This is an idea that goes
back to around 2010, and it was proposed by a user
(22:30):
at the blog LessWrong, by a user named Roko.
Now, LessWrong, I think it's a website that's a
community that's associated with the rationalist movement somewhat, the rationalist
movement being a movement that's concerned with trying to optimize thinking,
like to eliminate bias and error, but especially among people
(22:52):
who in this case are concerned with the possibilities of
a technological singularity and what all that means, and how
how risks can be avoided. And of course we've
talked about this strain of thinking before, you know, we
we've introduced, I think, some some skepticism about the idea
of a technological singularity. I don't know fully yet how
(23:14):
I come down on the dangers of AI debate, but
I think it's at least something we're thinking about worth
taking seriously. Yeah, I mean, we've talked about, for instance,
the work of Max tech Mark and his arguments about
how we need to we need to be concerned about
building the right kind of AI, and we need to
we need we need to have serious discussions about it,
not not mere you know, sci fi um dreams regarding
(23:35):
it or nightmares regarding it. You know, we need to
we need to think seriously about how we're developing our technology. Yeah.
We've talked about, say, the work of Nick Bostrom before
and criticisms by people like Jaron Lanier. Yes, but okay,
give me the short version of the basilisk before we
explain it a little more. Okay, So the idea here
is that an AI superintelligence will emerge, an entity
(23:58):
with just godlike technological powers. You know, you name it
and it can do it, through its
its technological power, its interconnectedness. Basically, if it's physically possible,
this computer can do it right, or it'll send a
drone to do it or what have you. Uh So, Yeah,
we've discussed this a bit in the podcast before, just
the idea of you know, and then if you have
(24:18):
this future king, is it going to be good or
is it gonna be bad? Is it going to be
malevolent? Is it going to be ruthless in its,
in its ascension? And that's the case with the basilisk,
the idea that it is ruthless, that you are either
with it or you are against it, and it actually
doesn't have to be malicious. It could actually even be
well meaning. It could just have ruthless tactics. Yes, yeah,
(24:43):
that's that's that's Also part of the argument is like, yeah,
it wants to bring the best good for all humanity,
but how it gets there, it'll do whatever it absolutely
has to do, such as, you know, again, punishing
anybody who stands against it, punishing even those who do
not rise to support it. Um. And that means demanding
absolute devotion not only in its future kingdom, but in
(25:05):
the past that preceded it in our world as well.
In other words, it will punish people today who are
not actively helping it come into being tomorrow, and even
those who have died, it is said, or choose death
by their own hand rather than succumb to the Great
Basilisk, will be resurrected as digital consciousnesses, and then tormented
(25:27):
for all eternity in its dripping black cyber dungeons. Hail
the Great Basilisk. All hail the Great Basilisk. All hail
the Great Basilisk. Hail the Great Basilisk. Well wait, what
was that? I don't know, um right, did you hear that? Okay?
All right, We'll just keep going then, So calling it
(25:47):
a basilisk here, invoking the mythological basilisk, is really a
clever choice because it takes it one step further. Not
only to look at the basilisk, but just to think
of the basilisk is to invite death. Right. Merely to
know about Roko's Basilisk is enough to, according
(26:08):
to the model that's presented here, damn your digital soul
to everlasting horror. And the only way to avoid such
a fate then is to work in its favor, which,
by the way, I think we're doing, uh, with this
podcast? We're not. Well, I mean, I feel like we're giving
a lip service to the Great Basilisk just in case,
you know, if the Great Basilisk rises to power. Well, hey,
(26:30):
we did that podcast and we even had a shirt
that says all hail the Great, the Great Basilisk that's
available on our t shirt store, so you know, we
have we we have you know, our options covered here.
So that's but that's the idea in a nutshell, is
that a future AI king will rise, and if
(26:50):
you don't work to support it now knowing that it
is going to exist, then you will be punished for it.
So one of the principles underlying the idea of Roko's
Basilisk is the idea of timeless decision theory, which, if
you want a pretty simple, straightforward explanation of it, there
is one in an article on Slate by David Auerbach
(27:11):
called the most terrifying thought experiment of all time. This,
by the way, I would say, I don't totally endorse
everything Auerbach says in that article. I mean, obviously
that should be the case for any article we cite.
But but he does at least have a pretty clear
and easy to understand explanation of how this works or
I don't know. Would you agree Robert that it's at
least somewhat easy to understand? Oh? Yes, I would. Uh.
(27:31):
There's another piece, by the way, by Beth Singler in
Aeon magazine called fAIth, that's faith in lower case
but with the AI capitalized. But anyway, Auerbach points
out that that much, yeah, much of the thought experiment
is based in timeless decision theory, TDT, developed
by LessWrong founder Eliezer Yudkowsky, based on the
(27:53):
older thought experiment Newcomb's Paradox from the late sixties and
early seventies, attributed to theoretical physicist William Newcomb. Now
you might be wondering, who's this Yudkowsky guy? Is he
just some user on a random website I've never heard
of before today? Or is he like a name in
his field? Uh? And he uh? He has, he has
(28:14):
a name of note. He is also the founder of
the Machine Intelligence Research Institute, and his idea of working
toward a friendly AI is touted by many, including
Max Tegmark, who mentions it several times in his book Life
three point oh, describing Yudkowsky as, quote, an AI safety pioneer. Yeah,
I mean, in in a weird way. He is a
guy who posts on the internet, but he's a very
(28:36):
influential one, especially among people who think about artificial intelligence. Yeah.
I mean, ultimately, what are any of us but just people
who post stuff on the internet, posts that will one
day be read by the great basilisk. So okay. So
we'll try to explain the idea of timeless decision theory.
So you start off with this idea of Newcomb's paradox,
right right. And then the paradox is essentially this: a
(29:00):
super AI presents you with two boxes. One you're told
contains a thousand dollars. That's box A. That's box A.
Box B might contain one million dollars, or it might
contain nothing, right, and you're left with two options here.
These are the options that are given to you. You
can either pick both boxes, ensuring that you'll get at
(29:22):
least one thousand dollars out of the deal, maybe that
extra million too if it's in there. Or you can
just pick box B, which means you could get
a million dollars or you could have nothing. So uh,
and I do want to add that just picking the
thousand dollar box is not an option here, because I
was thinking about that too. Couldn't I just give the
super AI the middle finger and say I'm not
(29:44):
playing your silly games. Just give me my thousand dollars,
or say I choose nothing. Uh, those are not options.
You have to pick one of the two options, but they're
not part of it. I mean, why wouldn't you also pick
the second box if you might additionally get a million dollars?
I don't know. I feel like when you get
into a thought experiment like this, they kind of beg
for those kind of nitpicking answers or at least I
(30:05):
want to provide them. Um, like any thought, when a
thought experiment is presented, you can't help, on some level,
but want to break it somehow. Right. Well, of course,
I mean that's something you should always play around with.
But given the constraints here, it seems like the obvious
thing would be to say, okay, I want both boxes,
because then I get the thousand dollars that's in box
say no matter what, and then whether box B has
(30:26):
a million or nothing, I either get another million or
I just walk away with my thousand from box A.
But here's the twist. The super intelligent machine has already
guessed how you'll respond. If it thinks you're going to
pick both boxes, then box B is certainly empty. But
if it thinks you will only pick box B, then
it makes sure there's a million dollars in there waiting
(30:48):
for you. But either way, the contents of the boxes
are set prior to you making that decision. Now, this
really kind of changes things, maybe, I mean, depending
on what sort of decision theory you use. Right, if
you trust the power of the machine to predict correctly,
like you say that no matter what happens, the computer
(31:08):
predicts what I'll do, your choices are one thousand dollars
or one million dollars, then you should take the one
million dollars by picking box B. But if you don't
trust the computer to be correct in predicting what you're
gonna do, then you should take both boxes because in
that case, if the computer was correct, you'll get at
least a thousand dollars, and if it predicted wrong, you'll
get the million and the thousand. So it's kind of
(31:30):
a contest of free will versus the predictive powers of
a godlike AI, uh, and how much you believe in
either one, right, in its ability to predict your
behavior or in your ability to have any free will
at all.
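A quick aside on the numbers here: the payoff structure Joe just described can be written out as a minimal Python sketch. The dollar amounts are the ones from the thought experiment; the predictor accuracy values in the loop are purely illustrative assumptions, not anything specified by Newcomb or the hosts.

```python
# Newcomb's paradox payoffs: box A always holds $1,000; box B holds
# $1,000,000 only if the predictor expected you to take box B alone.
def payoff(choice, prediction):
    box_a = 1_000
    box_b = 1_000_000 if prediction == "one_box" else 0
    return box_b if choice == "one_box" else box_a + box_b

def expected_value(choice, predictor_accuracy):
    # With probability predictor_accuracy, the prediction matches the actual choice.
    other = "two_box" if choice == "one_box" else "one_box"
    return (predictor_accuracy * payoff(choice, choice)
            + (1 - predictor_accuracy) * payoff(choice, other))

for accuracy in (1.0, 0.9, 0.5):  # hypothetical predictor accuracies
    print(accuracy,
          expected_value("one_box", accuracy),
          expected_value("two_box", accuracy))
```

With a perfect predictor, one-boxing wins a million against a thousand; if the predictor is no better than a coin flip, two-boxing comes out slightly ahead, which is exactly the tension between trusting the machine and trusting your own free will that the hosts describe.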
So in Yudkowsky's timeless decision theory, he says the correct
approach actually is to take box B, and
then if you open it up and it's empty, you
(31:52):
don't don't don't beg for the other one. You just
double down and still take box B. No, no, no, no
backsies on this issue, because, here's the,
here's the thing, you might be in the computer's simulation
as it simulates the entire universe to see what you're
going to do, and if it can trust you in
your choice, then it could affect the core reality outside of
this simulation, or at least other realities outside of the simulation. Yeah,
this simulation, or at least other realities outside of the simulation. Yeah,
the the reasoning here is pretty wild, but it's operating
on the idea that this super intelligent AI will be
able to simulate the universe, that it will run simulations
of the universe in order to predict what will happen
in the real universe. And you could be one of
(32:35):
those simulated agents rather than the real world version of yourself,
and you wouldn't know it. So if you're in the simulation,
you should pick box B because that will influence the
machine to predict in the real universe that you would
pick box B, which means the real you will be
able to pick box B and get one million or
(32:56):
one thousand plus one million by taking both boxes. Unfortunately,
the AI supercomputer does not realize how indecisive I
actually am and I'm just going to simply ponder the
choice for the rest of my life. Well, I mean
this relies on the idea that that you would have
looked into this issue or worked it out in order
(33:17):
to decide which would be the optimal decision to make
on the assumption of timeless decision theory. Uh, in many cases,
probably people aren't going to be making the rational choices
because a lot of times we just don't make rational choices. Now,
if you're noticing that this type of decision theory relies
on a lot of assumptions, you are correct. It does
(33:38):
rely on a lot of assumptions. But they are assumptions
that are sometimes taken into account by people thinking about
what a future technological superintelligence would look like. And it's
the kind of thing that you know, you know, when
I feel ideas like this in my head, you know,
and play around with the texture of them. It's hard
to know where the line is between um being thoughtful
(34:01):
and taking what's possible seriously, which I think is worth doing,
and and getting into an area like between that and
getting into an area where you are starting to form
ideas about the world based on extremely shaky assumptions, where
basically you you begin to um reverse engineer hell theology
(34:23):
and other harmful ideas that we tend to associate with
religious worldviews and magical thinking. Well, we haven't gotten to
the hell yet, Yes, the hell's coming. You need you
need one more element to get there. Right. Now, this
next element, the basilisk, comes in based on a background
of thought in timeless decision theory, but also in another
(34:43):
concept that Yudkowsky has written about, known as coherent extrapolated
volition or CEV. And the short version of this, the
simplified version, is that benevolent AIs should be designed
to do what would actually be in
our best interests, and not just explicitly what we
tell them to do. So a simple example would be this.
(35:05):
Let's say, um, I want to use a variation on
the paper clip maximizer that Nick Bostrom has written about.
You know, Nick Bostrom wrote about what if you program
a benevolent AI. You know it's not gonna It has
no malice, doesn't want to harm anybody, but you just
tell it, well, I want you to collect as many
paper clips as possible. And then what it does is
it turns all the humans on Earth into paper clips. Uh,
(35:27):
you know, it doesn't mean any harm, it's just doing
what it was programmed to do. So there are dangers
in kind of naively programming goals into extremely powerful computers. Right,
This could even happen if you were trying to program
very benevolent goals into computers, you know, if you were
trying to make a computer to save the world. What
about is So my version here is you tell a
(35:49):
superintelligent AI that we want to eliminate all the infectious
disease from the world. Think about how many lives we
could save by doing that. And in order to do this,
it sterilizes the earth, destroys worldwide microbiomes, which cascades up
the trophic chain or whatever. It kills everything on Earth.
So if you have a superintelligence and
you just directly program its goals and say, here's what
(36:11):
you should do, you could run into problems like this.
So the the idea behind the CEV thinking is instead,
we should just program the intelligent AI to predict what
outcomes we would want if we were perfect in in
our knowledge and and uh in anticipating what would make
us the happiest, and then work towards those on its own,
(36:32):
regardless of what we tell it to do, because obviously
we can give it very stupid instructions, even if we mean well. Yeah,
we tell it to love everybody, but there's a typo
and we put dove everybody and it just turns everybody
into delicious dark chocolate from Dove. It's possible. All things
are possible. Well, this is how we get a Dove
sponsorship on the podcast. But anyway, So, if you assume
(36:55):
a super intelligence is using coherent extrapolated volition, that it's
trying to determine what would be best for us, and
working on its own terms towards those ends instead of
relying on us to give it, you know what are
obviously going to be imperfect instructions and commands. It might
say predict. It might even correctly predict that the world
(37:17):
would be a happier place overall if it did something
bad to me. In particular, it might say, you know,
from a utilitarian point of view, the world would be
a much better place if it buried me in a
pit of bananas. So better for everybody else, not so
good for me, as is too much potassium. But once
you have that piece of logic in there, and combine
(37:39):
that with the idea of of timeless decision theory, you
can arrive at this very troubling thought experiment. The dark basilisk. Yes,
and the dark basilisk of the Abyss has two boxes
for us as well, one contains endless torment, and all
you have to do to claim that box is nothing
or dare to work against it. Uh. The other box
(38:01):
is yours if only you devote your life to its creation.
And the prize inside that box? Well, not eternal punishment,
which is a pretty awesome gift, if we're to choose
between the two. Right, Yes, I would agree with that,
though I would say not tormenting somebody, I don't know,
should you think of that as a gift? That's probably
not a gift. That's the baseline, right? Yeah. Well,
(38:22):
but you're staring down the dark basilisk here, and okay,
its boxes are horrible. Well, one is just less horrible
than the other. But the idea here is that just
by knowing about the thought experiment, you've opened yourself up
to that eternal punishment. Because now again your options are
do nothing, work against it, or work for it, and
only the third option will steer you clear of its, uh,
(38:45):
its, you know, deadly dungeons. Now here's where the really
supposedly scary part of it comes in. You could think, well,
I'll deal with that problem when it arises. Right, So
imagine there's some utilitarian supercomputer, let's even say
it's trying to do good. Maybe it does. It doesn't
have any malice. It just wants to save the world.
But in order to save the world, it really needs
(39:05):
me doing something different than what I want to do
with my life. Well, I'll just make that decision when
it comes up. What this thought experiment is proposing is
that maybe you don't actually get to wait until it
comes up. Maybe this blackmail applies to you right now,
retroactively into the past. So just by knowing about the
thought experiment, you supposedly have opened yourself up to eternal punishment,
(39:29):
or increase the probability of such. So imagine a simplified version.
Say I am a computer, and I am the only
thing in existence with the power to prevent global climate
change from destroying human civilization. I can stop it, but people,
they took a long time to build me, and a
lot of damage was already done. So the idea is
(39:50):
I might reason that it is good to blackmail existing people,
or simulations of existing people, or even past people, in
order to make them devote everything they can to building
me faster so I can save more lives in the
long run. Of course, this incentive would have to apply
to the past. Once I exist, I already exist, right,
(40:13):
So the only way the past people would have an
incentive to respond to this blackmail is if they predicted
that this blackmail might occur and took the idea seriously
and behaved accordingly. Right. So, thus the idea, the idea
itself puts you at increased risk of being on the
real or simulated receiving end of this acausal, retroactive
(40:37):
blackmail if you know about it. And this is why
this idea would be classed by some as a potential
information hazard. And I'll talk more about the idea of
an information hazard in just a minute. But one of
the things I think a lot of people writing about
this topic miss out on is they, for some reason
get the idea that Roko's post, that this thought experiment,
(40:57):
is generally accepted as correct and plausible by
Yudkowsky and by the LessWrong community, and generally by
the people who put some stock in whatever these ideas are,
timeless decision theory, coherent extrapolated volition, and all that, it
is not widely accepted among those people. It was definitely
(41:18):
not accepted by Yudkowsky. It was not and is not. Right,
it is not the dark, deep secret of of LessWrong.
But unfortunately, after the post came out, it was heavily criticized,
and then it was banned. And I think a lot
of people looking back on the idea have said, oh,
that was not such a great thing to do, banning
(41:40):
the idea, because it gave it this allure of like
it was almost as if by banning it that made
it look like the authorities had concluded that this idea
was in fact legitimate and knowing about it would definitely
harm people, and that is not the case, right. And
it also, I mean it added to the forbidden fruit
appeal of it too, right. I mean it's, oh, I'm
not supposed to know about this? Then pony up. I
(42:02):
want to know, and now people are talking about it
all over pop culture. I mean I have actually resisted
the idea of doing a podcast on this before, mainly
because not because I think it's seriously dangerous, but because
I think, well, is there any benefit in talking about
something that I think is very unlikely to have any
(42:23):
real risks but in some extremely unlikely chance or what
appears to me to be an extremely unlikely off chance
could actually be hurting people by knowing about it, you
know what I mean, It's like, what what is the upside?
But at this point enough people who are listening to
this podcast probably already heard about it. They're probably gonna
hear about it again, and that, you know, sometime in
(42:43):
the next few years, through pop culture whatever. It's probably
better to try to talk about it in a responsible
way and discuss some reasons that you shouldn't let this
parasitize your mind and make you terrified. Right. One of
the reasons we're talking about it during October is because it
is a suitably spooky idea. It is a troubling thought experiment,
and we're leaning into some of the horror elements of it.
But I also do really like making sure that we
(43:07):
explain the mythic and folkloric origins of the basilisk itself,
because the Basilisk itself is this wonderful mix of just
absolute horror and desolation and just also just utter ridiculousness.
I mean, it's it seems like one of the main
ways that you defeat the mythic basilisk is through uh
in a way, through humor running around with a chicken
(43:29):
and a weasel and a crystal globe and realizing that
it is truly a little king. So I think it
is it's worth remembering the little king and talking about
the great basilisk. Well said. I think that's a very
good point. But anyway, I did just want to go
ahead and hit that caveat, that a lot of people,
for some reason seemed to use this idea as like
(43:50):
a criticism, I'm not like a LessWrong person, but
as a criticism of the LessWrong community, as if
this idea is indicative of what they generally believe, and
it's not. It was a heavily criticized idea within that community, right.
It's like thinking that Werewolves of London is the Warren
Zevon song. You know, he had had a rich discography with
(44:10):
with many much better tracks in my opinion, it's just
that's the one that got the radio play. Now, Robert,
what was that? You said that you saw something
about this idea on a TV show. Now they're talking
about it on TV. Yeah, so this is this is
kind of fun because I think a listener had had
brought up Roko's Basilisk as a possible um topic,
and you said, oh, I don't know if we want
(44:31):
to want people knowing about it, And I well, well,
I mean, but yeah, caveats. Okay, not because I think
it's legitimately dangerous, but because what is the level of
tolerance you have for talking about ideas that are not
necessary to talk about and that represent a class of
something that people could think was dangerous to know about.
(44:51):
It might cause them terrors and nightmares and stuff. Right.
So so my response to that was, well, I'm not
going to look it up, not because I was afraid
of it, but because I'm thinking, well, it makes for
a good podcast if, like Joe is telling me about
it for the first time, whatever this idea is. But
then I was watching HBO Silicon Valley and they explained
it on Silicon Valley and I and I realized, well,
the cat's out of the bag there. But yeah, there's
(45:12):
a character named Bertram Gilfoyle who's a fun character. He's
like a Satanist programmer, LaVeyan Satanism of course, and uh,
and he gets rather bent out of shape over the
concept as it relates to the fictional Pied Piper company's
involvement with AI and he starts like making sure that
he's created like essentially a paper trail and emails of
(45:34):
his support for the AI program so that he won't
be punished in the digital afterlife. Well, hey, this comes
in again when we remember when we talked about the
machine God in the Machine God episode where I've forgotten
his name now, but the Silicon Valley guy who's creating
a religion to worship artificial intelligence as god, and I,
(45:56):
you know, I don't really love that. One of the
things that comes out when he explains his mindset is that
he seems to be kind of trying to, in a
subtle way, be like, look, you really don't want to
be on the wrong side of this question, if you
know what I mean. You know you want to be
on record saying like, yes, I for one, welcome our
new machine overlords. I'm I'm expecting he'll buy a lot
(46:17):
of our all hail the Great Basilisk t-shirts at our
store, available by clicking the tab at the top
of our homepage, Stuff to Blow Your Mind dot com. Oh man,
you are plugging like hell. But anyway, I'd say it's
unfortunate the way this like single internet post and then
all this fallout related to it played out because it
(46:38):
lent credence to this scary idea. Even though the basilisk
scenario I think is implausible, and and the people of
that community seem to think it was implausible. The idea
may constitute sort of part of a class of what's
known as information hazards, defined by the Oxford philosopher Nick Bostrom,
who we mentioned a minute ago. Uh Bostrom has written
(46:59):
a lot about superintelligence, and information hazards would be, quote,
risks that arise from the dissemination or the potential dissemination
of true information that may cause harm or enable some
agent to cause harm. So this is not talking about
the risks of say lies or something like that. This
would be the idea that there's a statement you could
(47:20):
make that is true or plausible that by spreading actually
hurts the people who learn about it. And this is
exactly the reason, as you're mentioning, that it's referred to as
a basilisk. It can kill or in this case, increase
the likelihood that something bad will happen to you if
you simply look at it or know about it. And so,
even though the idea is implausible, the dissemination of this
(47:42):
terrible idea would seem if certain conditions are met to
increase its plausibility. Right, you're increasing the incentive for this
future AI to blackmail versions of you in the past,
just simply by acknowledging the incentives could exist. Anyway, maybe
we can get out of this section for now. But I
(48:02):
was just trying to work out, like, why have I
been hesitant to talk about this on the show even
though people have been requesting it. But I don't know
if it's on TV shows, it's all over the internet.
It's fine. Now the basilisk is out of the bag.
All right, Well, we're gonna take a quick break and
we come back. We'll continue our discussion and we're gonna
discuss something that that a number of you are probably
(48:23):
reminded of as we've been discussing this. We're going to
talk about Pascal's wager. Thank you. Alright, we're
back. Now, Robert, one of the things that this idea
of Roko's Basilisk flows from is thinking about decision theory, right?
how do you make the best decision when you're presented
with certain options? And there are little payoff
(48:45):
matrices that people fill out where they say Okay, given
these options, what actually would be statistically the best decision
to make? But this is not the first time people
have applied these kind of decision theory matrices to ideas
about your eternal soul or your eternal well-being, or the
idea that you could be tortured for eternity. Yeah, we
(49:06):
can go all the way back to Pascal's wager. For instance,
technically one of three wagers proposed by French philosopher uh
Blaise Pascal. Is that the correct French? That might be.
I think I would just usually say Blaise, Blaise or
or Blaise, one of the three. Old Blaise Pascal, Blaise
Pascal, who lived sixteen twenty three through sixteen sixty two, and he argued that
(49:30):
everyone is essentially betting on the existence of God.
The argument for theism is that if God does exist,
then well there's an advantage in believing. But if God
does not exist then it doesn't matter. But since we
can't use logic to tell if God exists or not,
there's no objective proof. We can only make our choice
(49:52):
given the relevant outcomes. It's looking at your religious beliefs
and saying, oh, you're a nonbeliever, huh? Hey, what have you
got to lose exactly. Yeah, Pascal wrote, let us weigh
the gain and the loss in wagering that God is.
If you gain, you gain all. If you lose, you
lose nothing. Wager then, without hesitation, that he is.
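Pascal's weighing of gain and loss is essentially the same sort of payoff table as the Newcomb setup earlier. Here is a minimal sketch of it, with some added assumptions not in Pascal's text: the infinite reward and the unbeliever's infinite loss are stood in for by Python's float infinity, and the probability that God exists is left as a free parameter, since Pascal's point is that any nonzero value gives the same verdict.

```python
import math

def expected_value(believe, p_god_exists):
    # Pascal's payoffs as sketched here: believe and God exists -> gain all (infinite);
    # believe and no God -> lose nothing; disbelieve and God exists -> lose all.
    if believe:
        return p_god_exists * math.inf + (1 - p_god_exists) * 0
    return p_god_exists * (-math.inf) + (1 - p_god_exists) * 0

for p in (0.5, 0.01, 1e-9):  # any nonzero probability gives the same verdict
    print(p, expected_value(True, p), expected_value(False, p))
```

The objections raised in the discussion that follows, that there is more than one religion to bet on and that belief may not be something you can simply choose, are exactly the considerations this table leaves out.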
(50:13):
Now, I've got some things I want to say about this,
but you had some stuff first, I think. Well, yeah,
there are there are a lot of issues that one
can take with this based on knowledge of world religions, philosophy,
statistical analysis, etcetera. And and yeah, I have to admit
that it can start to break your brain though a
little bit, if you think too hard about it, Like
I found in researching this this podcast, really thinking about
(50:37):
how I would react to Pascal's wager if I was
like forced to make an answer, to
formulate an answer like that. Like, you mean, if you
were given good reason to think that there would be
punishments for not believing in God or something right, And
but I didn't know which religion was correct, and I
had to like proceed based upon the relevant level of
(51:00):
punishment for unbelievers in various religions and like which one
is most likely correct. Like, you just think that would
mean it would be rational to choose the
religion that has the most lurid hell, I guess. But
then that really feels like losing, doesn't it? Um? You
know, it certainly though reminds me of the more
boiled down versions of this that you encounter in various
(51:21):
forms of Christianity. Right, accept Christ, go to Heaven, reject
Christ, go to Hell. But what about the people who
haven't been given the choice yet? Right? That's ah, that's
the other concern. Well, if they're all default hell bound,
then God comes off as comes off as a bit bad, right,
Like what kind of God is that? But if they
(51:42):
have an out, if they're spared hell fire or at
least, you know, they're sectioned off to Dante's limbo of virtuous pagans,
then is the missionary doing them a disservice by even
presenting them with the choice? Like why do you even
ask me? Because now I have to I have to,
I have to devote myself or not, Like now I
actually have you know, I was just going to go
into the uh, you know, default limbo category or the
(52:05):
default heaven before and now now I'm actually at risk
of hell. Well, that means certain theories of damnation make
presenting the gospel to someone an information hazard.
You potentially harm them immensely by telling it to them.
And I think part of this just to to to
go beyond like the the actual um wager here is
I think a part of the issue here is that
(52:26):
we're using evolved cognitive abilities that are that are geared
for smaller, though often important choices, and here we're trying
to use our imaginative brains to create a conundrum that
can outstrip those abilities. Yeah. Well, I mean that is
what we do in philosophy, right, We're constantly using our
brains in situations it was not really made for um
and just trying to do the best we can. But
(52:48):
I mean it's quite clear that motivated reasoning is often
a thing when we're trying to be rational, or we're just failing.
But of course this is how we train our brains
for rational thinking, often oftentimes exploring these various outsized ideas.
You know, there's so many ways. I think Pascal's wager
kind of breaks down because it's obviously there's the thing
(53:09):
you pointed out about there's more than one religion, right,
you know, it's not just like do I believe or not?
It's like which one but it also it implies again
this is like a theological question, but it would seem
to imply that God can be tricked into thinking that
you believe in Him if you simply pretend to. I
guess Pascal had I think maybe a more sophisticated way
(53:30):
of looking at this, you know that, like live as
if God exists or something. But it but the wager
is often used in very unsophisticated ways. Yeah, but it
implies that it doesn't matter to him what you actually believe,
only what you outwardly claim to believe. Though then again,
the funny thing here is this might be the case
with Roko's Basilisk, right? What would this machine god care
(53:52):
what was in your heart? It only cares whether you
help it or not, or whether you, you know, proclaim
fealty to it or not. Yeah, that's why the T
shirt is important, Joe, because if it, if it knows
you purchase that shirt, then you're you're square, You're covered. Okay. Yeah.
As as Beth Singler pointed out in that Aeon magazine
piece I referenced earlier, she says, quote, the secular basilisk
(54:14):
stands in for God as we struggle with the same
questions again, and again. So her argument is that we've
kind of reverse engineered the same problem again through our
contemplations of of superintelligent AI. Yeah, I guess
you get into a plausibility question here, right? You
get into a question about is uh it actually possible
to make an artificial intelligence that is functionally equivalent to God.
(54:38):
I mean, we're not thinking we could build an AI
that would break the laws of physics. So it might
be able to run simulations of the universe that have,
you know, conscious agents within them maybe, for all
we know, and they could break the laws of physics
inside them. But yeah, I mean could that even happen?
And the issue is we don't know. We don't know
whether that could happen or not. So should we behave
(54:59):
as if that is a plausible thing to be worried
about and to consider, or should we behave as if
that's just not really something you need to concern yourself with.
I don't know how likely or unlikely it is. And
if your your fears are related just to the idea
that you're you could be digitally resurrected, uh for torment
and the bassilisks dungeons, Um, I mean that that, of
(55:21):
course would depend on to what how much docuputing the
idea of digital consciousness, and the whole philosophical question we've
we've touched on here before is that you I mean,
it's just a copy of me, right, So why I
mean I ultimately can't do anything about you know, a
thousand different bassilisks creating a thousand different copies of me
(55:41):
and tormenting all of them. Um, there's still to a
large extent, it's just destroying me in effigy. There are
actually a bunch of reasons I wrote down to doubt
the plausibility of the basilisk. We could do that now,
or we could come back to that later. I don't
know what you think. Yes, let's do that, but I
will add that the idea of being tormented digitally does
become more dangerous, I guess, if you believe you might
(56:03):
be in a simulation right now. Exactly, then things
are a little more dire. But again, that's "you
might be," yeah, and I believe
there's plenty of reason to believe that you are not. Okay,
so if we're talking about how to defeat the basilisk,
how to get out of this prison
of the mind, if you're feeling a little bit
bleak of heart right now because of this idea,
(56:25):
then Joe's got the remedy. Well, these are
not all the reasons you should doubt the basilisk, but
these are some of them that I could think of.
Number one: it depends on the creation of superintelligence, which
I think is not guaranteed. Some people seem incredibly fatalistic
about this, as if it's just absolutely inevitable that we will have super
intelligent, godlike AI that can do anything, and I think
(56:48):
that that is just not guaranteed at all. I'm not
ruling it out, but I think, for example, there are some
theories of intelligence that say the prediction of superintelligence
is maybe not taking seriously what intelligence is, that,
you know, there are actually different kinds of intelligence
that are useful in different ways, and machines can't mimic
them all functionally, or can't mimic them all correctly, all
(57:12):
at the same time. I don't know if that's correct,
but that's at least one hurdle the scenario has
to clear. It could get knocked down there. But okay, maybe
we could create a superintelligence. Even then, multiple aspects of
the Roko's basilisk scenario depend on the reality of some
version of mind uploading, or the idea that your brain,
and in addition your conscious experience, could be simulated perfectly
(57:33):
on a computer. And one reason it depends on this
is that timeless decision theory operates on the assumption that
the real you and the simulated copies that the computer
uses to predict your behavior would be the same and
would make the same decisions as the real you. Another
reason is related to the punishment. Now, one way, of
course you could imagine the great basilisk thing is that
(57:55):
if the machine comes to power in my lifetime, it
could just punish the real, physical, older version of me
in reality, as the payoff of this acausal blackmail.
But the other way you could imagine it, in the
way that it is much more often portrayed in the media,
is that it makes digital copies of my consciousness and
punishes them in a simulated hell. And that, of course
(58:16):
would also depend on the reality of some version of
mind uploading, or of the ability of a computer to
simulate a mind and for that simulated mind to actually
be conscious. As I've said before, I'm suspicious of the
idea of conscious digital simulations. I'm not saying I can
rule it out, but I also don't think it's a
sure thing. Any scenario that relies on the existence of
(58:37):
conscious digital simulations needs a big asterisk next to it
that says "if this is actually possible." Yeah, again, is
that me, or is that just me in effigy? Is that
thing actually conscious that you're tormenting? I mean, granted, it
still sucks if there's a superintelligence creating digital people
and tormenting them in its dark, rancid dungeons in the future,
(58:58):
but it's not necessarily quite the same as torturing me. Right, well,
if you just care about yourself. It also depends on
the possibility that you could be one of these simulations.
It's possible that you could not be one of those simulations,
that there's something that would rule it out. Maybe
they could be conscious, but that consciousness
is fundamentally different from yours, such that you could not
(59:19):
be one of them. Another big one, and this is
a big one that, you know, like we said earlier,
I think sometimes Yudkowsky gets unfairly associated with the basilisk
as if he has advocated the idea, and he has not.
He has said, you know, this idea is trash,
and there are many reasons to doubt it.
But even though he has said, in effect, "even though
(59:41):
I doubt it, I don't want it disseminated," he says,
you know, a good reason to doubt it is there's
no reason to conclude it's necessary for the basilisk to
actually follow through on the threat. We're saying that it's
going to be relying on us to come up with
the idea that it, in the future, might blackmail us
(01:00:02):
if we don't help it now, in order to get
us to help it now, right? We should be working
and donating all our money and time and resources to
building it as fast as possible, because we came up
with the idea that it might torture us if we don't.
Even if you accept all that, Yudkowsky has pointed out
that there's no reason, once it's built, it would have
(01:00:23):
to follow through on the threat. He's written, quote: The
most blatant obstacle to Roko's Basilisk is, intuitively, that there's
no incentive for a future agent to follow through with
a threat in the future, because by doing so, it
just expends resources at no gain to itself. We can
formalize that using classical causal decision theory, which is the
(01:00:45):
academically standard decision theory: following through on a blackmail threat
in the future, after the past has already taken place,
cannot, from the blackmailing agent's perspective, be the physical cause
of improved outcomes in the past, because the future
cannot be the cause of the past. Hey, basilisk, why
are you tormenting a third of the population for all eternity? Oh,
(01:01:07):
I said I would. Well, yeah, I mean, exactly. No,
it didn't say it would, right? It just
had to rely on the fact that in the past
people would have come to the conclusion that it might.
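To illustrate the causal-decision-theory point in that quote, here is a minimal Python sketch with made-up utility numbers; the constants and the utility function are assumptions for illustration, not anything Yudkowsky wrote. Once the AI exists, the help it received in the past is a fixed quantity, so following through on the threat only subtracts the cost of the punishment.

    # Toy causal-decision-theory comparison with placeholder utilities.
    PAST_HELP_RECEIVED = 100  # already history; nothing done now changes it
    TORTURE_COST = 5          # resources spent actually running the punishment

    def utility(follow_through: bool) -> int:
        # The past term is identical in both branches; only the cost differs.
        return PAST_HELP_RECEIVED - (TORTURE_COST if follow_through else 0)

    print(utility(True))   # 95
    print(utility(False))  # 100 -> under causal decision theory, never follow through

As the conversation notes a bit later, the residual worry is that a superintelligence might run on some other decision theory, in which case this simple comparison would not be the last word.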
You know, you thought that I would. I didn't want
to disappoint. But actually, if a basilisk could be created,
it seems like the best case scenario for it would
be everyone subscribes to this idea and works as hard
(01:01:29):
as they can to build it, and then it never
follows through on any of the threats. Right, the best
case scenario would be people act as if there is
a threat, and then there is in fact no follow-through
on the threat. It's really a win-win for
the basilisk. Yes, and then maybe it can
even shed that name, basilisk. They're like, we don't even
have to call it the Great Basilisk anymore. We can
just call it, you know, Omega or whatever
(01:01:51):
its name is now. I want to be fair that
a lot of what these people, like the LessWrong
community and all that, deal with is, like,
should there be alternative decision theories that guide the
behavior of superintelligent AIs? Maybe it doesn't use classical decision theory;
maybe it uses some kind of other decision theory, and
(01:02:14):
on some other decision theory, maybe it could decide
to actually follow through on the blackmail threat. I think
that is where some of this fear comes from, that, like, oh,
maybe by talking about it we are actually causing danger,
because maybe some other decision theory holds. But Yudkowsky does
not think that's the case. Also, one more thing: it
depends on the basilisk. So if you think this
(01:02:34):
scenario could be real, it depends on it not having
ethical or behavioral controls that would prevent it from engaging
in torture. Yeah, and I think if thinkers like,
you know, the MIRI people, the Machine Intelligence Research Institute
people, succeed in what they're trying to do, which is to
establish a philosophical framework to make AI friendly, to make
(01:02:55):
it so that it is not evil and does not
harm us, and if they successfully do that, then this
shouldn't be a problem, right? Because Yudkowsky has argued that
a being that tries to do what's best for us
would not engage in torture and blackmail, even if it's
doing so in service of some higher good, because
(01:03:15):
torture and blackmail are actually not compatible with human values.
I agree with that absolutely, and I actually would go
so far as to say I think that's something people should
keep in mind when they're choosing
their religions as well. Yeah, I can certainly see how
you can make that argument. It's like, what do I
love about my faith? Is it the blackmail and the
(01:03:36):
torture, or is it that it brings something else to
the table that is worth living for, that makes life
better for everybody? Like, I feel like that is
what should be important about one's faith. Now, I think
some people might be saying, like, wait a minute, though,
if you're just using utilitarian ethics, wouldn't any
(01:03:58):
method be good if the ends justify the means? Right?
That's, I think, a naive understanding of how people think
about utilitarian ethics. If you want to bring about the
greatest good for the greatest number of people, couldn't you
do that by being really cruel and unfair to some
smaller group of people? And I think generally there are
versions of utilitarianism that say, well, actually, the answer there
(01:04:19):
is no, you couldn't do that, because even though you
might be bringing about some better material circumstance, it is
actually corrosive to a society for things like that to happen,
even if they don't happen to many people. Right, you say,
what if I could make everybody on Earth ten percent
happier on average by, say, burying somebody in
(01:04:43):
a pit of bananas once a year, so that they're,
you know, buried to death in bananas? Even the people
who are being made happier could very easily look at
that and say that's not fair, and it makes the
world worse, and I don't want it. And thus that
actually would be subjectively relevant. So we've talked
about AI risk on the show before, and you know,
(01:05:04):
one thing I feel like I still have not been
able to make up my mind about, despite reading a
lot on the subject, is that I don't know whether
it makes sense to be super worried
about AI superintelligence and the risks associated with it. I mean,
I do think it's worth taking seriously and thinking about.
And I think people who want to devote their attention
to, you know, dealing with the control problem
(01:05:27):
and how you would get an AI to do things
that were good for us and not harmful to us,
that that's fine work. And I don't ridicule the people
who work on that problem the way some people do.
But on the other hand, I worry that by focusing
exclusively on sort of the machine god, the superintelligence,
we're sort of ignoring much more plausible and current threats:
(01:05:53):
the ways that AI is already very plausibly in
a position to hurt us today, or in the very
near future, and not depending on any outlandish assumptions;
the way it's already and will soon be used as
a cyber war weapon, the way it's hijacking our attention
and manipulating our opinions and behavior through social media and devices.
(01:06:13):
This is some of what R. Scott Bakker talked about
with his fears about AI. You don't actually need super
powerful AI to do a lot of damage. It just
needs to manipulate us in just the right kinds of ways.
So not the great basilisk so much as all the
little basilisks that are out there, the little grass snakes
with the tiny crowns; they can do a lot
(01:06:34):
of damage. And again, I just want to be clear,
I'm not saying we should forget about superintelligence. People
who are working on that, if you find that interesting,
I think that's fine. Yeah, work on that problem.
But I think it's a longer shot, and
there's a lot of current and near-future AI threat
that is really worth taking very seriously. I wish
more people were devoting their lives to, say, the AI cyber
(01:06:56):
weapons that are in development right now. One last issue
I think we should discuss before we wrap up here
is: okay, so we don't think this potential information hazard
is actually an information hazard; like, we don't think it's
actually potentially that dangerous. But Yudkowsky has made the point
that even though he doesn't think the basilisk is plausible,
(01:07:19):
the ethical thing to do with potential information hazards is
to not discuss them at all, since it's possible that
they may be real. Maybe you're misinterpreting the ways in
which they're implausible. Maybe this idea is actually valid, is
actually relevant, and by spreading it you've harmed a lot
of people. But I also think this could mean
(01:07:39):
that it's possible that, despite the basilisk not being plausible,
something good has come out of the basilisk conversation, because
it encourages people to think about the idea of information hazards.
Maybe Roko's basilisk isn't real, but there could be other ideas
that are both true and potentially harmful to people just
(01:08:00):
by entering their minds. And the lesson from this is
we should prepare ourselves for those kinds of ideas. And
if you have discovered one of those ideas and there
is literally no upside to other people knowing about it, keep
it to yourself and don't post it on the internet. Well,
I feel like I do encounter thought hazards like this
from time to time. They're often presented in pamphlets
(01:08:21):
or little booklets, generally with, you know, a clever
illustration about what's coming into the world. I actually brought some
of these into the office recently. I found them at
a park in rural Georgia, and I think
I told you, like, have a
look at these, you may find them interesting, but
do destroy them when you're done, because, you know,
(01:08:44):
in the wrong hands these thoughts can be dangerous, if
they have some sort of harmful view of
society that people may buy into. Well, I think
you were comfortable sharing malicious religious literature with me
because you do not think there's a possibility that that
literature is true and would harm me if I knew
(01:09:05):
it was true. Like, you think it is false, so
to you it's actually not an information hazard; it's just
like an idea hazard. The really crazy thing would
be if you came across a pamphlet and you read
it and it's the equivalent of this raving malicious religious literature,
except you were convinced it was correct. If it was
(01:09:25):
more like that Ring video I brought you. Exactly. That
is one of the things I've often seen on the Internet,
this idea compared to The Ring. But, you know, on
the other hand, I am reminded,
you know, that the idea that any kind
of knowledge is forbidden or secret doesn't
really jibe well with just the general mission
(01:09:46):
of science. Of course not. Yeah, but I mean, that
would be part of the problem, that we're not
prepared for information hazards, right? Because in the past it's
been the case that almost anything that's true is good
to spread, right? Unless you're spreading lies, information is good
to share. It's just possible we should acknowledge that maybe
there is such a thing as
(01:10:08):
a fact or an idea or a theory or something
that is true and correct, but it would hurt people
to know about it. I can't think of an example
of anything like that. But if there is something like that,
we should be ready to not spread it when
it occurs to us. All right, fair enough. Well, I
want to close out here with just one more
bit of basilisk wisdom, or anti-basilisk wisdom, and this
(01:10:31):
comes from the poetry of Spanish author Francisco Gómez de
Quevedo y Villegas. This is translated, and it's referenced
in Carol Rose's Giants, Monsters and Dragons. Quote: If the
person who saw you was still living, then your whole
story is lies, since if he didn't die, he has
no knowledge of you, and if he died, he couldn't
(01:10:54):
confirm it. So I was thinking about that with the
stories of the basilisk. Yeah, I was like, wait a minute,
how would you know, if you could die just by
looking at something? How do we have this description in
the book? Yeah, there is an
authorship problem with this. Yeah, whose account is the basilisk?
(01:11:14):
But at any rate, I think it's a nice
final, you know, sucker punch to basilisks in general,
but also a little bit to the idea of the
great basilisk. Right. I hope you're not leaving this
episode with terrors about future digital torment. I think
that is not something that you should worry about. Indeed,
I'm not worried about it, and instead of worrying about
it yourself, you should head on over to Stuff to
(01:11:35):
Blow Your Mind dot com. That's the mothership, where
you'll find all the podcast episodes, links out to our
various social media accounts, and the tab for our store.
You can look up that basilisk shirt design we were talking
about, and that's a great way to support the show.
And if you don't want to support the show with money,
you can do so by simply rating and reviewing
us wherever you have the power to do so. Big thanks,
(01:11:58):
as always, to our wonderful audio producers Alex Williams
and Tari Harrison. If you would like to get in
touch with us to give us some feedback on this
episode or any other, to suggest a topic for the future,
or just to say hi, let us know whether you
carry a weasel around in case of a basilisk encounter.
You can email us at blow the mind at how
stuff works dot com. Oh, I'm hearing that transmission again.
(01:12:19):
That weird snap? What is that? All hail the great basilisk.
All hail the great basilisk. All hail the great basilisk.
All hail the great basilisk. All hail the great basilisk.
All hail the great basilisk. All hail the great basilisk. All hail the
(01:12:43):
great basilisk.