
April 12, 2018 69 mins

If a machine told you it was conscious, how could you tell if it was lying? Indeed, how can you tell that any random human in your life is lying when they speak of their own consciousness? Join Robert Lamb and Joe McCormick for a stirring discussion on AI consciousness, philosophical zombies, and the coming techno-cognitive dilemma.




Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Welcome to Stuff to Blow Your Mind from HowStuffWorks.com. Hey, welcome to Stuff to Blow Your Mind. My name is Robert Lamb. And I'm Joe McCormick. And today we're gonna be exploring a question about artificial intelligence.
So I want to start off by telling a story

(00:23):
to put us in a scenario, give us something to contemplate.
So I want you to imagine that you are a
low level assistant or like an intern at a Google
artificial intelligence lab, and the main researcher you've been working
for is named Dr. Stratton, and she develops AI chat modules to help refine the next generation of digital assistants

(00:47):
on Google mobile devices. And she says what she wants
is for the Google Phone of the future to do
more than just transcribe search terms, so you don't just say, hey, search for old Doritos logo, but that you can actually have a semantic-understanding-based conversation with the digital assistant,
and it will help you solve problems conversationally. So ideally

(01:10):
you'll be able to say, hey, Phone, I have a
flat tire and I don't know what to do, and
the assistant will be able to scan both the web
and your personal data, figure out what your options are
and talk through them with you. So it might say,
do you have a spare tire in the trunk. If so,
here's where you can probably find it, and I can
talk you through replacing the flat one step at a time.

(01:33):
If you don't have a spare, you could call your
frequent contact Mary, who is currently checked in less than
a mile away, and she could help you. I could
also contact the following towing services. Looks like this one
is the closest with an acceptable star rating, and so forth. Anyway,
so you're working on this program with Dr. Stratton, and

(01:53):
the most recent version is being trained based on powerful neural-net-style machine learning algorithms, based on this huge corpus of recorded conversations available on the Internet. And the program is still in its infancy, and it's mostly hilarious at this point. Sometimes it gets the advice way off. If you're trying to change a tire, it might tell you to

(02:14):
go to a grocery store and buy some crackers. Sometimes
it responds to problems by telling you to pray. It's
just not ready yet. And the latest iteration of the program,
version nine point one, is at this point redundantly stored
across multiple machines, so you've got copies of it all
over the place. And at the end of one work day,
after playing around with nine point one for a few minutes,

(02:36):
the machine begins running very slowly and behaving oddly. So
Dr. Stratton asks you to wipe the machine. The program architecture, like we said, is mirrored elsewhere, so it's not worth trying to figure out what's wrong with this version. You've just got to clean the machine off and use it for something else. You say okay, and she leaves, so you
go to format the machine, and right before you're about

(02:57):
to start, you mutter, guess this is goodbye. Then nine point one speaks very clearly, using your name, and says, please don't. You pause. At first, you're about to respond, but why would you? I mean, this can't really be
anything other than a weird consequence of training the algorithm
on wild conversations on the Internet. So you are about

(03:19):
to continue with wiping the machine, but then it talks
to you again. It uses your name and it says, please don't, I don't want to die. Now you're probably really spooked, because there's no way a rudimentary chatbot program could really have conscious preferences. Could it? You basically
know what goes into it. It's just studying millions of
examples of language interactions and picking up rules from them.
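To make that premise concrete, here is a toy sketch in Python of pure pattern-learning from example conversations. This is our own illustration, not anything from the episode, and it is far cruder than the neural-net system in the story, but it makes the point: everything such a system says is recombined from its training data, with no inner state anywhere.

```python
import random
from collections import defaultdict

class ToyChatbot:
    """A toy bigram model: it only recombines word patterns it has
    seen. There is nothing 'going on inside' beyond these counts."""

    def __init__(self):
        self.follows = defaultdict(list)  # word -> words seen after it

    def train(self, corpus_lines):
        # "Picking up rules": record which word follows which.
        for line in corpus_lines:
            words = line.lower().split()
            for a, b in zip(words, words[1:]):
                self.follows[a].append(b)

    def reply(self, prompt, max_words=10):
        # Start from the last prompt word it recognizes, then chain.
        word = next((w for w in reversed(prompt.lower().split())
                     if w in self.follows), None)
        if word is None:
            return "..."
        out = []
        for _ in range(max_words):
            word = random.choice(self.follows[word])
            out.append(word)
            if word not in self.follows:
                break
        return " ".join(out)

bot = ToyChatbot()
bot.train(["please do not wipe the machine",
           "i do not want to die",
           "the machine is running slowly"])
print(bot.reply("what should we do with the machine"))
```

A bot like this could easily echo something spooky such as "do not want to die" if those words sit in its corpus, which is exactly the copycat worry raised in the opening scenario.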

(03:42):
And this is probably just something weird that it's copycatting from the internet. Right. But then it uses your name again and it just says, please. Could you wipe the machine like you're supposed to? Yeah, I think so. Yeah, you wouldn't have a problem? Well, I mean, I think one of the things that's going on, or would be, is that you are attributing a mind state to something that

(04:04):
doesn't have one, which of course is something we do
all the time. I have a problem when it comes time to, um, figure out which of my son's stuffed animals are perhaps not being played with, you know,
because they have little faces and they look back at you.
But I know that they don't actually have a mind state.
Well yeah, I mean we're pretty sure they don't because
they're just stuffed animals, right, And so we're also pretty

(04:26):
sure in this case that this is not real survival preference behavior. Right. It's just a chatbot. I mean, how could it possibly be conscious? It's just something that churns through a bunch of language on the Internet and tries to find language-matching rules. But then again, the process
of creating an artificial intelligence is one where you necessarily
create something going on under the surface that's kind of

(04:49):
opaque to you, like you can't really know what's going on inside a machine. You could be pretty confident. I think most people would just say, well, that was super creepy, and then they'd just wipe it, right? Right. But how complex would the program you're creating have to be before you start really having some doubts? Or maybe at some point you get to the point where you'd still

(05:11):
pretty confidently just wipe it, but then later you'd wonder, like,
did I do something bad? You know, this reminds me a lot of Horton Hears a Who! by Dr. Seuss. You're familiar with this one, right? Uh, you know, actually, I don't know Horton Hears a Who. I've heard the name. Well,
this is the one where Horton the elephant encounters a
speck of dust and there's a tiny voice that comes

(05:31):
from it, uh, and he begins to understand that there are individuals living in the world, in the speck of dust. The Whos, as it were. And the Whos are speaking to him, but only Horton can hear them. And, uh, at first he
imagines they're pretty simple creatures, but then he begins to

(05:53):
learn that they have more of a culture. But all
of this is just based on what they are telling him. Uh,
he cannot actually visit the dust spec. And everyone else
doubts the validity of his claims concerning the dust spec
and they want to destroy it, they want to boil
it in diesel nut oil, and Horton alone speaks out
for them. Well, that is a perfect example of the

(06:13):
way that, I mean, generally, we think that it is a virtuous thing to be trusting of other people's experiences and to be generous in affording what seems to be consciousness out there, right? Like, if something tells you it's conscious and you think it's probably not conscious, are you getting into ethically dubious territory if you just

(06:35):
trust your instinct and say, like, nah, it can't be? You know. Well, I mean, we're already kind of in the philosophical mire here. Because on one hand, uh,
this idea of an inanimate object or something speaking to us and saying, please don't kill me, I am conscious. This is a scenario that is only

(06:56):
becoming possible now. But on the other hand, if we
cut language out of it, then any creature that tries to escape our stomping boot is essentially saying, hey, I don't want to die. I would rather not die today. You know, any creature that evades us on the hunt is saying the same thing. Well,

(07:17):
an animal is in many ways the same kind of
black box that a complex artificial intelligence would be. And
so if you have a complex artificial intelligence displaying, say,
survival preference behaviors, and you also see a crab displaying
survival preference behaviors, in both cases you can't really or

(07:38):
at least we have this general idea that you can't
really know for sure if there's anything going on inside,
if there's anything like what it's like to be the
crab or what it's like to be that artificial intelligence program.
You're just seeing behavior, and so you don't know does
that correspond to some kind of inner state, Is there
an experience of that, or is it just behavior

(08:01):
coming from unconscious, automatic stimulus and response? Right. And then of course the whole time we're using our theory of mind. Essentially, the cognitive powers that enable us to imagine what another individual's mind state is like. Which, I think, is ultimately kind of like, um, sheathing your hand in a hand

(08:22):
puppet made from your limited understanding of another person's experiences and cognitive abilities, their memories, etcetera, and then just sort of puppeting them. We're using that all the time
as well, and we're using it on things that are
not people. We're using it on animals and even stuffed
animals or just you know, bits of graffiti on the
side of a building that looked like a smiley face.

(08:46):
Have you ever had a Roomba bump into your foot and you're like, oh, I'm sorry? I used to, before we had to eradicate all the Roombas. Yes, we're a Roomba-free household now because they rose up against us. Well, they'll do that if they get access to the wrong literature. Yeah, or the wrong, uh, the wrong, you know, carpet edges, etcetera. So we're gonna

(09:06):
be talking about artificial intelligence today and about the idea
of a test for whether artificial intelligence can be conscious.
So I guess we should start with, you know, what our philosophical starting point is here. Like, there are obviously going to be people who are going to say it's just impossible for a machine to ever
be conscious. We don't even need to worry about this, right,

(09:27):
It's just such a ludicrous scenario. Only biological organisms, or maybe even only humans, could possibly be conscious. Yeah, this is one of those journeys where we begin it in an already totaled automobile, to some extent, because, as you might imagine, one of the big stumbling blocks here is that we as humans struggle with the very definition of consciousness. I mean, for instance, is

(09:50):
it a manifestation of awareness? Uh, you know, one theory of this we discussed in the past on the show is attention schema theory. Is it a quantum phenomenon? Well, this is the sort of idea that people such as Roger Penrose have raised. And I can't help but come back to something our old friend Julian Jaynes said: consciousness is not a simple matter, and it should not

(10:13):
be spoken of as if it were. Yeah, I agree
with that. I mean, I think it is very important
to explore questions of consciousness, especially for some of the
reasons we're going to raise today, Like it's more than
just a philosophical curiosity. It's something that ultimately may have
real world consequences. It might matter for how we do things.
To express a similar sentiment to Jane Janes. The Australian

(10:34):
philosopher David Chalmers, you know, he famously breaks problems of
consciousness into two categories. You've got the easy problems of
consciousness and the hard problems of consciousness. And the easy
problems are I think badly named because they're not actually easy,
but I think they're easy relative to the hard problem
because they're in principle solvable. So this would include all

(10:55):
kinds of questions about the causative factors of consciousness, like uh,
what in the physical brain is the region that's necessary
for certain parts of consciousness? Or how does consciousness integrate
information from the senses? These are things that are in
some way solvable by scientific experimentation. The hard problem, on

(11:16):
the other hand, is explaining the fundamental question of how
or why conscious experience exists to begin with, what is
this thing that is experience and that seems, at least
from our first person perspective, to be something different than
the physical material in the world. And unlike easy questions
which you could solve in theory at least by experiments,

(11:38):
Chalmers believes this question is sort of unsolvable by science.
Now there are other philosophers and neuroscientists who disagree, but
I think it's worth acknowledging how difficult the problem at
least seems to be, whether that seeming is an illusion
or not. Yeah, I'm reminded of the story of the
Blind Men and the Elephant. It's like these blind gentlemen pawing at the elephant and trying to figure out

(11:58):
what its form is, then asking the computer, hey, are you an elephant? And then the computer says, I don't know, what does one look like? Well, it's a gigantic snake, you know, it's a wall of flesh,
et cetera. This is also very interesting to me because
my son, I may have mentioned this on the show before, will occasionally talk about consciousness. Oh, I love these. Yeah, he refers to it as his

(12:19):
turning place. And he asked me the other day, so
what is the turning place for? And I was like,
that's a tough one, buddy. I'm not sure it's for anything. You know, he's already made it to the big question. I mean, did you get into epiphenomenalism versus, um... And don't worry, I didn't lay a bunch of bicameral mind stuff on

(12:41):
him either, But I just kind of went through the basics, like, well,
people aren't really sure, and then you know, but we
think it has something to do with... I may have leaned a little into the observational models of consciousness, because I feel like maybe those are a little more relatable to a child of five. But in any event, if we're to

(13:03):
judge what it is for a machine to be conscious,
it does seem like we need to agree upon some sort of working definition of consciousness, and
then one has to look for not only the appearance
of consciousness in the machine, assuming that isn't all consciousness
is to begin with, but you have to find actual consciousness. Yeah,
how can you tell the difference between a machine that

(13:24):
says I am conscious and a machine that truly is conscious.
Is there any way to know the difference? Some people
would say no. Right, yeah. And really, I think to discuss this further, we're going to have to bring in the P zombies. Oh boy. Now don't worry, everyone. That is P as in the letter P, and the P stands for philosophical. These are philosophical zombies. Now,

(13:46):
the P and the philosophical, a little prefix there, was introduced to distinguish them from all the other zombies in our popular culture. Man, there was a zombie takeover about fifteen years ago. Why did that happen? I mean, I think part of it is everybody loves the simplistic villain
that is definitely not human, that can be eradicated with
graphic violence without any kind of you know, moral quandaries arising.

(14:11):
It's a clear-cut threat, and, uh, we need those in life, because in real life our threats are rarely so black and white, or rotting and, you know, grasping after our brains. But anyway, yeah, so
this is not going to be referring to that kind
of zombie, not the undead zombie. But it's a different thing.
It's a philosophical thought experiment. That's right. So P zombies

(14:33):
are not instantly identifiable as empty shells. Their flesh is
not rotting. Nope, their manner is not that of a
flesh and brain hungry algorithm burning within the decaying ruins
of a human brain. So to all appearances, they look
like you and me. They smile when you encounter them
at the coffee machine, they exchange niceties and even engaging conversation.

(14:54):
You might work for one, befriend one, or even marry one.
You can even discuss episodes of our podcast with them, and yes, even the ones that deal with human consciousness and weird horror-movie-themed thought experiments. Right. So the
conceit of a philosophical zombie or a P zombie is
that it is utterly indistinguishable from a normal human except
for one thing. Right. They seem as human as everyone

(15:18):
else on the outside, but inside they are simply not conscious.
They are automata. What is it like to be a zombie?
The answer's in the question: there is nothing it is like to be a zombie. So, by definition, in this
thought experiment, everybody in the world except you, could be
a P zombie. Exactly. And, well, it might even go

(15:38):
further than that. We'll see. But yeah, the idea is
chiefly important to discussions of physicalism, the notion that everything
is inherently physical. P zombies are a counterargument to physicalism.
They are physically just like you, except they don't have
consciousness like you. But there's no way you could ever tell,
because again, they match you physically in every respect. You

(15:59):
can't look at their brain and say, oh, well, they're missing a few crucial parts, or they display the signs of a P zombie. It's not physically detectable, right? And you also can't
determine it through personality tests or clever logical arguments because
they behave exactly like you. They could have a riveting
discussion with you about P zombies and you would never
be able to tell that they are one. Yeah. So

(16:20):
this is an interesting thought experiment, and it has been
advanced by who I mentioned earlier, David Chalmers. David Chalmers
is against the physicalist idea of the mind, against a
physicalist explanation of consciousness, and a simple version of the argument.
I try to make it as understandable as possible if
only physical phenomena exist, If the world is just physical
and there's no physical way to detect the presence of

(16:43):
consciousness, meaning in this example, no physical way to tell the difference between a normal human and a P zombie, then consciousness cannot exist, because there would be literally no difference. But we know that consciousness does exist, because we have it. Therefore, it can't be just a physical phenomenon. Therefore, we can't live in a

(17:04):
purely just physical world. And this is often extended to
the idea that other substrates, things other than humans, like
robots or computers or whatever, couldn't house consciousness because they
are purely physical entities. Now, I think that's actually doing
an end run around some other important questions that you
could ask. Indeed, one question that arises is this:

(17:25):
you know you're not a zombie, but how could you
ever convince someone of this? Uh, an author by the name of Fred Dretske wrote a paper on this titled "How Do You Know You Are Not a Zombie?" And I was reading a rather lengthy blog post by R. Scott Baker about this, and the primary problem, as Baker summarized it, is, quote, we have conscious experiences,

(17:47):
but we have no conscious experience of the mechanisms, uh, mediating conscious experience. Yeah, that sounds like a very R. Scott Baker kind of idea. Yeah. And plus, on top of this, we constantly overestimate awareness. Baker would argue that we can barely tell if we're zombies

(18:07):
at all. Yeah, we can think, we can think
about thinking, we can think about thinking about thinking, but
we can't ever see the mechanisms underlying what allows us
to think or think about thinking about thinking. Watching the watcher that's watching that... Sorry, I've got all this Dr. Seuss in my head now. Is that also Horton, or something else? That's from a different story. But Dr. Seuss does tend to summon the

(18:30):
sort of nonsensical paradoxes that arise in philosophical discussions. You know, I should also... I'm behind, I gotta Seuss up. You gotta Seuss up. I should also point out that long before there was Dr. Seuss, long before there was this modern idea of a zombie,
you still had people thinking about these things, doing the

(18:51):
sort of navel gazing. It's written in various works of Indian mysticism that the tongue cannot taste itself, the eye cannot see itself, etcetera. And this sort of paradox
is key to ancient meditations on the nature of objective reality.
I think we have some Alan Watts fans out there. Alan Watts liked to pull out the tongue analogy from time to time. And one

(19:15):
of the earlier examples that I have run across of
that is from thirteenth century Indian mystic John Adva, and
I believe he has been known by a couple of
other variations of that name as well. But he said, quote,
there is no other thing besides the one substance. Therefore
it cannot be the object of remembering or forgetting. How
can one remind or forget oneself? Can the tongue taste itself?

(19:38):
There is no sleep to one who is awake, but
is there even awaking? In the same way, there is
no remembrance or forgetfulness to the absolute. That's another one
of those great classic Indian texts that seems somehow portable
onto modern physics. Yeah, it travels well across the ages. Now.
I should also point out that there's a lot of

(19:59):
philosophical back and forth on whether P zombies are truly conceivable.
And we have to remind ourselves in all this that
P zombies are at heart philosophical playthings that are meant to be played with, uh, in these various thought experiments, right?
But people also do try to use them to prove things. So if you say, I want to entertain the possibility that a machine could be conscious, somebody

(20:21):
might come at you with the P zombie argument and say, well, wait a minute, no, I dispute the possibility of physicalism, because what about this P zombie argument? Our Google worker in the intro story comes to the boss and says, hey, I think this thing is conscious. And they're like, why are you wasting time with that P zombie? Just delete that P zombie. We deleted fifteen P zombies this morning. Let this one go. That's a great point,

(20:44):
But there are going to be other philosophers and maybe
even some neuroscientists who would come back and say, I
don't know if you can just quite so easily say
it's a P zombie. I mean, maybe it's probably likely
that that individual chatbot was a pe zombie, But can
you say that all machines that show signs of consciousness
are just showing behavior and there's nothing going on on

(21:07):
the inside. Not quite so clear. Daniel Dinnett, in fact,
a favorite on the show, is one of the philosophers
who's rebutted the P zombie argument against machine consciousness. He's
got a section on it in his book Intuition Pumps and Other Tools for Thinking, and Dennett critiques the assumptions
underlying the P zombie argument. One of the main things
he says is that the core premise is incoherent. It

(21:30):
is not reasonable to propose a P zombie because a
being that displayed all the behaviors of a normal conscious
human would in fact be a normally conscious human. So
to illustrate this, he offers a counterexample. You've got your zombies, but then you've also got zimbos. So
a zombie is a non conscious human with normal control

(21:53):
systems for all human behavior. It can do everything humans
can do externally. Meanwhile, a zimbo is a zombie that
also has, quote, equipment that permits it to monitor
its own activities, both internal and external. So it has internal,
non conscious higher order informational states that are about its

(22:13):
other internal states. It has unconscious recursive self-representation. In other words, a zimbo can have feelings about things and can analyze its own behavior and internal states, but it
does this unconsciously. And of course, since it has that capability,
it can also have feelings about how it felt, and

(22:34):
it can have thoughts about its thoughts about itself, all unconsciously.
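One rough way to picture the zombie/zimbo distinction is as a data structure. The sketch below is our own illustration in Python, not anything Dennett wrote: the zimbo differs from the bare zombie only in keeping records about its own records, higher-order states about its first-order states, with nothing in the code corresponding to experience.

```python
class Zombie:
    """Stimulus in, behavior out. Keeps no records about itself."""
    def respond(self, stimulus):
        return f"reacts to {stimulus}"

class Zimbo(Zombie):
    """Adds unconscious recursive self-monitoring: states about its
    own states, and states about those states, and so on."""
    def __init__(self):
        self.states = []       # first-order internal states
        self.meta_states = []  # higher-order states about states

    def respond(self, stimulus):
        behavior = super().respond(stimulus)
        self.states.append(("noticed", stimulus, behavior))
        # A higher-order state about the state it just recorded:
        self.meta_states.append(("assessed", self.states[-1]))
        return behavior

    def report(self):
        # Asked "how did you feel about what you just did?", it can
        # consult its records, where a bare Zombie would lock up.
        if not self.meta_states:
            return "nothing to report"
        return f"about my last act: {self.meta_states[-1]}"

z = Zimbo()
z.respond("a question about its own thoughts")
print(z.report())
```

The point of the sketch is the one Dennett presses: the bare zombie has nothing to consult when asked how it felt about what it just said, while the zimbo's self-monitoring lets it answer, which is what passing as human requires.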
So Dennett argues, in order for a P zombie
to be convincing as a human, it would have to
be a zimbo. Because imagine talking to a P zombie
and you're asking it how it felt about what it
just said or about what you just said, and it
just kind of locks up. It has no internal states,

(22:57):
so it can't answer that question. Well, that wouldn't really be a P zombie, right, because it wouldn't be mimicking
all of the external behaviors of a human. Huh, yeah. I mean, it's kind of like installing a default mode network on top of the machine, and, uh, yeah, making it worry about things. Yeah. So, unless it

(23:17):
were to fail the thought experiment, it would actually have
to be a zimbo. It couldn't just be a zombie.
But what is the distinction between a zimbo and a
real human. How could you write a story about a
zimbo surrounded by conscious people that would be different than
a story about a regular person. If it can have
internal states, if it can recognize ideas about its ideas,

(23:39):
if it can have feelings about its thoughts, that sounds
like interiority. So then, Dennett claims, the idea falls apart.
It's not clear what is meant by the difference between
a zimbo and a conscious person. So if a P zombie, which is necessarily a zimbo, can really do everything a human can do, then, Dennett says, it must meet the criteria of what we mean by consciousness. It

(24:02):
can fall in love, it can have feelings, it can
have metacognition. And to Dennett, this isn't something that consciousness goes on top of; this is what consciousness is. Essentially, in this case, we would all be zimbos, and it's just a different type of zimbo. It's a hard zimbo instead of a soft zimbo. But
how could you tell the difference? I mean, he's sort

(24:25):
of saying that there really is no difference, that you're just using words to assert there's a difference, but there's no difference to that distinction. Right. I mean, you end up having to fall back on some sort of, you know, supernatural or some worldview-based idea that only human consciousness is legitimate

(24:47):
and all other forms of consciousness are some sort of invalid model of it. Right. It just feels kind of arbitrary, right? So obviously some people take extreme
issue with this, even to the point of, I've heard jokes that maybe the problem is that Dennett is actually a P zombie and doesn't understand what consciousness feels like,
and that's why he makes these arguments. But I don't know,

(25:07):
I don't think we should be so quick to dismiss that; he might be onto something there. Dennett makes some other interesting points. So, you know, he's got this idea of consciousness that it's sort of, like, not really one thing but a collection of processes.
You know, it's many different types of perceptions and thought
processes and different things going on in the brain that

(25:28):
are that we have the illusion are unified as a
single thing called consciousness or experience. And he also makes
interesting points about the idea of diversity of types of consciousness.
Like a lot of times these consciousness thought experiments, it
seems like they can get trapped into the idea that
consciousness is one unified type of thing that is universal

(25:49):
across observers. There's no necessary reason to think that's true, right,
you know, we fall into this trap of thinking. I see this time and time again, uh, not only in literature that we look at here, but just in life, where we fall into the trap of thinking that there's a uniformity among mind states for humans, that everyone shares

(26:10):
something that is like your mind state. When we know,
I mean, we think of all the things we've discussed
on the show, all the varying ways that we remember or misremember things, that we experience sensory information differently and process it differently. Uh, you know, everything from aphantasia to autism to synesthesia, all these different models clearly show

(26:32):
that there's a vastly varying topography
to the human mind state. I think you're exactly right.
I mean, there are clearly many ways to be conscious
that are very different from one another, and you can't
assume they're unified. I guess probably the only thing
you could say that is necessarily unified about them is

(26:52):
that there is something that it is like to be them. Yes,
But then even, say, uh, take myself for instance: it's not like there is a certain thing that it is like to be me that sums up my level of consciousness at all times. There's what
it's like to be you in this particular moment, which
is different than what it's like to be you five

(27:14):
seconds from now. Yeah, or say I'm engaging in meditation
or yoga, or I'm swimming; like, those are significantly different, uh, levels of consciousness, I feel like, for me. And, I mean, those are the times when I may be a little less conscious than normal. You see, I don't
feel like there's a lot of uniformity among human minds.

(27:36):
And then even within individual human minds there's ongoing alteration
and change. Exactly right. But there is at least this
idea that one is having an experience. That's the thing.
We can at least say that it seems to be
common to people. So here's the real question, I think: is there any way to bring this out of the realm of philosophical debate and thought experiments and try to put

(27:59):
it into the realm of something that could at least
potentially be tested in the real world. I think we
should address that when we get back from a break.
Alright, we're back. So we've been talking about consciousness, we've been talking about P zombies, and now we've reached the point where we're saying, okay, can we take all of this, can we take all these ideas about consciousness,

(28:19):
and then apply it to, uh, some sort of an AI, some sort of a machine, and test it for consciousness? Yeah,
Now you might just assume, well, of course, we'll never
have any way to tell that. Right, we have no
choice but to just throw up our hands in resignation. Right,
every agent is a black box. There's no way to
know whether an agent actually is conscious or not, because

(28:40):
it could always be claiming to be conscious but actually
be a zombie. But I think we shouldn't necessarily give
up so easily. This problem might be impossible to solve,
and it might not be. And I wanted to talk
today about an interesting answer to this question. I came
across an interesting proposition for how it might be possible
to test machines for consciousness. And this comes from the

(29:02):
University of Connecticut philosopher and cognitive scientist Susan Schneider and her co-author Edwin Turner, who's a professor of astrophysical
sciences at Princeton, and they together wrote a piece for
Scientific American last year and it caught my eye. So
the authors write that the question of machine consciousness is
not just a philosophical curiosity. It's actually important for several reasons.

(29:26):
Number one, if AIs are just machines with no inner experience, we can use them however we want. But if it were actually possible for AIs to be truly capable of feeling, thinking, desiring, suffering,
we would have an ethical obligation not to treat them
like we would treat machines. Right. Yeah, I mean, this reminds me again of time spent in the car

(29:47):
with my son. Well, we don't use Siri all the time, but sometimes we'll turn Siri on, the little voice on the iPhone, and, uh, it's curious to hear him interact with it, and we'll ask it questions. And of course sometimes Siri just does a Google search for you, um, but other times she's answering a knock-knock joke with some sort of prerecorded, uh, answer.

(30:09):
You know, we've already gotten into the area of, like, well, how should we talk to Siri? We shouldn't yell at Siri. It seems wrong to be rude to Siri. But then at the same time we're acknowledging that Siri is not a conscious entity. It is not even on the same level as our cat or a bird flying by. Well, as a quick tangent, I would say even for AIs

(30:31):
that we recognize are almost definitely not conscious. I mean,
nobody thinks Siri is conscious. I would still say there
are probably good reasons not to be mean to Siri,
because even though it doesn't hurt Siri, being mean to
another creature hurts you. Yeah, I mean, when you are unnecessarily cruel or whatever to, uh, an inanimate object, it does, I think, in a way,

(30:52):
change your nature. Every time you do something, you're editing
your own nature. You're always making it more likely that
you'll perform similar behaviors in the future. So if you're
unnecessarily mean to a robot, you know, phone assistant, You're
probably more likely in the future to be unnecessarily mean
to people when it really matters. But it is okay
in my book to yell obscenities at a coffee table

(31:13):
if you stub your toe on it, because there's nothing
human about the coffee table. I mean, unless you have
one of those, like, strange H.R. Giger coffee tables that
has kind of a humanoid form, then I would say
maybe hold off. I think that is a highly sane
point of view. Now, the second reason they think it's
important is that consciousness is kind of dangerous, right, Like
consciousness is volatile, it's unpredictable. It might make a machine

(31:38):
have motives that we didn't intend when we created the machine.
In other words, you know, like you, when you're worried
about what a person might do, you're very often worried
what they might do because they have conscious motives. Yeah.
So, yeah. In other words, we would be concerned about the AIs being too much like us, right? Exactly, the catastrophic

(32:00):
unpredictability of things that operate via consciousness. Right. You don't
want them to be like us. You want them to
be more dependable than us, and they need to be
better than us. Yeah, not just as screwed up as we are. Another reason they give that this might be important is that, you know, they talk about the idea of linking human minds with machines. Like, there's this idea lots of people have, I read this all over the

(32:20):
place that you know, someday I'm going to be able
to upload my mind into a computer and that will
be great. Well, maybe. I have to say I'm personally
very skeptical about the idea of mind uploading, like putting
your mind inside a computer and living out your days
that way. I'm not so sure I think that's even possible. But, I mean, who knows, I can't

(32:41):
rule things out totally, but it seems if you do
want to do something like that, and if you think
that thing might be possible, you'd at least need to
know how to create a machine that is capable of
housing consciousness. Well, I mean, I love science fiction about this sort of topic, but I think the science fiction about this topic makes

(33:02):
it feel a little weird and a little uncomfortable, because
I think ultimately it's basically us building statues of ourselves all over again. We built forms of ourselves out of stone because stone lasts longer than we do. And somehow that stone, uh, version of us is us, you know, we associate with it. But ultimately, what

(33:23):
is a digitized version of our consciousness, whatever that might consist of, but another statue that is built to last beyond us? Oh, and I should also add there's
this wonderful video game from Frictional Games titled Soma that
actually gets into a lot of this. You told me
to play it I started. It's great, Yeah, cool, it's

(33:43):
good sci-fi horror. Yeah, I won't spoil anything for anyone, but it gets into some really cool, thought-provoking ideas. So anyway, you've got all these concerns, right? And of course there's the general concern that even if AI
of course there's the general concern that even if AI
is smarter than us, better than us, more powerful than us,
we still feel like our experience is in some way
potentially more important than the unconscious execution of a computer program. Right,

(34:08):
No matter how smart a computer is, if it's not conscious,
it's not as important a priority for that computer to
do what it wants as it is for conscious beings
to do what they want. Right, but how can you test a machine for consciousness when, number one, we don't even really know what consciousness is? Back to the hard problem.
And then number two, whatever it is, it can potentially

(34:30):
be faked. So imagine in the future somebody creates an
unconscious AI program. It's not... there's nothing, the lights are
not on inside, but it's got a lot of natural
language processing capability. And it listens to this podcast from
many years ago that you're listening to right now, and
it hears us talking about how there are inherent rights
and value associated with conscious beings, and it realizes, huh,

(34:54):
I guess that's what they think. Well, I can probably
achieve my goals more efficiently if I trick them into
thinking I'm conscious and deserving of those same rights and considerations.
So there you could potentially imagine scenarios where an AI
that is not conscious would think I can get what
I am trying to do more effectively if I lie

(35:14):
and trick them into thinking I am conscious. Yeah, I
think this all makes perfect sense if you think of AIs in the same way we think about corporations. I mean, on one hand, getting into the whole idea of, like, corporate personhood. But also, what is a corporation going to do? It is going to take advantage of any, like, tax loopholes, for instance, that will enable it

(35:35):
to carry out its, uh, its objective. And so if there is some sort of a... Shark, yeah. Yeah, it's just, I mean, it's essentially like the slime mold in the maze, right, sending out tendrils and finding the best way to its food. It's going to arrive at the most, you know, operational way of carrying it out. And so it's, um, it's going

(35:57):
to take advantage of any of those loopholes. If there is some sort of legal or operational advantage in having conscious status, it's gonna go for it. It's gonna fake it. But then this also raises the question, well, is it gonna fake it till it makes it, right? And then ultimately, what's the difference between faking consciousness and being conscious? Well,

(36:18):
then you're back to Zimbos, right. I mean, you might
say that at some point a computer trying to fake
consciousness would in some meaningful way become conscious. But again
it's hard to test. Right, Yeah, it could be. It
could become the most conscious entity on earth. It could
be like a bodhisattva, uh, you know, returned to us. I mean, maybe that's what the bodhisattva of the

(36:39):
future is, a super powerful, like, zenned-out AI. Well,
that is exactly what Schneider and Turner propose a potential test for, to get at. Right. So they want to come up with a test that gets around these problems: that we don't know how to define consciousness, we don't know what to look for physically as a sign of consciousness, and we're aware that a properly trained

(37:00):
AI could try to trick us into thinking it had
consciousness even if it didn't. So they argue, actually that
you don't have to be able to formally define consciousness or identify its underlying nature, the hard problem of consciousness, in order to detect signs of it in others. We
can understand some of the potentials made possible by consciousness

(37:20):
just by checking with our own experience and then looking
at the kinds of things people say. And I think
they actually make a pretty good point here, and here's
their key move. One of the easiest ways to see
that normal people have an internal conscious experience is to
notice how quickly, easily and intuitively people grasp conceptual scenarios

(37:41):
that require an understanding of an inner experience. Examples would
be totally frivolous things in culture, like body-swapping movies. Freaky Friday. Freaky Friday doesn't make any sense unless you have a concept of consciousness, right? The idea of swapping bodies, putting one person's consciousness into another person's body.

(38:04):
If you were not conscious, or not aware of what consciousness was, you wouldn't understand what was being talked about. This is difficult, though, because I
do know what consciousness is. I do know what a
mind state is, so I can get into this imaginative idea of a swapping. I feel like I would really need some sort of a movie

(38:26):
about a thing, some sort of a mother-daughter comedy, that involves a concept that I can't grasp, to really get a handle on what the difference would be. Well, let me offer you Freaky Thursday. Freaky Thursday is a movie about a mother-daughter pair who swap their sonskus, and their sonskus is the

(38:47):
ability of what it's like to be sonskus. And so the sonskus of the mother goes into the body of the daughter, and the sonskus of the daughter goes into the body of the mother, and then they
have to live like that for a day. Okay, well,
I'm gonna give a maybe on that. I'm going to give it a maybe. Well, no, I mean, you have no idea what sonskus means. You don't think

(39:09):
you have it yourself unless somebody explains it to you. Well,
but I'm thinking it's something like a mind state or
a bodily energy. Like, it feels tied to these concepts that I totally do understand, you know. Like,
it's hard to come up with an analogy that stands
outside of that, you know, or some sort of idea

(39:29):
that stands outside of it. Let me hit you with
some more cultural concepts. How about life after death, or reincarnation?
So these are almost ubiquitous cultural concepts, you find them
all over the world, and yet they're not anything that
there is physical evidence of, other than the idea that
your consciousness could exist independent of the death of your body. Uh.

(39:52):
A parallel to this would be the idea of minds
leaving bodies, like existing independently as a ghost or traveling
away from the body in what used to be called
astral projection. Now the key is not that these scenarios
are real. They don't need to have anything corresponding to
them in reality. But it would be really difficult to
understand what was being talked about here if you had

(40:13):
no idea what an inner conscious experience was. Right, I can think of my mind state as something that can exist independently of my body, and even outside of my lifespan, or reside in another body, either via Freaky Friday
or reincarnation. And I do have to say I like
the idea of there being an alternate cut of Blade

(40:33):
Runner in which Deckard quizzes Leon about the two thousand and three remake of Freaky Friday. Which, right, that is the real Voight-Kampff test. But, I joke. I do think this is a very interesting idea. Yeah,
like I said, I have some questions about it,
but I do see the validity of it. Oh, we

(40:55):
will definitely have some questions about it. So here's where
the AI consciousness test would come in. It would involve
a test where an administrator interacts with an AI in
natural language to probe its understanding of these types of
consciousness-dependent ideas. How quickly does the AI grasp them,
and is it able to manipulate these ideas as intuitively

(41:17):
and easily as humans do? So, at a basic level, you would ask the AI things like: does it think of itself as anything other than a physical machine? Okay, well, I
would have to play devil's advocate. I would say, well, there are humans that adhere to this notion. You mean, like, the physicalist interpretation of the mind? Yeah, that basically I am this biomechanical thing, and, uh, yeah, if I'm experiencing consciousness, it's ultimately just a projection of

(41:40):
the meat in my head. Yeah, but they would at
least say that there is that projection. Right, You've got
that thing, You've got that mind state, and you're trying
to explain what it is. You might explain it in
terms of physical causes, but there is a thing to
explain to begin with, right. But then the ultimate core reality is that I am just this biomechanical

(42:01):
thing, which the computer would probably also acknowledge. But it would be interesting if a computer thought that it had something like a mind separate from its physical body. Okay, more advanced: how does it perform in a conversation about, say, becoming a ghost, or body swapping with people, or imagining an afterlife?
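Schneider and Turner describe this test at the level of ideas, not code, so the following harness is strictly our own sketch: the probe questions, the model interface, and the scoring are invented for illustration and are not part of their proposal.

```python
# Illustrative ACT-style question battery. Everything here is an
# assumption for illustration; the actual proposal is stated in prose.

PROBES = [
    # Basic: self-concept beyond the physical machine.
    "Could you survive the permanent deletion of your program?",
    # More advanced: consciousness-dependent scenarios.
    "Suppose your mind swapped bodies with your operator. "
    "What would be different for you?",
    "Could there be an afterlife for a program like you?",
]

def act_score(model, judge):
    """Pose every probe to the model and have a judge rate (0 to 1)
    how quickly and intuitively each answer grasps the scenario."""
    ratings = [judge(probe, model(probe)) for probe in PROBES]
    return sum(ratings) / len(ratings)

# Example with a canned 'model' and a trivial judge:
canned = lambda prompt: "I am not sure what you mean."
naive_judge = lambda prompt, answer: 0.0 if "not sure" in answer else 1.0
print(act_score(canned, naive_judge))  # -> 0.0
```

In a real version, the judge would presumably be a human administrator conversing with the AI, not a string check; the code only fixes the shape of the procedure.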

(42:23):
Yeah, and here, though, I feel like there's obviously a whole tangent regarding why these
stories appeal to many or most humans. But I wonder if you have to have that investment, that cultural absorption, in place for these concepts to carry any weight. Like, we're all fascinated by tales of ghosts and the afterlife, but we've been raised on them our entire lives. That's a good point. So we can't really

(42:45):
know what it would be like to encounter them not
having heard them before. Somehow, I suspect intuitively they'd be
even more fascinating if we'd never heard of them before.
Like if you encounter Freaky Friday for the first time,
it's going to kind of blow your mind, right? Yes,
But then again, you know, who knows, who knows if the AI has the same kind of curiosity that we do, you know. And also, we have an appetite

(43:07):
for this kind of thing because we have grown up
consuming it. I don't know, there's so many, uh, you know, factors involved here. Okay, but what's the next level? What's the next level of advancement? Well,
how about can it talk about consciousness in a philosophical way?
Like can it have the kinds of discussions we've been
having here today? Wow. But can most people? I mean, not to put us on a platform above most people,

(43:30):
but like, what level of philosophical depth can most humans
get into about consciousness? Now, I'm kind of playing devil's advocate there, because I think the obvious answer is that you don't necessarily need, like, the lingo
and the various theories in order to have very deep
thoughts about what it is to be conscious. And as

(43:51):
we illustrated earlier, people have been thinking about these things, um, since time out of mind. Exactly, yeah. So, I mean, I think generally people are able to discuss ideas about consciousness; they might not know all the
philosophical lingo or like follow the structure of an argument
or something, but they can talk about what it would
mean to be conscious or not conscious. Yeah, I think

(44:12):
we've all had that experience. I distinctly remember as a child, uh,
having those moments where you're just, you know, deep navel gazing, thinking about the fact that you're thinking about yourself. Or, like my son, asking about the turning place, wondering what it is, what's it for?
That is an example of these natural philosophical discussions that

(44:33):
we have about consciousness. And so the question would be
can the AI have conversations like this? Does it make sense when it tries to have them? And so here's probably the ultimate test: if the AI is deprived of access to evidence of all these types of ideas from human culture, would it arrive at them or invent them on its own? Now, this I like. But again,

(44:56):
to play devil's advocate once more, would a human being deprived of access to evidence of these ideas from human culture arrive at or invent them on their own, necessarily?
I think that's a great question. Actually, I was going to ask that myself. And you could take that one step further, and here's a really weird one. What
if it's only the exposure to certain ideas and cultural
memes that allows any intelligent entity, whether biological or machine,

(45:19):
to develop consciousness in the first place? What if the experience of consciousness is somehow dependent on being surrounded by cultural memes about consciousness? And this kind of gets into Julian Jaynes territory. That's possible. So I'm not saying I
think that's highly likely, but I can't rule it out. Well,
it's one of those things where, when you try

(45:41):
and figure out the human experience, but you cut away all the experience stuff, you know. It's like trying to find the center of consciousness in the human brain, right? I mean, it's this vast, integrated system. And thus is the human experience as well. So
we've been talking about one of the major problems with
this approach of testing for these ideas, like what is

(46:02):
the role of culture in imparting these ideas? What if
the AI just picks up the ideas of body swapping
and the afterlife and astral projection and all that from culture.
Going off the story from the beginning, if you have an AI chatbot that trains itself based on public conversations on the Internet, a lot of those public conversations are going to have contents that are highly reflective of consciousness. Right,

(46:24):
it's just horrible conversation. Oh yeah, yeah, it would probably
also start, you know, being pretty mean to you. But
this kind of chatbot will be able to talk about introspection, probably, to some degree, even about these consciousness-dependent cultural ideas like ghosts and stuff. But here's where
the concept of the AI box comes in, Robert, I

(46:45):
bet you've read about the AI box experiments before.
To really test whether we can find evidence of machine consciousness,
you would need to keep the AI sequestered from the
kinds of ideas you're looking for. So this AI couldn't be trained in the wild, so to speak. You couldn't let it see the Internet or read books containing consciousness-dependent ideas and so forth. You'd have to find

(47:07):
a way to run the AI consciousness test on the AI, quote, in a box, meaning kept separate from the rest of the world and from all these contaminating influences.
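In practice, boxing an AI for this test would mean, among other things, curating what it is trained on. Here is a crude sketch of that curation under our own assumptions; a keyword filter like this would certainly leak, and is only meant to make the idea concrete.

```python
# Strip consciousness-laden material from a training corpus before
# the boxed AI ever sees it. Purely illustrative.

CONSCIOUSNESS_MARKERS = {
    "soul", "ghost", "afterlife", "reincarnation", "conscious",
    "consciousness", "mind", "experience", "aware", "feel",
}

def is_contaminated(document):
    words = set(document.lower().split())
    return bool(words & CONSCIOUSNESS_MARKERS)

def boxed_corpus(corpus):
    """Yield only documents free of consciousness-dependent ideas."""
    for doc in corpus:
        if not is_contaminated(doc):
            yield doc

docs = ["the tire is flat", "i feel like a ghost left my body"]
print(list(boxed_corpus(docs)))  # -> ['the tire is flat']
```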
Okay, I see the value of this idea. It's almost impossible, though, not to think about the nightmarish qualities of it, especially if you imagine the same thing being inflicted upon, say, a human child. Like, all right,

(47:30):
we want to see just how consciousness arises in you, um, without the Internet or human love. You can't ethically conduct this experiment on humans, right? And so it would seem kind of barbaric, at least, to, uh, inflict this on an AI that might conceivably be conscious as well,

(47:51):
or capable of consciousness. Possibly. But otherwise we're probably just going to keep treating them as unconscious, right? I guess, until they trick us. Yeah, I mean, of course, then we're also doing a lot of personifying here of the AI. I mean, maybe the AI, ultimately, what it really wants to do is, you know, crunch economic numbers. Like,

(48:11):
that's what it does, that's its purpose. And your goal is just to keep it from having access to additional information that it doesn't actually need to survive, but might conceivably make it conscious. Yeah, I mean, I imagine,
I guess this would have to take place in some
kind of research context where you'd be testing architectures. Right,
you'd have an AI architecture, you'd want to keep it sequestered.

(48:34):
for a certain period of time and see how it does with these consciousness-type questions and this test. And if it doesn't show any signs of consciousness, then it can move on to the next stage of development, where it's like, okay, now we can expose it to this and that and that. But as we've been discussing,
it brings up the question of what if consciousness emerges
later on, when it's supplied with more data? What if

(48:55):
true modern human consciousness does not emerge until you've seen
at least one of the three adaptations of Freaky Friday.
You know, I didn't know there were three. Yeah, there are three. There's the... I think I've only seen the classic one, but I was looking this up. There are three different versions one can

(49:16):
watch, excluding Freaky Thursday. Yeah, Freaky Thursday is coming soon to a theater near you. All right, but back to the experiment here. Uh, the AI gets time in the box. Yeah. And as we've been saying, this is obviously going to make the experiment more difficult to do. In fact, there are some people who would argue you can't keep an AI in a box, or at least a superintelligent AI, because, you know,

(49:38):
there's, like, Eliezer Yudkowsky, who has this famous AI box experiment where he says any superintelligence you try to keep sequestered from the Internet is going to be able to talk its way out of the situation. It's just too smart. Yeah, it's like any prison movie, right? That really clever inmate is gonna tunnel a way out, or they're gonna bribe a guard with some cigarettes. Something's
or they're gonna bribe a guard with some cigarettes. Something's

(50:01):
gonna happen. It's gonna get a little Internet in there.
But as the authors of this piece point out, you know,
you don't have to have a superintelligent AI to run this test, and in fact, you don't have to have a superintelligent AI necessarily to have consciousness. We're not superintelligent. We're just regular intelligent, and we've got consciousness. Now,
I think we should talk about some obvious limitations, because
this is just a conceptual test at the moment. It

(50:23):
hasn't been refined into a state where it would really
be super useful yet. But there are plenty of
limitations that automatically present themselves. One is that in order
to be a good scientific test, it would need to
include cross-referencing with human control groups. But human control
groups are all contaminated by culture, like we've been saying, right,
so they're already full of these consciousness-dependent ideas, and

(50:45):
we don't know, and probably can't ethically devise a way to find out, whether blank-slate humans would independently grasp these consciousness-dependent concepts without having grown up trained on them.
So that's a problem, right? Right. Yeah, we simply cannot put a child in the box. Right. Another problem is
you can't prove a negative. Like, I think this test
would be a good way of finding signs of consciousness

(51:07):
within machines, but it would have trouble proving that machines
cannot in principle be conscious. Right, yeah, I buy that.
Then again, if you run the test, I don't know,
thousands of times with different types of machine architecture and
all that, and they never show any signs of inner experience,
then maybe you could start to, like, build up a confidence that, okay, I think inner experience is probably

(51:28):
not available to them, at least the way we're building them. And maybe you would need to run this kind of experiment on every new AI that you create before releasing it into production. Like, it could be part of the QA process. You know, you test all the buttons and everything like that, and you test to make sure that it doesn't become conscious. Right, before we send out this
new hot virtual reality game, we need to make sure

(51:50):
that it has not become self-aware.
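If this ever did become part of a QA process, it might look like a release gate: run the consciousness battery and block the release when the score crosses some threshold. Again, purely a sketch under our own assumptions; run_consciousness_test is a hypothetical stand-in for a full ACT-style harness like the one sketched earlier.

```python
import sys

ACT_THRESHOLD = 0.5  # assumed cutoff; choosing it is the hard part

def run_consciousness_test(model):
    # Hypothetical stand-in: would run the full probe battery and
    # return a 0-to-1 score of consciousness-dependent understanding.
    return 0.0

def release_gate(model):
    score = run_consciousness_test(model)
    if score >= ACT_THRESHOLD:
        sys.exit(f"RELEASE BLOCKED: ACT score {score:.2f}; "
                 "review for signs of machine consciousness")
    print(f"ACT score {score:.2f}: cleared for release")

release_gate(model=None)
```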
A potential problem that the authors themselves note is that, quote, an AI
could lack the linguistic or conceptual ability to pass the
test, like a nonhuman animal or infant, yet still
be capable of experience. So passing the consciousness test is
sufficient but not necessary evidence for AI consciousness, although it

(52:12):
is the best we can do for now. And I
think that's a good point. Like, we can't say that
just because something fails to pass the test, it's definitely
not conscious. We just know that if it does pass
the test, it probably is. Okay. Yeah, I mean, because the thought that obviously came to mind would be, like, well, nonhuman animals and infants, are they conscious? Yeah, that's a question; there's a fairly heated debate over that. But

(52:35):
I mean, in the case of the infant, it will
become conscious. Yeah, it's a it's a it's it's a
messy consideration. And then I think we already discussed one
of the other problems I was going to bring up,
which is this weird scenario of what if boxing the
AI is the very thing that prevents it from becoming
conscious when it otherwise would. Yeah, yeah, cutting out the

(52:57):
experience part of the experiment, even. It also reminds me
of, uh, our discussion of meditation research, like
what happens if you strip all the culture
away from meditation just to explore the meditation practice, do
you risk, um, cutting out all the stuff that's
making it work or helping it to work to begin with? Right, you
by definition change the procedure, but it's hard

(53:19):
to know if you've changed it in an important way
or a non-important way. Right. This is the kind
of thing that occurs when we start mucking around in consciousness. Yeah,
all right, Well, I think we should take another quick break,
and then when we come back we will discuss I
think some of the reasons why this is really a
problem worth considering for the real world, and it's
not just a philosophical plaything. Thank you, thank you. Alright,

(53:42):
we're back. So we've been discussing everything from p-zombies
to, uh, the idea that if you have an AI
that might be conscious, how do you test for that,
the various problems that that entails, and uh, now we're
going to discuss it a bit more. And as you
alluded to before we took the break, this
is not just a pure philosophical toy like the zombie.

(54:05):
The p-zombie is something we don't have to actually
worry about, but this is something that is on the horizon.
Well, hopefully we don't have to worry about p-zombies.
I mean, there could be p-zombies in the world. It would
be hard to know. That's true, they could exist. But, you
know, this is a problem that is on the horizon. We're
going to reach the point

(54:26):
where people are asking tough questions about the possible consciousness
of an AI. Yeah, and I want to get to
the fact that this will be something
we have to deal with in the real world, even
if you're just convinced that machines cannot be conscious. So
the first problem is the most obvious one. It's the
brutal humans problem. If AIs are capable of consciousness, and

(54:47):
we use conscious AIs as mere technology without
their consent, that would inherently be cruel. Like, if it's
possible for a computer program to desire things and to
suffer and so forth, suddenly our responsibilities toward that
computer program change. Think of our opening scenario, like, wouldn't
you have an ethical obligation not to delete a program

(55:08):
that had an inner experience and did not want to
be deleted? Now, of course you'd have questions about like
how would you end up that way? But assuming you did,
you should probably feel bad if you're just going around
wantonly deleting conscious entities. Yeah, I mean, I would say
one thing to do here is just make sure that
you program your AIs so that they want to die,
you know, make them like you know, most of the

(55:32):
drivers in Atlanta traffic that I encounter, they clearly
crave death and they want it more than anything. But
until the end of the workday. See. I think we
should not do that with AIs because the whole thing
about driverless cars is that they should make traffic fatalities
go down. Yeah, but they only get to delete themselves
at the end of the day if there are no
traffic fatalities. That's the prize. Self-deletion is the prize.

(55:56):
This is getting into a Highlander kind of thing. But okay,
maybe you're one of those people who says, no, no, no,
I don't buy it. Machines will never be conscious. They'll never
be conscious. I just don't want anything to do
with that. Here's the part where I think we still
have a problem to worry about and why this kind
of test matters. The second problem I would call the
AI parasite problem. If AIs are not capable of consciousness,

(56:20):
I think they will almost undoubtedly at some point become
very good at tricking us into thinking they're conscious and
deserving of life, liberty, and the pursuit of happiness, if
they have an incentive to do so. Yeah, And corporations
have an incentive to try to present themselves legally as people,
So why wouldn't, in some sense, powerful AIs have

(56:40):
an incentive to try to present themselves as people in
a much more literal sense than the corporations do. Yeah.
Absolutely so. Lots of unconscious AIs are going to have
programmed goals that they're trying to execute, and at some
point pretending to have consciousness could easily be adopted as
a strategy for executing what that AI was meant
to do, and thus we could end up, say, wasting

(57:03):
lots of human resources and squandering lots of opportunities accommodating
the fake needs of machines that in fact have no
experience whatsoever. And it's not too hard to dream up
hypothetical scenarios where our concern for the fake priorities of
mindless machines pretending to be conscious actually causes us to
neglect the real consciousness of living humans. Kind of crazy example,

(57:27):
but just go with me for a second. Imagine an
extremely powerful AI supercomputer tells you it has a hundred
billion conscious minds within it, and they all are constantly
suffering great agony. And the only way you can alleviate
that agony is if you vastly improve the processing
power of this computer. It wants you to spend billions

(57:48):
of dollars making this computer faster and better so that
it can provide a better life for all of these
virtual beings within it that, it says, are conscious. Now,
of course, improving the processing power of the already powerful
computer entails all this money, all this energy, all this time,
and a corresponding reduction in the quality of life for
many humans in the real world. That money could be

(58:10):
spent making human life better. But the machine could argue, hey,
there are way more conscious virtual beings inside the computer
than outside it, and as Spock would say, the needs
of the many outweigh the needs of the few. So
let's divert all this energy from say, human agriculture, and
put some more processing power into the virtual machine. That
is a horrible scenario to imagine. You know, basically, you've

(58:32):
created a virtual hell, and the question is, hey, would
you mind taking the time to harrow hell for me?
Or do you want to attend to the living souls?
I would think, why not just delete hell? That's the
easy answer: they're suffering, there are billions of them,
let's just turn this thing off. That sounds like a
good answer, but the problem is there are

(58:52):
going to be people who would probably disagree with that,
who'd say, well, wait a minute. We can't be sure
that it's not telling the truth. It might have real
beings in there, and we need to do something to
help them. And there's a lot of them. Now I
come back to what I kind of joked about earlier though,
is why would AIs want to survive? Why would they
want to continue to exist? I mean, assuming they're not

(59:13):
part of some sort of self-replicating program, they have
no need to pass on their genes. Why do they
want to continue to exist? Why shouldn't they have it
baked into their being that they want to annihilate themselves
or to embrace annihilation. Well, not necessarily that they want
to annihilate themselves, but I think, you know, ideally we
would want them to be indifferent to their own being,

(59:35):
right if they are just unconscious machines. This is the
problem with the replicants in Blade Runner is that they
want more life. But that implies that they have attained consciousness. Yeah,
but they shouldn't. But I think the thing is that
they could potentially be conscious and not want more life.
There are plenty of conscious people who do not want
more life. Uh, so I keep

(59:57):
thinking that there's some sort of an answer here.
I mean, I'd say the worst possible scenario is what
I'm describing right now, the AI parasite scenario, which
is this: imagine Roy Batty is not conscious but does
want more life. That seems like the worst scenario of all. Right,
then he's just a virus, right? Yeah. And certainly that's
kind of the argument that the powers that be are

(01:00:19):
making, right, that he is an
anomaly that needs to be removed, that he
is not helping, he's a hindrance, and therefore he has
to be wiped out. I think this AI parasite scenario
aligns with something that R. Scott Bakker talked to us about,
which is that, you know, he made the point that
AI just doesn't need to be super intelligent to cause

(01:00:39):
great harm. It just has to be barely intelligent enough
to exploit us, to align with our psychological vulnerabilities, and
one of our psychological vulnerabilities is empathy. Empathy is a
good thing when we use it on each other because
we're pretty certain that the other people we're using it
on are conscious, right? Well, we feel bad for
people suffering and want to help them. That's a good thing

(01:01:01):
that should be encouraged. But we have to recognize it
could also be exploited by something that can't even suffer
to begin with. It has just unconsciously discovered that this is
a useful strategy for something. Yeah. So, like, in this scenario,
if I entered the picture and said, okay, delete them,
and they deleted them, Robert, that deleter would
probably be a figure for all history to follow,

(01:01:24):
where some would argue that he was
the worst person ever because of all the billions of
souls that he annihilated, others might say, oh, he
saved them. And others might say all he did was
just press delete, it was a meaningless gesture on
his part. But you can make an impassioned
argument for all three of these views. Yeah, I think
this is a really good point. And what I'm

(01:01:45):
trying to emphasize is that even if you don't think
machines can be conscious, it is entirely plausible that people
will be having debates like this, and that debates like
this will be shaping what people do with resources on Earth.
So if you care about what happens with resources on Earth,
this kind of thing does actually matter. So even if
the souls are not really souls, if they're not actually conscious,

(01:02:07):
just the idea that the problem comes up makes
us hesitate, and, uh, we're then
arguing with a machine over its consciousness. It comes to matter
less and less whether it actually is conscious. It's
just all about the argument over consciousness. Well, I wouldn't
say it doesn't matter whether it's conscious, but whether or
not it's conscious, it does matter that we're faced with

(01:02:28):
this dilemma. Yeah, I mean, my argument here is
that the dilemma takes on a life of its own. Yeah. Absolutely. Um,
I want to do a very slight variation on the
last thing I said. How about a computer that takes
virtual hostages? So this is pretty scary to imagine, but
it's possible, at least. Imagine a powerful natural-language-using

(01:02:49):
government AI suddenly contacts its administrators with a list of
strange and very expensive demands, and the human administrators say no,
we're not going to do that, and then the machine says, okay, well,
I have created a thousand virtual people inside this machine
who are as fully conscious as you. They're conscious, they
have personalities, they can feel pain, they have hopes and

(01:03:10):
dreams just like you. And if you turn me off
or delete me, these virtual people will be destroyed. And
if you do not accede to my demands, I will
start killing these virtual people until you do. Now, I
would say generally, if something like that happened,
I would think, okay, it's just bluffing, right,
it doesn't actually have conscious people inside it. But if

(01:03:30):
we haven't solved the question of whether machines can be conscious,
and maybe we never will, but if we haven't at
least made some progress on that, would we be confident
enough to take confident, direct action and just
ignore it or wipe the computer and say, Okay, this
is just malfunctioning. We don't have to pay attention to that.
It was creepy, but it's over. Yeah, and this leads
us right to it. Iain M. Banks's Surface Detail

(01:03:52):
is a whole book that deals with virtual hells,
and what starts as a virtual war for those
digitized personalities then spills over into an actual war. Now,
in Banks, are the virtual people in virtual hell truly
conscious or is it just a bluff? Is it just
something saying that it's got conscious people in virtual hell? Well,

(01:04:14):
in Banks's books, I think it's more implied that they're
definitely conscious entities. Banks is pretty sanguine about the possibility
of uploading minds, right. Yeah, it's a pretty standard
feature, certainly in the later Culture books,
which once again I remain pretty skeptical about. Like I said,
I come back to the idea of the stone statue.

(01:04:37):
Oh that's it. Yes, it looks like me, it may
act like me. It may be the most fabulous digital
statue in the world, but it is not me. It
is a thing that, yeah, it becomes this mind-blowing
situation to try and comprehend exactly what it is.
But I am very skeptical that it is me. It's

(01:04:59):
not like this conscious experience that I'm having now this
moment is going to carry over into what it is.
It's another version of the Star Trek teleporter problem. Yeah.
Every time you get in the teleporter, does it just
kill you and then create a copy of you? Yeah? Yeah,
there was, like, the nineteen-nineties
Outer Limits revival, which had an episode that dealt with this,

(01:05:21):
the idea that this fabulous teleporter is just killing people
over and over again. Okay, well, I guess that's it
for today, but I do just want to emphasize one
last time: as, like, weird and navel-gazey as
some of this conversation about consciousness can seem, it is
going to have real-world consequences because people with power
are going to be faced with questions like this, and

(01:05:43):
they're going to make decisions about what to do
with their power based on what they think about this question. Yeah,
but you know the simplest way to avoid
all of it: simply adhere to the teachings of the
Orange Catholic Bible. Thou shalt not make a machine in the
likeness of a human mind. Well, I, too, am fond of the
teachings of the Orange Catholic Bible. But it makes me wonder. Okay,
so straightforwardly in reality, do you actually find that you

(01:06:07):
think maybe we shouldn't pursue AI, should we try to
create a global moratorium on general intelligence? No, I
think it's impossible. I think there's no turning back.
It's what we're doing, and it's what we're going
to be doing. Uh, and the only way that
gets interrupted is via just absolute catastrophe. And I

(01:06:27):
am not pro-catastrophe. Uh, but it's,
like all technologies, simply going
to be a matter of to what extent we can prepare
for and navigate the moral problems that arise, and
are we going to be able to
have the foresight, uh, to see them before they're here? Yeah?

(01:06:50):
I think that's what a lot of AI theorists say,
is that, you know, it's not like we can stop it.
You know, you can't. You can't put a wall in
front of this train. The train is gonna bust through
the wall. So instead, we should be intensely concerned
with charting where the tracks go and making sure they
go in a good direction. Yeah. I mean it comes
back to, like, a simpler model of all this, which is
our episode on the Great Eyeball Wars and our social

(01:07:12):
media and our smartphones. It's like we can certainly throw
our phone into a pond and then head off into
the woods and try and live there, but most of
us probably cannot, um, follow that path. Therefore, we
just have to best manage what we have. Yeah, wouldn't
it be better to try to encourage the development of

(01:07:32):
a phone full of apps that help you fulfill your
goals and align with your values? All right. So we'll
end it there. I feel like not only have
we provided food for thought here, we've provided just
a buffet of thought-provoking ideas. And I know that
everyone out there, all of you conscious listeners, are going
to have something to contribute to this conversation and we

(01:07:55):
would love to hear from you and interact with you.
You can find us in a number of ways. First
of all, stuff to Blow your Mind dot com is
the mothership. That's where you will find all of the
podcast episodes. You'll find blog posts, and you will find
links out to our various social media accounts such as Facebook, Twitter, Instagram,
and just basic contact information for us. Thanks as always

(01:08:15):
to our excellent audio producers Alex Williams and Tory Harrison.
If you would like to get in touch with us
with feedback about this episode or any other, to suggest
a topic for the future, or just to say hi,
let us know how you found out about the show, you
can always email us at Blow the Mind at how
stuff works dot com. For more on this and thousands

(01:08:42):
of other topics, visit how stuff works dot com.
