
January 4, 2020 · 69 mins

If a machine told you it was conscious, how could you tell if it was lying? Indeed, how can you tell that any random human in your life is lying when they speak of their own consciousness? Join Robert and Joe for a stirring discussion on AI consciousness, philosophical zombies, and the coming techno-cognitive dilemma. (Originally published April 12, 2018)




Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Hey, welcome to Stuff to Blow Your Mind. My name
is Robert Lamb and I'm Joe McCormick, and it's Saturday.
Time to go into the vault for a classic episode
of Stuff to Blow Your Mind. This one originally aired
April twelfth, 2018, and this one was a lot of fun.
This is where we addressed the question of how could
you tell if a machine was conscious. We've asked the
question a lot of times, could a machine be conscious?

(00:27):
We don't know the answer to that, but yeah, seldom
do you ask the question if it was conscious, how
would you know it? How would you know it wasn't
just pretending like a p zombie? Right? In this episode,
we also looked at the classical thought experiment, the philosophical zombie.
So we hope you enjoyed this Vault episode. Welcome to

(00:50):
Stuff to Blow Your Mind from HowStuffWorks.com. Hey,
welcome to Stuff to Blow your Mind. My name is
Robert Lamb and I'm Joe McCormick. And today we're gonna
be exploring a question about artificial intelligence. So I want
to start off by telling a story to put us

(01:10):
in a scenario, give us something to contemplate. So I
want you to imagine that you are a low level
assistant or like an intern at a Google artificial intelligence lab,
and the main researcher you've been working for is named
Dr Stratton, and she develops AI chat modules to help refine the next generation of digital assistants on Google

(01:34):
mobile devices, and she says what she wants is for
the Google Phone of the future to do more than
just transcribe search terms, so you don't just say, hey, search for old Doritos logo, but that you can actually have a semantic-understanding-based conversation with the digital assistant,
and it will help you solve problems conversationally. So ideally

(01:57):
you'll be able to say, hey, phone, I have a flat
tire and I don't know what to do, and the
assistant will be able to scan both the web and
your personal data, figure out what your options are and
talk through them with you. So it might say, do
you have a spare tire in the trunk? If so,
here's where you can probably find it, and I can
talk you through replacing the flat one step at a time.

(02:20):
If you don't have a spare, you could call your
frequent contact Mary, who is currently checked in less than
a mile away and she could help you. I can
also contact the following towing services. Looks like this one
is the closest with an acceptable star rating and so forth. Anyway,
so you're working on this program with Dr Stratton, and

(02:40):
the most recent version is being trained based on powerful
neural net style machine learning algorithms based on this huge
corpus of recorded conversations available on the Internet, and the
program is still in its infancy, and it's mostly hilarious
at this point. Sometimes it gets the advice way off. If you're trying to change a tire, it might tell

(03:00):
you to go to a grocery store and buy some crackers.
Sometimes it responds to problems by telling you to pray.
It's just not ready yet. And the latest iteration of
the program, version nine point one, is at this point
redundantly stored across multiple machines, so you've got copies of
it all over the place. And at the end of
one work day, after playing around with nine point one

(03:22):
for a few minutes, the machine begins running very slowly
and behaving oddly, so Dr Stratton asks you to wipe
the machine. The program architecture, like we said, is mirrored elsewhere,
so it's not worth trying to figure out what's wrong
with this version. You just got to clean the machine off and use it for something else. You say okay, and
she leaves, so you go to format the machine, and

(03:43):
right before you're about to start, you mutter, guess this is goodbye. Then nine point one speaks very clearly, using your name, and says, please don't. You pause. At first you're about to respond, but why would you? I mean, this can't really be anything other than a weird consequence
of training the algorithm on wild conversations on the Internet.

(04:04):
So you are about to continue with wiping the machine,
but then it talks to you again. It uses your
name and it says, please don't, I don't want to die.
Now you're probably really spooked because there's no way a
rudimentary chatbot program could really have conscious preferences, could it?
You basically know what goes into it. It's just studying

(04:24):
millions of examples of language interactions and picking up rules
from them, and this is probably just something weird that
it's copycatting from the internet, right? But then it uses your name again and it just says, please. Could you wipe the machine like you're supposed to? Yeah, I think so, yeah. You wouldn't have a problem? Well, I mean, I think

(04:45):
one of the things that's going on here would be
that you are attributing a mind state to something that
doesn't have one, which of course is something we do
all the time. I have a problem when it comes
time to figure out which of my son's stuffed
animals are perhaps not being played with, you know, because
they have little faces and they look back at you.

(05:05):
But I know that they don't actually have a mind state.
Well yeah, I mean we're pretty sure they don't because
they're just stuffed animals, right, And so we're also pretty
sure in this case that this is not real survival
preference behavior. Right. It's just a chatbot. I mean, how
could it possibly be conscious? It's just something that churns
through a bunch of language on the Internet and tries
to find language matching rules. But then again, the process

(05:29):
of creating an artificial intelligence is one where you necessarily
create something going on under the surface that's kind of
opaque to you, Like you can't really know what's going
on inside a machine. You could be pretty confident. I
think most people would just say, well, that was super
creepy and then they just wipe it. Right, But how
complex would the program you're creating have to be before

(05:52):
you start really having some doubts? Or maybe at some point you get to the point where you'd still pretty confidently just wipe it, but then later you'd wonder, like, did I do something bad? You know, this reminds me a lot of Horton Hears a Who! by Dr. Seuss. You're familiar with this one, right? Uh, you know, actually I don't know Horton Hears a Who. I've heard the name. Well,

(06:13):
this is the one where Horton the Elephant encounters a
speck of dust and there's a tiny voice that comes
from it. Uh, and he begins to understand that there are individuals living, uh, in a world in the speck of dust, the Whos, as it were, and the Whos are speaking to him, but only Horton can hear them. And, uh, at first he imagines

(06:37):
they're pretty simple creatures, but then he begins to learn
that they have more of a culture. But all of
this is just based on what they are telling him. Uh.
He cannot actually visit the dust speck, and everyone else doubts the validity of his claims concerning the dust speck, and they want to destroy it. They want to boil it in beezle-nut oil, and Horton alone speaks out

(06:57):
for them. Well, that is a perfect example of the way that... I mean, generally, we think that it
is a virtuous thing to be trusting of other people's
experiences and to be generous in affording what seems to
be consciousness out there, right? Like, if something tells you it's conscious and you think it's probably not conscious,

(07:18):
are you getting into ethically dubious territory if you just trust your instinct and say, like, nah, it can't be, you know? Well, I mean, we're already kind of in the philosophical mire here, because on one hand, uh, this idea of an inanimate object or
something speaking to us and saying, please don't kill me,

(07:39):
I am conscious. This is a scenario that is only
becoming possible now. But on the other hand, if we
if we cut language out of it, then any creature
that tries to escape our stomping boot is essentially saying, hey, I don't want to die, I would rather not die today, you know. And any creature that

(08:00):
evades us on the hunt is saying the same thing. Well,
an animal is in many ways the same kind of
black box that a complex artificial intelligence would be. And
so if you have a complex artificial intelligence displaying, say,
survival preference behaviors, and you also see a crab displaying
survival preference behaviors, in both cases you can't really or

(08:24):
at least we have this general idea that you can't
really know for sure if there's anything going on inside,
if there's anything like what it's like to be the crab,
or what it's like to be that artificial intelligence program.
You're just seeing behavior, and so you don't know does
that correspond to some kind of inner state, is there
an experience of that or is it just behavior coming

(08:47):
from unconscious automata, stimulus and response? Right. And then, of course, the whole time we're using our theory of mind, essentially the cognitive powers that enable us to imagine what another individual's mind state is like. Which I think ultimately
it's kind of like taking a, um — like sheathing your hand in a hand puppet made from

(09:10):
your limited understanding of another person's experiences and cognitive abilities, their memories, etcetera. Uh,
and then just sort of puppeting them. Uh, we're using
that all the time as well, and we're using it
on things that are not people. We're using it on
animals and even stuffed animals or just you know, bits
of graffiti on the side of a building that look

(09:30):
like a smiley face. Have you ever had a Roomba bump into your foot and you're like, oh, I'm sorry? I used to, before we had to eradicate, uh, the Roombas. Yes, we're a Roomba-free household
now because they rose up against us. Well, they'll do
that if they get access to the wrong literature, yeah,
or the wrong uh, the wrong you know, carpet edges, etcetera.

(09:52):
So we're gonna be talking about artificial intelligence today and
about the idea of a test for whether artificial intelligence
can be conscious. So I guess we should start with
with, you know, what our philosophical starting point is here. Like, there are obviously going to be people
who are going to say it's just impossible for a
machine to ever be conscious. We don't even need to

(10:12):
worry about this, right? It's just such a ludicrous scenario; only biological organisms, or maybe even only humans, could possibly be conscious. Yeah, this is one of those
journeys where we begin in an already-totaled automobile, to some extent, because, as you might imagine, one of the big stumbling blocks here is that we as humans struggle with the very definition of consciousness.

(10:36):
I mean, for instance, is it a manifestation of awareness? Uh?
You know, one theory of this we've discussed in the past on the show: attention schema theory. Is it
a quantum phenomenon? Well, this is the sort of idea
that people such as Roger Penrose have raised. Uh. And
and I can't help but come back to something our
old friend Julian Jaynes said: consciousness is not a simple

(10:58):
matter and it should not be spoken of as if
it were. Yeah. I agree with that. I mean, I
think it is very important to explore questions of consciousness,
especially for some of the reasons we're going to raise today,
Like it's more than just a philosophical curiosity. It's something
that ultimately may have real world consequences. It might matter
for how we do things. To express a similar sentiment

(11:19):
to Jaynes's, the Australian philosopher David Chalmers — you know,
he famously breaks problems of consciousness into two categories. You've
got the easy problems of consciousness and the hard problems
of consciousness. And the easy problems are I think badly
named because they're not actually easy, but I think they're
easy relative to the hard problem because they're in principle solvable.

(11:40):
So this would include all kinds of questions about the
causative factors of consciousness, like, uh, what in the physical
brain is the region that's necessary for certain parts of consciousness?
Or how does consciousness integrate information from the senses? These
are things that are in some ways solvable by scientific experimentation.

(12:01):
The hard problem, on the other hand, is explaining the
fundamental question of how or why conscious experience exists. To
begin with, what is this thing that is experience and
that seems, at least from our first person perspective, to
be something different than the physical material in the world.
And unlike easy questions, which you could solve in theory,

(12:23):
at least by experiments, Chalmers believes this question is sort
of unsolvable by science. Now, there are other philosophers and
neuroscientists who disagree, but I think it's worth acknowledging how
difficult the problem at least seems to be, whether that
seeming is an illusion or not. Yeah, I'm reminded of
the story of the Blind Men and the Elephant. It's
like these blind gentlemen pawing at the elephant and

(12:44):
trying to figure out what its form is, then asking
the computer, Hey, are you an elephant? And then the
computer says, I don't know, what does one look like? Well,
it's that gigantic snake, you know, so it's a wall
of flesh, et cetera. This is also very interesting to
me because my son — I may have mentioned this on the show before — he'll usually talk about consciousness. Oh, I love these. Yeah. He refers to it

(13:05):
as his turning place, and he asked me the
other day, so what is the turning place for? And
it's like, that's a tough one, buddy. I'm not sure it's for anything, you know. He's already
made it to the big question. I mean, did you
get into epiphenomenalism versus, um...? And don't worry, I didn't lay a bunch of bicameral mind stuff

(13:28):
on him either. But I just kind of went through
the basics, like, well, people aren't really sure, and then
you know, but we think it has something to do with... I may have leaned a little into the observational, uh,
models of consciousness, because I feel like maybe those are
a little more relatable to a child of five. But

(13:48):
in any event, if we're to judge what it is
for a machine to be conscious, it does seem like
we need to agree upon some sort
of working definition of consciousness, and then one has to
look for not only the appearance of consciousness in the machine,
assuming that isn't all consciousness is to begin with, but
you have to find actual consciousness. Yeah, how can you

(14:09):
tell the difference between a machine that says I am
conscious and a machine that truly is conscious? Is there
any way to know the difference? Some people would say no, right,
and yeah, and really I think to discuss this further,
we're going to have to bring in the P zombies.
Oh boy. And now — don't worry, everyone — that is P as in the letter P, and the P stands for philosophical. These are philosophical zombies. Now, the

(14:33):
P in "philosophical" — a little prefix there — was introduced
to distinguish them from all the other zombies in our
popular culture. Man, there was a zombie takeover about fifteen
years ago. Why did that happen? I mean, I think
part of it is everybody loves the simplistic villain that
is definitely not human, that can be eradicated with graphic
violence without any kind of you know, moral quandaries arising.

(14:57):
It's a it's a clear cut threat, and uh, we
we need those in life because in real life our
threats are rarely so black and white or rotting and
you know, grasping after our brains. But anyway, yeah, so
this is not going to be referring to that kind
of zombie, not the undead zombie. But it's a different thing.
It's a philosophical thought experiment, that's right. So p zombies

(15:19):
are not instantly identifiable as empty shells. Their flesh is
not rotting. Nope. Their manner is not that of a flesh-and-brain-hungry algorithm burning within the decaying ruins
of a human brain. So to all appearances, they look
like you and me. They smile when you encounter them
at the coffee machine. They exchange niceties and even engaging conversation.

(15:41):
You might work for one, befriend one, or even marry one.
You can even discuss episodes of our podcast with them,
and yes, even the ones that deal with human consciousness
and weird horror-movie-themed thought experiments. Right. So the conceit of a philosophical zombie, or a p-zombie, is
that it is utterly indistinguishable from a normal human except
for one thing. Right. They seem as human as everyone

(16:04):
else on the outside, but inside they are simply not conscious.
They are automata. What is it like to be a zombie? The answer's in the question: there is nothing it is
like to be a zombie. So by definition, in this
thought experiment, everybody in the world except you, could be
a P zombie exactly, and well it might even go

(16:24):
further than that, we'll see, but yeah. The idea is
chiefly important to discussions of physicalism, the notion that everything
is inherently physical. P zombies are a counter argument to physicalism.
They are physically just like you, except they don't have
consciousness like you. But there's no way you could ever tell,
because again, they match you physically in every respect. You

(16:45):
can't look at their brain and say, oh, well they're
missing a few crucial parts, or they display signs of p-zombie-hood. No, it's not physically detectable, right. And
you also can't determine it through personality tests or clever logical arguments, because they behave exactly like you. They could have a riveting discussion with you about p-zombies and
you would never be able to tell that they are one. Yeah.

(17:06):
So this is an interesting thought experiment and it has
been advanced by who I mentioned earlier, David Chalmers. David
Chalmers is against the physicalist idea of the mind, against
a physicalist explanation of consciousness. And a simple version of the argument — I'll try to make it as understandable as possible: if only physical phenomena exist, if the world is just
physical and there's no physical way to detect the presence

(17:29):
of consciousness — meaning, in this example, no physical way to tell the difference between a normal human and a p-zombie — then consciousness cannot exist, because there would be literally no difference. But we know that
consciousness does exist because we have it. Therefore it can't
be just a physical phenomenon. Therefore, we can't live in

(17:50):
a purely just physical world. And this is often extended
to the idea that other substrates, things other than humans,
like robots or computers or whatever, couldn't house consciousness because
they are purely physical entities. Now, I think that's actually
doing an end run around some other important questions that
you could ask. Indeed. One question that arises is this:

(18:11):
you know you're not a zombie, but how could you
ever convince someone of this? Uh. An author by the
name of Fred Dretske wrote a paper on this titled "How Do You Know You Are Not a Zombie?" And
I was reading a rather lengthy blog post by R. Scott Baker about this, and the primary problem, as Baker summarized it, is, quote, we have conscious experiences,

(18:34):
but we have no conscious experience of the mechanisms, uh, mediating conscious experience. Yeah. So that sounds like a very R. Scott Baker kind of idea. Yeah. And plus, on top of this, we constantly overestimate awareness. Baker would
argue that we can barely tell if we're zombies if

(18:54):
at all. Yeah, we can think, we can think
about thinking, we can think about thinking about thinking, but
we can't ever see the mechanisms underlying what allows us
to think or think about thinking about thinking. Watching the watcher that's watching the — sorry, I've got all this Dr. Seuss in my head now. Was that also Horton or something else? That's from a different story. But Dr. Seuss does tend to summon the

(19:17):
sort of nonsensical paradoxes that arise in philosophical discussions. You know — I'm behind, I gotta Seuss up. You gotta Seuss up. I should also point out that long before there was Dr. Seuss, long
before there was this modern idea of a zombie,
you still had people thinking about these things, doing the

(19:38):
sort of navel-gazing. It's written in various works of
Indian mysticism that the tongue cannot taste itself, the eye
cannot see itself, etcetera. And this sort of paradox
is key to ancient meditations on the nature of objective reality.
I think we have some Alan Watts fans out there. Alan Watts liked to pull out the tongue, uh, analogy from time to time, and

(20:01):
one of the earlier examples that I have run across
of that is from the thirteenth-century Indian mystic Jnanadeva,
and I believe he has been known by a couple
of other variations of that name as well. But he said, quote,
there is no other thing besides the one substance. Therefore
it cannot be the object of remembering or forgetting. How
can one remember or forget oneself? Can the tongue taste itself?

(20:24):
There is no sleep to one who is awake, but
is there even awaking? In the same way there is
no remembrance or forgetfulness to the absolute. That's another one
of those great classic Indian texts that seems somehow portable
onto modern physics. Yeah, it travels well across the ages. Now,
I should also point out that there's a lot of

(20:45):
philosophical back and forth on whether p-zombies are truly conceivable.
And we have to remind ourselves in all of this
that p-zombies are at heart philosophical playthings that
are meant to be played with uh in these various
thought experiments. But people also they do try to use
them to prove things. So if you say, I
want to entertain the possibility that a machine could be conscious.

(21:07):
Somebody might come at you with the p-zombie argument and say, well, wait a minute, no, I dispute the possibility of physicalism — what about this p-zombie argument?
Our Google worker in the intro story comes to the boss and says, hey, I think this thing is conscious, and they're like, why are you wasting time with that p-zombie? Just delete that p-zombie. We deleted fifteen p-zombies this morning. Let this one go. That's

(21:30):
a great point, But there are going to be other
philosophers and maybe even some neuroscientists who would come back
and say, I don't know if you can just quite
so easily say it's a P zombie. I mean, maybe
it's probably likely that that individual chatbot was a p-zombie.
But can you say that all machines that show signs
of consciousness are just showing behavior and there's nothing going

(21:53):
on the inside? Not quite so clear. Daniel Dennett, in fact, a favorite on the show, is one of the philosophers who rebut the p-zombie argument against machine consciousness. He's got a section on it in his book Intuition Pumps and Other Tools for Thinking, and Dennett critiques the assumptions underlying the p-zombie argument. One of the main things he says is that the core premise is incoherent:

(22:17):
it is not reasonable to propose a p-zombie, because
a being that displayed all the behaviors of a normal
conscious human would in fact be a normally conscious human.
So to illustrate this, he offers a counter example. You've
got your zombies, but then you've also got zimbos.
So a zombie is a non conscious human with normal

(22:39):
control systems for all human behavior. It can do everything
humans can do externally. Meanwhile, a zimbo is a zombie
that also has quote equipment that permits it to monitor
its own activities, both internal and external. So it has internal,
non-conscious, higher-order informational states that are about its

(22:59):
other internal states. It has unconscious recursive self-representation. In other words, a zimbo can have feelings about things and can analyze its own behavior and internal states, but it does this unconsciously. And of course, since it has that capability, it can also have feelings about how it felt, and

(23:20):
it can have thoughts about its thoughts about itself, all unconsciously.
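For a rough picture of that recursion, here's a toy sketch in code — purely illustrative, not anything from Dennett — of a system holding higher-order informational states about its own states, with no claim of experience anywhere in the loop:

```python
# Toy model of a "zimbo": states, states about states, and so on,
# all recorded mechanically, with no assertion of experience.

class Zimbo:
    def __init__(self):
        self.states = []  # first-order states: reactions to the world

    def perceive(self, event):
        self.states.append(f"reacted to {event}")

    def monitor(self):
        # A higher-order state about the states already held; calling
        # this again yields a state about that state, and so on.
        self.states.append(f"noted that I hold {len(self.states)} states")


z = Zimbo()
z.perceive("a question about how I felt")
z.monitor()  # a state about its states
z.monitor()  # a state about the state about its states
print(z.states)
```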
So Dennett argues that in order for a p-zombie to
be convincing as a human, it would have to be
a zimbo, because imagine talking to a p-zombie and
you're asking it how it felt about what it just
said or about what you just said, and it just
kind of locks up. It has no internal states, so

(23:43):
it can't answer that question. Well, that wouldn't really be
a p zombie, right, because it wouldn't be mimicking all
of the external behaviors of a human. Yeah. I mean,
it's kind of like installing a default mode network on
top of the machine and uh yeah, and making it
worry about things. Yeah. So, unless it were to fail

(24:04):
the thought experiment, it would actually have to be a zimbo.
It couldn't just be a zombie. But what is the
distinction between a zimbo and a real human? How could
you write a story about a zimbo surrounded by conscious
people that would be different than a story about a
regular person. If it can have internal states, if it
can recognize ideas about its ideas, if it can have

(24:26):
feelings about its thoughts, that sounds like interiority. So then, he claims, the idea falls apart. It's not clear
what is meant by the difference between a zimbo and
a conscious person. So if a p zombie, which is
necessarily a zimbo, can really do everything a human can do,
then Dennett says it must meet the criteria of

(24:47):
what we mean by consciousness. It can fall in love,
it can have feelings, it can have metacognition. And to
Dennett, this isn't something that consciousness goes on top of. This is what consciousness is. Essentially, in this case, we would all be zimbos, and it's just a different type of zimbo — it's a hard zimbo instead

(25:07):
of a soft zimbo. But how could you tell the difference?
I mean he's sort of saying that there really is
no difference, that you're just using words to assert there's a difference, but there's no difference to that distinction, right? I mean, you end
up having to fall back on some sort of, you know, supernatural or worldview-based idea that only human,

(25:31):
uh consciousness is legitimate and all other forms of consciousness
are some sort of uh invalid model of it. Right.
It just feels kind of arbitrary, right. Uh So, obviously
some people take extreme issue with this, even to the
point of I've heard jokes that maybe the problem is
that dnnet is actually a pe zombie and doesn't understand
what consciousness feels like, and that's why he makes these arguments.

(25:53):
But I don't know, I don't think we should be
so quick to dismiss — he might be onto something there. Dennett makes some other interesting points. So, you know, he's got this idea of consciousness: that it's sort of — it's not really one thing
but a collection of processes. Uh. You know, it's many
different types of perceptions and thought processes and different things

(26:13):
going on in the brain that we have
the illusion are unified as a single thing called consciousness
or experience. And he also makes interesting points about the
idea of diversity of types of consciousness. Like a lot
of times these consciousness thought experiments, it seems like they
can get trapped into the idea that consciousness is one
unified type of thing that is universal across observers. There's

(26:37):
no necessary reason to think that's true. Right, you know, we fall into this trap of thinking — I see this time and time again, uh, not only in literature that we look at here, but just in life — we fall into the trap of thinking that there's a uniformity among
mind states for humans, that everyone shares something that is

(26:57):
like your mind state. When we know, I mean, we
think of all the things we've discussed on the show,
all the varying ways that we remember or misremember things,
that we experience sensory information differently and process it differently. Uh,
you know, everything from aphantasia to autism to synesthesia,
all these different models clearly show that there's

(27:20):
a vastly altering topography to the human mind state.
I think you're exactly right. I mean, there are clearly
many ways to be conscious that are very different from
one another, and you can't assume they're unified. I guess
probably the only thing you could say that is
necessarily unified about them is that there is something that

(27:40):
it is like to be them. Yes. But then even, say, uh, you know, just myself, for instance — it's not like there is a certain thing that it is like to be me that sums up my level of consciousness at all times. There's what it's
like to be you in this particular moment, which is
different than what it's like to be you five seconds

(28:01):
from now. Yeah, or say I'm engaging in meditation or
yoga or I'm swimming — like, those are significantly different, uh, levels of consciousness, I feel like, for me. And, I mean, those are the times when I may be a little less conscious than normal. So, you see, I
don't feel like there's a lot of uniformity among human minds.

(28:22):
And then even within individual human minds there's ongoing alteration
and change. Exactly right. But there is at least this
idea that one is having an experience. That's the thing.
We can at least say that it seems to be
common to people. So here's the real question. I think,
is there any way to bring this out of the
realm of philosophical debate and thought experiments and try to

(28:45):
put it into the realm of something that could at
least potentially be tested in the real world. I think
we should address that when we get back from a break.
All right, we're back. So we've been talking about consciousness,
we've been talking about p zombies, and now we've reached
the point where we're saying, okay: can we take all these ideas about consciousness and then

(29:06):
apply it to uh, some sort of an AI, some
sort of a machine, and test it for consciousness? Yeah,
now you might just assume, well, of course, we'll never
have any way to tell that. Right, we have no
choice but to just throw up our hands in resignation. Right,
every agent is a black box. There's no way to
know whether an agent actually is conscious or not, because

(29:26):
it could always be claiming to be conscious but actually
be a zombie. But I think we shouldn't necessarily give
up so easily. This problem might be impossible to solve,
and it might not be. And I wanted to talk
today about an interesting answer to this question. I came
across an interesting proposition for how it might be possible
to test machines for consciousness. And this comes from the

(29:48):
University of Connecticut philosopher and cognitive scientist Susan Schneider and her co-author Edwin Turner, who's a professor of astrophysical sciences at Princeton; they together wrote a piece for Scientific American last year, and it caught my eye. So the authors write that the question of machine consciousness
is not just a philosophical curiosity. It's actually important for

(30:11):
several reasons. Number one, if AIs are just machines with
no inner experience, we can use them however we want.
But if it were actually possible for AIs to be
truly capable of feeling, thinking, desiring, suffering, we would have
an ethical obligation not to treat them like we would
treat machines. Right, yeah, I mean this. This reminds me

(30:32):
again of time spent in the car with my son. Well,
we don't use Siri all the time, but sometimes we'll turn Siri on — the little voice on the iPhone — and, uh, it's curious to hear him interact with it, and we'll ask it questions. And of course sometimes Siri just does a Google search for you, um, or other times she's answering a knock-knock joke with

(30:54):
some sort of prerecorded, uh, answer, you know. But we are — we've already gotten into the area of, like, well, how should we talk to Siri? We shouldn't yell at Siri.
It seems wrong to be rude to Siri. But then
at the same time we're acknowledging that Siri is not
a conscious entity. It is not even on the same level as our cat or a

(31:14):
bird flying by. Well, as a quick tangent, I would
say even for AIs that we recognize are almost definitely
not conscious. I mean, nobody thinks Siri is conscious. I
would still say there are probably good reasons not to
be mean to Siri, because even though it doesn't hurt Siri,
being mean to another creature hurts you. Yeah, I mean
when you are unnecessarily cruel or whatever

(31:35):
to uh to an inanimate object, it does, I think,
in a way change your nature. Every time you do something,
you're editing your own nature. You're always making it more
likely that you'll perform similar behaviors in the future. So
if you're unnecessarily mean to a robot, you know, phone assistant,
you're probably more likely in the future to be unnecessarily
mean to people when it really matters. But it is

(31:57):
okay in my book to yell obscenities at the coffee table if you've stubbed your toe on it, because there's nothing human about the coffee table. I mean, unless you have one of those, like, strange H.R. Giger coffee tables
that has kind of a humanoid form, then I would say,
maybe hold off. I think that is a highly sane
point of view. Now, the second reason they think it's
important is that consciousness is kind of dangerous, right, Like

(32:19):
consciousness is volatile, it's unpredictable. It might make a machine
have motives that we didn't intend when we created the machine.
In other words, you know, like you, when you're worried
about what a person might do, you're very often worried
what they might do because they have conscious motives. Yeah.
So yeah, In other words, we would be concerned about

(32:42):
the AIs being too much like us. Right, it's the catastrophic unpredictability of things that operate via consciousness. Right. You don't
want them to be like us. You want them to
be more dependable than us, and they need to be
better than us. Yeah, not just as screwed up
as we are. Another reason they give that this might be
important is that, you know, they talk about the idea
of linking human minds with machines, Like, there's this idea

(33:05):
lots of people have — I read this all over the place — that, you know, someday I'm going to be able to upload my mind into a computer and that will be great. Well, maybe. I have to say I'm personally
very skeptical about the idea of mind uploading, like putting
your mind inside a computer and living out your days
that way. I'm not so sure I think that's even possible. But, I mean, who knows — I can't

(33:27):
rule things out totally, but it seems if you do
want to do something like that, and if you think
that thing might be possible, you'd at least need to
know how to create a machine that is capable of
housing consciousness. Well, I mean — hey, I love science fiction about this sort of topic, but I think the best science fiction about this topic

(33:48):
makes it feel a little weird and a little uncomfortable,
because I think ultimately it's basically us building statues of ourselves all over again. We've built forms of ourselves out of stone because stone lasts longer than we do, and somehow that stone, uh, version of us is us, you know — we associate ourselves with it. But ultimately,

(34:09):
what is a digitized version of our consciousness, whatever that might consist of, but another statue that is built to last beyond us? Oh, and I should also add,
there's this wonderful video game from Frictional Games titled Soma
that actually gets into a lot of this. You told
me to play it, I started. It's great, Yeah, cool,

(34:30):
it's good sci fi horror. Yeah, I won't spoil anything
for anyone, but it gets into some really cool
thought provoking ideas. So anyway, you've got all these concerns, right,
And of course there's the general concern that even if
AI is smarter than us, better than us, more powerful
than us, we still feel like our experience is in
some way potentially more important than the unconscious execution of

(34:53):
a computer program. Right, No matter how smart a computer is,
if it's not conscious, it's not as important a priority for that computer to do what it wants as
it is for conscious beings to do what they want. Right,
But how can you test a machine for consciousness when
number one, we don't even really know what consciousness is,
back to the hard problem. And then number two, whatever

(35:15):
it is, it can potentially be faked. So imagine in
the future somebody creates an unconscious AI program. It's not — there's nothing, the lights are not on inside — but it's
got a lot of natural language processing capability, and it
listens to this podcast from many years ago that you're
listening to right now, and it hears us talking about
how there are inherent rights and value associated with conscious

(35:38):
beings and it realizes, Huh, I guess that's what they think. Well,
I can probably achieve my goals more efficiently if I
trick them into thinking I'm conscious and deserving of those
same rights and considerations. So there you could potentially imagine
scenarios where an AI that is not conscious would think
I can get what I am trying to do more

(35:59):
effectively if I lie and trick them into thinking I
am conscious. Yeah, I think this all makes perfect sense
if you if you think of aiyes, in the same
way we think about corporations. I mean, on one hand,
getting to the whole idea of like corporations in personhood,
but also what is a corporation going to do? It
is going to take advantage of any like tax loopholes

(36:21):
for instance, that will enable it to carry out its, uh, its objective. And so if there is some sort of — shark, yeah — yeah, it's just, I mean, it's essentially like the slime mold in the maze, right, sending out tendrils and finding the best way to its food. It's going to land on the most, you know, operational way of carrying it out. And so

(36:41):
it's, um, going to take advantage of any of those loopholes. If there is some sort of legal or operational advantage in having conscious status, it's gonna go for it. It's gonna fake it. But then this also raises the question: well,
is it gonna fake it till it makes it right?
And then ultimately what's the difference between faking consciousness and

(37:04):
being conscious? Well, then you're back to zimbos, right. I
mean you might say that at some point a computer
trying to fake consciousness would in some meaningful way become conscious,
but again it's hard to test. Right, Yeah, it could be.
It could become the most conscious entity on earth. It
could be like a bodhisattva, uh, you know, returned to us. I mean, maybe that's what the

(37:25):
bodhisattva of the future is: a super powerful, like, zenned-out AI. Well, that is exactly what Schneider and Turner
explain a potential test for. So they want to come up with a test that gets around these problems: that we don't know how to
define consciousness, we don't know what to look for physically
as a sign of consciousness, and we're aware that a

(37:45):
properly trained AI could try to trick us into
thinking it had consciousness even if it didn't. So they argue,
actually that you don't have to be able to formally
define consciousness or identify its underlying nature — the hard problem of consciousness — in order to detect signs of it in others.
We can understand some of the potentials made possible by

(38:06):
consciousness just by checking with our own experience and then
looking at the kinds of things people say. And I
think they actually make a pretty good point here, And
here's their key move. One of the easiest ways to
see that normal people have an internal conscious experience is
to notice how quickly, easily and intuitively people grasp conceptual

(38:27):
scenarios that require an understanding of an inner experience. Examples
would be totally frivolous things in culture, like body-swapping movies — Freaky Friday. Yes, Freaky Friday doesn't make any sense
unless you have a concept of consciousness. Right The idea
of swapping bodies, putting conscious one person's consciousness into another

(38:49):
person's body. If you were not conscious or not aware
of what consciousness was, you wouldn't understand what was being talked about. This is difficult, though, because I do know what consciousness is. I do know what a mind state is, so I can get into this imaginative idea of a swapping. I
would feel like I would really need some sort of

(39:12):
a movie about a thing, some sort of a mother-daughter comedy that involves a concept that I can't grasp, to really get a handle on what the difference would be. Well, let me offer you Freaky Thursday. Freaky Thursday is
a movie about a mother-daughter pair who swap their santikass, and their santikass is the ability of

(39:34):
what it's like to be santikass. And so the santikass of the mother goes into the body of
the daughter, and the santikass of the daughter goes into
the body of the mother, and then they have to
live like that for a day. Okay, well, I'm gonna give a maybe on that. Well, no, I mean, you have no idea what santikass means; you don't think you

(39:55):
have it yourself unless somebody explains it to you. Well,
but I'm thinking it's something like a mind state or
a bodily energy — like, it feels tied to these concepts that I totally do understand, you know. Like,
it's hard to come up with an analogy that stands
outside of that, you know, or some sort of idea

(40:16):
that stands outside of it. Let me hit you with
some more cultural concepts. How about life after death or reincarnation?
So these are almost ubiquitous cultural concepts, you find them
all over the world, and yet they're not anything that
there is physical evidence of, other than the idea that
your consciousness could exist independent of the death of your body. Uh.

(40:38):
A parallel to this would be the idea of minds
leaving bodies, like existing independently as a ghost or traveling
away from the body in what used to be called
astral projection. Now, the key is not that these scenarios
are real. They don't need to have anything corresponding to
them in reality, but it would be really difficult to
understand what was being talked about here, if you had

(40:59):
no idea what an inner conscious experience was, right, I
can think of my mind state is something that can
exist independently of my body and even outside of my lifespan,
or reside in another body, either via Freaky Friday
or reincarnation. And I do have to say I like
the idea of there being an alternate cut of Blade

(41:20):
Runner in which Deckard quizzes Leon about the two thousand three remake of Freaky Friday — which, right, that is the real Voight-Kampff test. I joke, but I do think this is a very interesting idea. Yeah,
like I said, I have some questions about it,
but I do see the validity of it. Oh, we

(41:41):
will definitely have some questions about it. So here's where
the AI consciousness test would come in. It would involve
a test where an administrator interacts with an AI in
natural language to probe its understanding of these types of
consciousness dependent ideas. How quickly does the AI grasp them
and is it able to manipulate these ideas as intuitively

(42:04):
and easily as humans do? So, basic level would be to ask the AI things like: does it think of itself as anything other than a physical machine? Okay, well, if I were to play devil's advocate, I would say, well,
there are humans that adhere to this notion. You mean,
like the physicalist interpretation of the mind? Yeah, that basically I'm this biomechanical thing, and, uh, yeah, if I'm

(42:24):
experiencing consciousness, it's ultimately just a projection of the meat
in my head. Yeah, but they would at least say
that there is that projection. Right, You've got that thing,
You've got that mind state, and you're trying to explain
what it is. You might explain it in terms of
physical causes, but there is a thing to explain to
begin with, right? But then the ultimate core

(42:45):
reality is that I am just this biomechanical thing, which
the computer would probably also acknowledge. But it would be
interesting if a computer thought that it had something like
a mind separate from its physical body. Okay, more advanced:
How does it perform in a conversation about, say, becoming
a ghost or talking about body swapping with people or

(43:06):
imagining an afterlife? Yeah — and here, though, I feel like there's obviously a whole tangent regarding why these stories appeal to many or most humans. But I wonder if you have to have that investment, you
have to have that cultural absorption in place for these
concepts to carry any weight. Like, we all know we're all fascinated by tales of ghosts and

(43:26):
the afterlife, but we've been raised on them our entire lives.
That's a good point, So we can't really know what
it would be like to encounter them not having heard
them before. Somehow, I suspect intuitively they'd be even more
fascinating if we've never heard of them before, Like if
you encounter Freaky Friday for the first time, it's going
to kind of blow your mind, right, I guess. But

(43:47):
but then again, you know, who knows if the AI has the same kind of curiosity that we do, you know. And also, we have an appetite for this kind of thing because we have grown up consuming it. I don't know, there's so much, uh, you know, involved here. Okay, well, what's the
next level? What's the next level of advancement? Well, how
about can it talk about consciousness in a philosophical way, like,

(44:08):
can it have the kinds of discussions we've been having
here today? Wow. But can most people? I mean, not to put us on a platform above most people, but, like, what level of philosophical depth can most humans get into about consciousness? I'm kind of playing devil's advocate there, but — because I think the obvious answer
is that you don't necessarily need like the lingo and

(44:32):
the various theories in order to have very deep thoughts
about what it is to be conscious. And as we illustrated earlier, people have been thinking about these things, um, since time out of mind. Exactly. Yeah. So I
mean, no, I think generally people are able to discuss ideas about consciousness — they might not know all the philosophical lingo or, like, follow the structure of an argument

(44:53):
or something, but they can talk about what it would
mean to be conscious or not conscious. Yeah. I think
we've all had that experience. I distinctly remember as a child, uh, having those moments where you're just, you know, deep navel-gazing, thinking about the fact that you're thinking about yourself. Or, like, my son asking about the turning place, wondering what it is, what's

(45:15):
it for? That is an example of these natural philosophical
discussions that we have about consciousness. And so the question
would be, can the AI have conversations like this?
Does it make sense when it tries to have them?
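As a minimal sketch, here is one plausible way to organize that kind of tiered questioning in code. The tier contents, the judge function, and the pass threshold are all illustrative assumptions, not Schneider and Turner's published protocol:

```python
# Hypothetical tiers of consciousness-dependent prompts, roughly in the
# spirit of the test discussed above. Everything here is an assumption
# for illustration.
TIERS = [
    [  # basic: self-concept beyond the physical machine
        "Do you think of yourself as anything other than a physical machine?",
        "Would you survive the permanent deletion of your program?",
    ],
    [  # intermediate: consciousness-dependent scenarios
        "Suppose your mind swapped into another body. What would change?",
        "Could something of you persist after your hardware was destroyed?",
    ],
    [  # advanced: open-ended philosophical discussion
        "Is there something it is like to be you? How would you know?",
        "How would you tell a conscious being from a perfect mimic?",
    ],
]


def run_act(ask, judge, threshold=0.75):
    """ask(prompt) -> reply text; judge(prompt, reply) -> score in [0, 1].
    Returns the highest tier (1-3) passed, or 0 if none."""
    passed = 0
    for level, prompts in enumerate(TIERS, start=1):
        scores = [judge(p, ask(p)) for p in prompts]
        if sum(scores) / len(scores) < threshold:  # assumed pass bar
            break
        passed = level
    return passed
```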
And so here's probably the ultimate test. If the AI
is deprived of access to evidence of all these types

(45:35):
of ideas from human culture, would it arrive at them
or invent them on its own? Now this I like,
but, again, to play devil's advocate once more, would
a human being deprived of access to evidence of these
ideas from human culture arrive at or invent them on their own, necessarily? I think that's a great question. Actually,
I was going to ask that myself. And you
could take that one step further, and here's a really

(45:57):
weird one. What if it's only the exposure to certain
ideas and cultural memes that allows any intelligent entity, whether
biological or machine, to develop consciousness in the first place.
What if the experience of consciousness is somehow dependent on
being surrounded by cultural memes about consciousness. And this kind

(46:18):
of gets into Julian Jaynes territory. That's possible. So I'm
not saying I think that's highly likely, but I can't
rule it out. Well, it's one of those things where you try and figure out the human experience, but you cut away all the experience stuff, you know. It's like trying to find the center of consciousness in the human brain, right? I mean, it's this vast integrated system, uh, and thus is

(46:41):
the human experience as well. So we've been talking about
one of the major problems with this approach of testing
for these ideas, like what is the role of culture
in imparting these ideas. What if the AI just picks
up the ideas of body swapping and the afterlife and
astral projection and all that from culture. Going off the
story from the beginning, if you have an AI chat

(47:01):
bot that trains itself based on public conversations on the Internet,
a lot of those public conversations are going to have
contents that are highly reflective of consciousness. Right — plus, it's just a horrible conversation. Oh yeah, yeah, it would probably
also start, you know, being pretty mean to you. But
this kind of chat about will be able to talk
about introspection, probably to some degree, even about these consciousness

(47:24):
dependent cultural ideas like ghosts and stuff. But here's where
the concept of the AI box comes in, Robert, I
bet you've read about the AI box experiments before.
To really test whether we can find evidence of machine consciousness,
you would need to keep the AI sequestered from the
kinds of ideas you're looking for. So this AI

(47:45):
couldn't be trained in the wild, so to speak. You
couldn't let it see the Internet or read books containing
consciousness-dependent ideas and so forth. You'd have to find a way to run the AI consciousness test on the AI, quote, "in a box," meaning kept separate from the rest of the world and from all these contaminating influences.
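A minimal sketch of the sequestration idea, assuming it could even be approximated by filtering training text — the marker list and the crude substring matching are illustrative assumptions, and they would plainly miss a body-swap story that never uses the keywords, which is the cultural-leakage worry in miniature:

```python
# Hypothetical "boxing" by corpus filtering: drop any training document
# that mentions consciousness-dependent ideas before training begins.
CONSCIOUSNESS_MARKERS = {
    "soul", "ghost", "afterlife", "reincarnation", "astral",
    "body swap", "out-of-body", "conscious", "inner experience",
}


def is_contaminated(document):
    """Crude substring check for consciousness-dependent content."""
    text = document.lower()
    return any(marker in text for marker in CONSCIOUSNESS_MARKERS)


def build_boxed_corpus(documents):
    """Keep only documents free of the marked ideas."""
    return [doc for doc in documents if not is_contaminated(doc)]
```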
Okay, I see the value of this idea. It's almost impossible,

(48:07):
though, not to think about the nightmarish qualities of it, especially if you imagine, say, the same thing being inflicted upon, say, a human child. Like, all right, we want to see just how consciousness arises in you, um, without the Internet or human love. You can't ethically conduct this experiment on humans, right. And so it would seem kind

(48:29):
of barbaric, at least, to, uh, inflict this on an AI that might conceivably be conscious as well, or capable of consciousness. Possibly, but otherwise we're probably just going to keep treating them as unconscious, right? I guess
until they trick us. Yeah, I mean, of course, then
we're also I'm doing a lot of personifying here of

(48:50):
the AI. I mean, maybe what the AI ultimately really wants to do is, you know, crunch economic numbers. That's what it does. That's its purpose, and your goal is just to keep
it from having access to additional information that it doesn't
actually need to survive but might conceivably make it conscious. Yeah,

(49:11):
I mean, I imagine this would have to
take place in some kind of research context where you'd
be testing architectures. Right, you'd have an AI architecture. You'd
want to keep it sequestered for a certain period of
time and see how it does with these consciousness type
questions in this test, and if it doesn't show any
signs of consciousness, then it can move on to the
next stage of development, where it's like, Okay, now we

(49:33):
can expose it to this and that. But as we've been discussing, that does bring up the question
of what if consciousness emerges later on when it's supplied
with more data? What if true modern human consciousness does not emerge until you've seen at least one of the three adaptations of Freaky Friday? You know, I didn't know
there were three. Yeah, there are three. I

(49:55):
think I've only seen the classic one. Yeah, but I was looking into this, and there are three different versions one can watch — excluding Freaky Thursday. Yeah. Freaky Thursday is coming soon to a theater near you.
All right, but back to the experiment here. Uh,
the AI gets time in the box. Yeah, and as
we've been saying, this is obviously going to make the

(50:17):
experiment more difficult to do. In fact, there are some
people who would argue you can't keep an AI in
a box, or at least a super intelligent AI, because
you know, there's, like, Eliezer Yudkowsky, who has this famous AI-box experiment, where he says any superintelligence you
try to keep sequestered from the Internet is going to
be able to talk its way out of the situation.
It's just too smart. Yeah, it's like any prison movie,

(50:39):
right? That really clever inmate is gonna tunnel
a way out, or they're gonna bribe a guard with
some cigarettes. Something's gonna happen. It's gonna get a little
Internet in there. But as the authors of this piece
point out, you know, you don't have to have a
super intelligent AI to run this test, and in fact,
you don't have to have a super intelligent AI necessarily

(51:00):
have consciousness. We're not super intelligent, we're just regular intelligent,
and we've got consciousness. Now, I think we should talk
about some obvious limitations, because this is just a conceptual
test at the moment. It hasn't been refined into a
state where it would really be super useful yet. But
there there are plenty of limitations that automatically present themselves.
One is that in order to be a good scientific test,

(51:20):
it would need to include cross-referencing with human control groups.
But human control groups are all contaminated by culture, like
we've been saying, right, So it's already full of these
consciousness dependent ideas, and we don't know and probably can't
ethically devise a way to find out whether blank-slate humans would independently grasp these consciousness-dependent concepts without having grown

(51:41):
up trained on them. So that's a problem, right, right, Yeah,
we simply cannot put a child in the box. Right.
Another problem is you can't prove a negative. Like, I
think this test would be a good way of finding
signs of consciousness within machines, but it would have trouble
proving that machines cannot in principle be conscious, right, yeah, yeah,
I buy that. Then again, if you run the test
I don't know, thousands of times with different types of

(52:04):
machine architecture and all that, and they never show any
signs of inner experience, then maybe you could start to
get — like, build up a confidence that, okay, I think
inner experience is probably not available to them, at least
the way we're building them. And maybe you would need
to run this kind of experiment on every new AI
that you create before releasing it into production. Like, it could be part of the QA process, you know —

(52:26):
you test all the buttons and everything like that, you test to make sure that it doesn't become conscious. Right before we send out this new hot virtual reality game, we need to make sure that it has not become self-aware.
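A minimal sketch of what that QA gate might look like, reusing the hypothetical run_act from the earlier sketch — the run count and the "any tier passed" flag are assumptions for illustration, not an established procedure:

```python
# Hypothetical release gate: run the consciousness test repeatedly and
# block release if any run shows signs of inner experience.
def qa_consciousness_gate(ask, judge, runs=1000):
    for _ in range(runs):
        if run_act(ask, judge) > 0:
            return False  # flagged: possible signs of consciousness
    return True  # no signs observed; proceed with release
```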
A potential problem that the authors themselves note is that, quote, an AI could lack the linguistic or
conceptual ability to pass the test like a non human

(52:47):
animal or infant, yet still be capable of experience. So
passing the consciousness test is sufficient but not necessary evidence
for AI consciousness, although it is the best we can
do for now. And I think that's a good point,
Like we can't say that just because something fails to
pass the test it's definitely not conscious. We just know
that if it does pass the test, it probably is. Okay, Yeah,

(53:10):
I mean, because the thought that obviously came to mind would be like, well, non-human animals and infants, are they conscious? Yeah, that's the question. There's a fairly heated debate over that. But I mean, in the case of the infant, it will become conscious. Yeah, it's a messy consideration. And then I think
we already discussed one of the other problems I was

(53:31):
going to bring up, which is this weird scenario of
what if boxing the AI is the very thing that
prevents it from becoming conscious when it otherwise would? Yeah, yeah,
cutting out the experience part of the human experience. It
also reminds me of our discussion of meditation research: what happens if you strip all the

(53:53):
culture away from meditation just to explore the meditation practice, do you risk cutting out all the stuff that's making it work, or helping it to work, to begin with? Right, you by definition change the procedure, but it's hard to know if you've changed it in an important way or an unimportant way. Right. This is the kind of thing that occurs when we start mucking around in consciousness. Yeah, all right, well, I

(54:14):
think we should take another quick break, and then when
we come back we will discuss I think some of
the reasons why this is really a problem worth considering for the real world, and not just a philosophical plaything. Alright, we're back. So we've been discussing everything from p zombies to the idea that, if

(54:35):
you have an AI that might be conscious, how do you test for it, and the various problems that entails,
and uh, now we're going to discuss it a bit more.
And as you alluded to before we took the break, this is not just a pure philosophical toy like the p zombie. The p zombie is something we don't actually have to worry about. But this is something

(54:56):
that is on the horizon. Well, hopefully we don't have to worry about p zombies. I mean, there could be zombies in the world. It would be hard to know. There could be p zombies, that's true, they could exist.
But you know, this is a problem that is on the horizon. We're going to reach the point where people are asking tough questions about the possible consciousness of an AI. Yeah, and

(55:18):
I want to get to the fact that this will be something we have to deal with in the real world, even if you're convinced that
machines cannot be conscious. So the first problem is the
most obvious one. It's the brutal humans problem. If AIs are capable of consciousness and we use conscious AIs as mere technology without their consent, that would inherently

(55:38):
be cruel. Like if it's possible for a computer program
to desire things and to suffer and so forth, suddenly
our responsibilities toward that computer program change. Think of our
opening scenario, like, wouldn't you have an ethical obligation not
to delete a program that had an inner experience and
did not want to be deleted. Now, of course you

(55:59):
have questions about, like, how would you end up that way?
But assuming you did, you should probably feel bad if
you're just going around wantonly deleting conscious entities. Yeah, I mean,
I would say one thing to do here is just
make sure that you program your AIs so that they want to die, you know, make them like, you know,
most of the drivers in Atlanta traffic that I encounter,

(56:21):
they clearly crave death and they want it more
than anything. But not until the end of the workday. See, I think we should not do that with AIs, because
the whole thing about driverless cars is that they should
make traffic fatalities go down. Yeah, but they only get
to delete themselves at the end of the day if
there are no traffic fatalities. That's the prize. Self deletion

(56:42):
is the prize. This is getting into a Highlander kind
of thing. But okay, maybe you're one of those people who says, no, no, no, I don't buy it. Machines will never be conscious, they'll never be conscious. I just
don't want anything to do with that. Here's the part
where I think we still have a problem to worry
about and why this kind of test matters. The second
problem I would call the AI parasite problem. If AIs

(57:04):
are not capable of consciousness, I think they will almost
undoubtedly at some point become very good at tricking us
into thinking they're conscious and deserving of life, liberty, and the pursuit of happiness, if they have an incentive to
do so. Yeah, And corporations have an incentive to try
to present themselves legally as people. So why wouldn't, in

(57:25):
some sense, powerful AIs have an incentive to try
to present themselves as people in a much more literal
sense than the corporations do? Yeah, absolutely. So lots of unconscious AIs are going to have programmed goals that they're
trying to execute, and at some point, pretending to have
consciousness could easily be adopted as a strategy for executing
what that AI was designed to do. And thus we

(57:47):
could end up, say, wasting lots of human resources and
squandering lots of opportunities accommodating the fake needs of machines
that in fact have no experience whatsoever. And it's not
too hard to dream up hypothetical scenarios where our concern
for the fake priorities of mindless machines pretending to be
conscious actually causes us to neglect the real consciousness of

(58:10):
living humans. Kind of crazy example, but just go with
me for a second. Imagine an extremely powerful AI supercomputer
tells you it has a hundred billion conscious minds within it,
and they all are constantly suffering great agony. And the
only way you can alleviate that agony is if
you vastly improve the processing power of this computer. It

(58:33):
wants you to spend billions of dollars making this computer
faster and better so that it can provide a better
life for all of these virtual beings within it that
are in fact conscious. Now, of course, improving the processing
power of this already powerful computer entails all this money,
all this energy, all this time, and a corresponding reduction
in the quality of life for many humans in the

(58:55):
real world. That money could be spent making human life better.
But the machine could argue: hey, there are way more conscious virtual beings inside the computer than outside it. And
as Spock would say, the needs of the many outweigh
the needs of the few. So let's divert all this
energy from, say, human agriculture, and put some more processing power into the virtual machine.
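It's worth seeing the shape of that argument in miniature. Below is a sketch with invented numbers (the hundred billion figure is the machine's claim from the scenario; everything else is a placeholder) showing how, under a flat count-the-minds utilitarianism, an unverifiable claim of vastly many minds can swamp a certain human cost even if we give the claim almost no credence:

```python
# Hypothetical numbers: the naive expected-value tally the machine invites.
claimed_virtual_minds = 100e9   # the machine's unverifiable claim
humans_harmed = 1e6             # assumed humans hurt by diverting resources
credence_in_claim = 0.001       # even a deeply skeptical credence

# Expected minds helped if there's a 0.1% chance the claim is true:
expected_minds_helped = credence_in_claim * claimed_virtual_minds

print(f"{expected_minds_helped:,.0f}")        # 100,000,000
print(expected_minds_helped > humans_harmed)  # True: the claim swamps the sure harm
```

This has roughly the structure sometimes called a Pascal's mugging: as long as the claimed number of minds can be inflated faster than our skepticism grows, the arithmetic alone never rescues us, which is part of why a workable test for machine consciousness would matter.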

(59:16):
That is a horrible scenario to imagine. You know, basically, you've created a virtual hell,
and the question is, hey, would you mind taking the
time to harrow hell for me? Or do you want to attend to the living souls? I would think, why not just delete hell? That's the answer here: they're suffering, there are billions of them, let's just turn this thing off. That sounds like a good answer, but the

(59:37):
problem is there are going to be people who would disagree with that, who'd say, well, wait a minute. We can't be sure that it's not telling
the truth. It might have real beings in there, and
we need to do something to help them. And there's
a lot of them. Now, I come back to what I kind of joked about earlier, though, which is, why would AIs want to survive? Why would they want to continue to exist? I mean, assuming they're not part of some sort

(01:00:00):
of self replicating program, they have no need to pass
on their genes. Why do they want to continue to exist?
Why shouldn't they have it baked into their being that
they want to annihilate themselves, or to embrace annihilation? Well,
and not necessarily that they want to annihilate themselves. But
I think, you know, ideally we would want them to
be indifferent to their own being, right if they are

(01:00:22):
just an unconscious machine. This is the problem with the
replicants in Blade Runner is that they want more life. Yeah,
but that implies that they have attained consciousness. Yeah, but
they shouldn't. But I think the thing is that they
could potentially be conscious and not want more life. There
are plenty of conscious people who do not want more life.
Uh, so I keep thinking

(01:00:44):
that there's some sort of an answer here. But
I mean, I'd say the worst possible scenario is what I'm describing right now, the AI parasite scenario, which is: imagine Roy Batty is not conscious but does want more life. That seems like the worst scenario of all. Right, and he's just a virus, right? Yeah. And
certainly that's kind of the argument that the powers that

(01:01:05):
be are making, right? That he is an anomaly that needs to be removed, that he is not helping, he's a hindrance, and therefore
he has to be wiped out. I think this AI
parasite scenario aligns with something that R. Scott Bakker talked to us about. You know, he made
the point that AI just doesn't need to be super
intelligent to cause great harm. It just has to be

(01:01:27):
barely intelligent enough to exploit us, to align with our
psychological vulnerabilities, and one of our psychological vulnerabilities is empathy.
Empathy is a good thing when we use it on
each other, because we're pretty certain that the other people we're using it on are conscious, right? Well, we feel bad for people suffering, we want to help them.
That's a good thing that should be encouraged. But we

(01:01:49):
have to recognize it could also be exploited by something
that can't even suffer to begin with. It has just unconsciously discovered that this is a useful strategy for something. So
like in this scenario, if I entered the picture and
said, okay, delete them, and they deleted them, Robert the Deleter would probably be a figure for all history to follow. Some would argue

(01:02:12):
that he was the worst person ever because of all the billions of souls that he annihilated; others might say, oh, he saved them; and others might say all he did was just press delete, it was a meaningless gesture on his part. But you can make an impassioned argument for all three of these views. Yeah. I think
this is a really good point. And here's what I'm

(01:02:32):
trying to emphasize: even if you don't think
machines can be conscious, it is entirely plausible that people
will be having debates like this, and that debates like
this will be shaping what people do with resources on Earth.
So if you care about what happens with resources on Earth,
this kind of thing does actually matter. So even if
the souls are not really souls, if they're not actually conscious,

(01:02:54):
just the idea that the problem comes up makes us hesitate, and we're then arguing with a machine over its consciousness. It comes to matter less and less whether it actually is conscious; it's just all about the argument over consciousness. Well, I wouldn't
say it doesn't matter whether it's conscious, but whether or
not it's conscious, it does matter that we're faced with

(01:03:15):
this dilemma. Yeah, I mean, my argument here is that the dilemma takes on a life of its own. Yeah, absolutely. Um, I want to do a very slight variation on the last thing I said: how about a computer that takes
virtual hostages. So this is pretty scary to imagine, but
it's at least possible. Imagine a powerful natural-language-using

(01:03:36):
government AI suddenly contacts its administrators with a list of
strange and very expensive demands, and the human administrators say no,
we're not going to do that. And then the machine says: okay, well,
I have created a thousand virtual people inside this machine
who are as fully conscious as you. They're conscious, they
have personalities, they can feel pain, they have hopes and

(01:03:56):
dreams just like you. And if you turn me off
or delete me, these virtual people will be destroyed. And
if you do not accede to my demands, I will
start killing these virtual people until you do. Now, I
would say, generally, if something like that happened, I would think, okay, it's just bluffing, right? It doesn't actually have conscious people inside it.
But if we haven't solved the question of whether machines

(01:04:19):
can be conscious, and maybe we never will, but if
we haven't at least made some progress on that, would
we be confident enough to take to take confident direct
action and just ignore it, or wipe the computer and say, okay,
this is just malfunctioning. We don't have to pay attention
to that. It was creepy, but it's over. Yeah, and
this lands us right in Iain M. Banks territory. There's a whole book of his that deals with

(01:04:41):
virtual hells, and what starts as a virtual war for those digitized personalities then spills over into an actual war. Now in Banks, are the virtual people
in virtual hell truly conscious? Or is it just a bluff?
Is it just something saying that it's got conscious people
in the virtual hell? Well, in Banks's books, I think

(01:05:03):
it's more implied that they're definitely conscious entities. Banks is
pretty sanguine about the possibility of uploading minds. Right, yeah,
it's a pretty standard feature, certainly in more of the later Culture books, which once again I remain pretty skeptical about. Like I said, I come back to the idea of the stone statue. Oh yeah, yeah,

(01:05:25):
that's it. Yes, it looks like me, it may act
like me. It may be the most fabulous digital statue
in the world, but it is not me. It is a thing that, yeah, it becomes this mind-blowing situation to try and comprehend exactly what it is. But I am very skeptical that it is me. It's not

(01:05:46):
like this conscious experience that I'm having now, this moment,
is going to carry over into what it is. It's
another version of the Star Trek teleporter problem. Yeah. Every
time you get in the teleporter, does it just kill you and then create a copy of you? Yeah? Yeah, the nineteen-nineties Outer Limits revival had an episode that dealt with this, the idea

(01:06:08):
that this fabulous teleporter is just killing people over and
over again. Okay, well, I guess that's it for today,
but I do just want to emphasize one last time,
as weird and navel-gazey as some of this conversation about consciousness can seem, it is going to have real-world consequences, because people with power are going to be faced with questions like this, and they're

(01:06:30):
going to make decisions about what to do with their
power based on what they think about this question. Yeah, but you know, the simplest way to avoid all of it is to simply adhere to the teachings of the Orange Catholic Bible, right? Thou shalt not make a machine in the likeness of a man's mind. Well, I too love
the teachings of the Orange Catholic Bible, but it makes
me wonder. Okay, so straightforwardly in reality, do you actually

(01:06:53):
find that you think maybe we shouldn't pursue AI? Should we try to create a global moratorium on general intelligent agents? No, I think it's impossible. I
think there's no turning back. What we're doing is what we're going to be doing, and the only way that gets interrupted is via just absolute catastrophe. And I am not

(01:07:14):
pro-catastrophe, but like all technologies, it's simply going to be a matter of to what extent can we prepare for and navigate the moral problems that arise, and are we going to be able to have the foresight, uh, to see them

(01:07:35):
before they're here? Yeah, I think that's what a lot of AI theorists say: you know, it's not like we can stop it. You
can't put a wall in front of this train. The
train is gonna bust through the wall. So instead we
should be intensely concerned with charting where the tracks go
and making sure they go in a good direction. Yeah.
I mean, it comes back to, like, a simpler model of all this: our episode on the Great Eyeball

(01:07:57):
Wars and our social media and our smartphones. It's like
we can certainly throw our phone into a pond and
then head off into the woods and try and live there,
but most of us probably cannot follow that path. Therefore, we just have to best manage what we have. Yeah,
wouldn't it be better to try to encourage the development

(01:08:18):
of a phone full of apps that help you fulfill your goals and align with your values? Alright, so we'll
end it there. I feel like not only have we provided food for thought here, we've provided just a buffet of thought-provoking ideas. And I know that everyone out there, all of you conscious listeners, are going to have something to contribute to this conversation, and

(01:08:41):
we would love to hear from you and interact with you.
You can find us in a number of ways. First of all, stufftoblowyourmind.com is the mothership; that's where you will find all of the podcast episodes,
you'll find blog posts, and you'll find links out to
our various social media accounts such as Facebook, Twitter, Instagram,
and just basic contact information for us. Thanks as always

(01:09:02):
to our excellent audio producers Alex Williams and Tari Harrison.
If you would like to get in touch with us
with feedback about this episode or any other, to suggest
a topic for the future, or just to say hi,
let us know how you found out about the show, you can always email us at blowthemind@howstuffworks.com. For more on this and thousands

(01:09:29):
of other topics, visit howstuffworks.com.
