Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to Tech Stuff, a production of iHeartRadio's
How Stuff Works. Hey there, and welcome to Tech Stuff.
I'm your host, Jonathan Strickland. I'm an executive producer with
How Stuff Works and iHeartRadio, and I love
all things tech. And there's a topic I have touched
on several occasions in past episodes, but I really
(00:26):
wanted to dig down today into this topic because it's
one of those that's fascinating and is an underpinning for
tons of speculative fiction and horror stories. And since we're
now in October, I think this would be kind
of thematically linked to Halloween, tech style. It turns out
that's pretty hard to do; past Halloween technology episodes have already
(00:48):
covered stuff like haunted house technology. So today we're going
to talk about consciousness and whether or not it might
be possible that machines could one day achieve consciousness. Now,
I could start this off by talking about the Turing test,
which many people have used as the launch point for
machine intelligence and machine consciousness debates. The way we
(01:11):
understand that test today, which by the way, is slightly
different from the test that Alan Turing first proposed, is
that you have a human interviewer who through a computer interface,
asks questions of a subject, and the subject might be
another human, or it might be a computer program posing
as a human, and the interviewer just sees text
(01:34):
on a screen. So if the interviewer is unable to
reliably tell the difference, to determine whether the subject
was a machine or a person, then the program or machine
being tested is said to have passed the Turing test.
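To make that setup concrete, here's a minimal sketch in Python of how you might score one version of the imitation game. The judge, human, and machine functions here are hypothetical placeholders, not anything Turing actually specified.

```python
import random

def run_imitation_game(judge, human, machine, questions, trials=100):
    """Score how often a judge can pick out the machine.

    judge, human, and machine are hypothetical callables: each
    respondent maps a question to a text answer, and the judge maps
    two anonymous transcripts to a guess (0 or 1) of which one is
    the machine.
    """
    correct = 0
    for _ in range(trials):
        transcripts = [[(q, respondent(q)) for q in questions]
                       for respondent in (human, machine)]
        order = [0, 1]
        random.shuffle(order)          # hide which transcript is which
        shown = [transcripts[i] for i in order]
        guess = judge(shown)           # judge picks an index, 0 or 1
        if order[guess] == 1:          # index 1 was the machine
            correct += 1
    # A judge stuck near 50% can't tell the difference, and the
    # machine is said to pass this version of the test.
    return correct / trials
```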
(01:54):
It doesn't mean the program or machine is conscious or even intelligent,
but rather that, to outward appearances, it seems to
be intelligent and conscious. See, we humans can't be
absolutely sure that other humans are conscious and intelligent. We
assume that they are because each of us knows of
our own consciousness and our own intelligence. We have a
(02:16):
direct personal experience with that, and other people
seem to display behaviors that indicate they too possess those traits,
and they too have a personal experience. But we cannot
be those other people, and so we have to grant
them the consideration that they too are conscious and intelligent.
(02:38):
And I agree that is very big of us. This
is actually called the problem of other minds in the
field of philosophy, and the problem is this, it is
impossible for any one of us to step outside of
ourselves and into any other person's consciousness. We cannot feel
(02:58):
what other people are feeling or experience their thoughts firsthand.
We are aware of our own abilities, but we are
only aware of the appearance that other people share those abilities.
So assuming that other people also experience consciousness rather than
imitating it really, really well, that's a step we all
(03:18):
have to take. Turing's point is that if we do
grant that consideration to other people, why would we not
do it to machines as well? I mean, the machine
appears to possess the same qualities as a human. This
is a hypothetical machine, so we cannot experience what that
machine is going through, just as we can't experience what
(03:41):
another person is going through, at least not at the intrinsic,
personal level. So why would we not grant the machine
the same consideration that we would grant to people. And
Turing was being a little cheeky. But while I just
gave kind of a super fast, high-level description of
the Turing test, that's not actually where I want to start.
(04:03):
I want to begin with the concept of consciousness itself. Now,
the reason I want to do this isn't just to
make a longer podcast. It's because I think one of
the most fundamental problems with the discussion about AI intelligence,
self awareness, and consciousness is that there tends to be
a pretty large disconnect between the biologists and the doctors
(04:26):
who specialize in neuroscience, particularly cognitive neuroscience, and who do have
some understanding of the nature of consciousness in people. And
then you have computer scientists who have a deep understanding
of how computers process information. And while we frequently will
compare brains to computers, that comparison is not one to one.
(04:46):
It is largely a comparison of convenience, and in some
cases you could argue it's not terribly useful, it might
actually be counterproductive. And so I think at least some
of the speculation about machine consciousness is based on a
lack of understanding of how complicated and mysterious this topic
is in the first place, and this ends up being
(05:08):
really tricky. Consciousness isn't an easily defined quality or quantity.
Some people like to say we don't so much define
consciousness by what it is but rather by what it isn't,
and this will also kind of bring us
into the realm of philosophy. Now, I'm gonna be honest
with you, guys, the realm of philosophy is not one
(05:30):
I'm terribly comfortable in. I'm pretty pragmatic, and philosophy deals
with a lot of stuff that is, at least for now, unknowable.
Philosophy sometimes asks questions that we do not and cannot
have the answer to, and in many cases we may
never be able to answer those questions. And the pragmatist
in me says, well, why bother asking the question if you
(05:53):
can never get the answer. Let's just focus on the
stuff we actually can answer. Now, I realize this is
a limitation on my part. I'm owning that I'm not
out to upset the philosophical apple cart. I'm just of
a different philosophical bent. And I realized that just because
we can't answer some questions right now, that doesn't necessarily
(06:16):
mean they will all go unanswered for all time. We
might glean a way of answering at least some of them,
though I suspect a few will be forever unanswered. If
we go with the basic dictionary definition of consciousness, it's
quote the state of being awake and aware of one's
surroundings end quote. But what this doesn't tell us is
(06:39):
what's going on that lets us do that. It also
doesn't talk about being aware of oneself, which we largely
consider to be part of consciousness. It's not just being aware
of your surroundings, but aware that you exist within those surroundings,
your relationship to your surroundings, and things that are going
on within you, yourself, your feelings, and your thoughts. The
(07:03):
fact that you can process all of this, you can
reflect upon yourself. We tend to group that into consciousness
as well. So how is it that we can feel
things and be aware of those feelings? How is it
that we can have intentions and be aware of our intentions?
We are more complex than beings that simply react to
(07:24):
sensory input. We are more than beings that respond to
stuff like hunger, fear, or the desire to procreate. We
have motivations, sometimes really complex motivations, and we can reflect
on those, We can examine them, we can question them,
we can even change them. So how do we do this? Now?
(07:45):
We know this is special because some of the things
we can do are shared among a very few species
on Earth. For example, we humans can recognize our own
reflections in a mirror, starting at around age two or so.
We can see the mirror image and we recognize that the
mirror image is us. Now, there are only eight species
(08:08):
that can do this that we know about anyway. Those
species are the great apes, so you've got humans, gorillas, orangutans, bonobos,
and chimpanzees, plus the magpie and the dolphin, and that's it. Oh,
and magpies are birds, right. That's all of them.
Recognizing one's own form in a mirror shows a sense
(08:30):
of self awareness, literally, of awareness of one's self. Now,
there are a lot of great resources online and offline
that go into the topic of consciousness. Heck, there are
numerous college level courses and graduate level courses dedicated to
this topic. So I'm not going to be able to
go into all the different hypotheses, arguments, counter arguments, et
(08:53):
cetera in this episode, but I can cover some basics. Also, I
highly recommend you check out Vsauce's video on YouTube
that's titled What Is Consciousness?, because it's really good. And no,
I don't know Michael, I have no connection to him.
I've never met him. This is just an honest recommendation
(09:13):
from me, and I have no connection whatsoever to that
video series. The video includes a link to what Vsauce
dubs a lean back, which is a playlist of
related videos on the subject at hand, in this case, consciousness.
Those are also really fascinating. But I do want to
point out that, at least at the time of this recording,
a couple of the videos in that playlist have since
(09:35):
been delisted from YouTube for whatever reason. So there are
a couple of blank spots in there. But what those
videos show, and what countless papers and courses and presentations
also show, is that the brain is so incredibly complex
and nuanced that we don't know what we don't know.
We do know that there are some pretty funky things
(09:57):
going on in the gray matter up in our noggins,
and we also know that many of the explanations given
to describe consciousness rely upon some assumptions that we don't
have any substantial evidence for. You can't really assert something
to be true if it's based on a premise that
you also don't know to be true. That's not how
(10:19):
good science works. This is also why I reject the
arguments around stuff like ghost hunting equipment. The use of
that equipment is predicated on the argument that ghosts exist
and they have certain influences on their environment. But we
haven't proven that ghosts exist in the first place, let
alone that they can affect the environment. So selling a
(10:40):
meter that supposedly detects a ghostly presence from electromagnetic fluctuations
makes no logical sense. For us to know that to
be true, we would already have to have established that
one, ghosts are real, and, two, that they have these
effects on electromagnetic fields, and we haven't done that. It's like
working science in reverse. That's not how it works. Anyway.
(11:04):
There are a lot of arguments about consciousness that suggest
perhaps there's some ineffable force that informs it. You can
call it the spirit or the soul or whatever. So
that argument suggests that this thing we've never proven to
exist is what gives consciousness its origin, and that's
a problem. We can't really state that. I mean, you
(11:26):
can't say the reason this thing exists is that this
other thing that we've never proven to exist makes it exist. Well,
then you've just made it harder to even prove anything,
and we have evidence that also shows that that whole
idea doesn't hold water. The evidence comes in the form
of brain disorders, brain diseases, and brain damage. We have
(11:47):
seen that disease and damage to the brain affects consciousness,
which suggests that consciousness manifests from the actual form and
function of our brains, not from any mysterious force. Our
ability to perceive, to process information, to have an understanding
of the self, to have an accurate reflection of what's
(12:09):
going on around us within our own conceptual reality, all
of that appears to be predicated primarily upon the brain. Now,
originally I was planning to give a rundown on some
of the prevailing theories about consciousness. In other words, I
wanted to summarize the various schools of thought about how
consciousness actually arises. But as I dove down into the research,
(12:33):
it became apparent really quickly that such a discussion would
require so much groundwork and more importantly, a much deeper
understanding on my part than would be practical for this podcast.
So instead of talking about the higher order theory of
consciousness versus the global workspace theory versus integrated information theory,
(12:54):
I'll take a step back, and I'll say there's a
lot of ongoing debate about the subject, and no one
has conclusively proven that any particular theory or argument is
most likely true. Each theory has its strengths and its weaknesses.
And complicating matters further is that we haven't refined our
language around the concepts enough to differentiate various ideas. That
(13:19):
means you can't talk about an organism being conscious of
something as if that degree of consciousness were somehow inherently specific.
It's not. That's the issue. So, for example, I could
say a rat is conscious of a rat terrier, a type
of dog that hunts down rats, and so as a
result of this consciousness of the rat terrier, the rat
(13:41):
attempts to remain hidden so as not to be killed.
But does that mean the rat merely perceives the rat
terrier and thus is trying to stay out of its way,
and that's as far as the consciousness goes? Or does it
mean that the rat actually has a deeper, more meaningful
awareness of the rat terrier? The language isn't much help here,
and moreover, there's debate about what degrees of consciousness there
(14:05):
even are. Also, while I've been harping on consciousness, that's
not the only concept we have to consider. Another is intelligence,
which is distinct from consciousness, though there are some similarities.
Like consciousness, intelligence is predicated upon brain functions. Again, a
long history of investigating brain disorders and brain damage indicates this,
(14:28):
as it can affect not just consciousness but also intelligence.
So what is intelligence? Well, get ready for this:
like consciousness, there's no single agreed-upon definition or theory
of intelligence. In general, we use the word intelligence to
describe the ability to think, to learn, to absorb knowledge,
(14:49):
and to make use of it to develop skills. Intelligence
is what allowed humans to learn how to make basic tools,
to gain an understanding of how to cultivate plants and
develop agriculture, to develop architecture, to understand mathematical principles, and
all sorts of stuff. So in humans, we tend to
lump consciousness and intelligence together. We tend to think in
(15:10):
terms of being intelligent and being self aware, but the
two need not necessarily go hand in hand. There are
many people who believe that it could be possible to
construct an artificial intelligence or an artificial consciousness independently of
one another. When we come back, I'll explain more, but
first let's take a quick break. So, in a very
(15:40):
general sense, the group of hypotheses that fall into the
integrated information theory umbrella state that consciousness emerges through linking
elements in our brains. These would be neurons processing large
amounts of information, and that it's the scale of this
endeavor that then leads to consciousness. In other words, if
(16:03):
you have enough processors working on enough information and they're
all interconnected with each other and it's very complicated, bang,
you get consciousness. Now, it is clear our brains process
a lot of information. If you do a search in
textbooks or online, you'll frequently encounter the stat that our brains
(16:24):
have around one hundred billion neurons in them and ten
times as many glial cells. Neurons are like the processors
in a computer system, and glial cells would be the
support systems and insulators for those processors. Anyway, those numbers
have since come under some dispute. An associate professor
at Vanderbilt University named Suzana Herculano-Houzel explained that
(16:49):
the old way of estimating how many neurons the brain
had appeared to be based on taking slices of the brain,
estimating the number of neurons in that slice, and then
kind of extrapolating that number to apply across the brain
in general. But that ignores stuff like the density of
cells and the distribution of the cells across the brain.
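Here's a toy back-of-the-envelope sketch of why that matters, with completely made-up numbers: if neuron density varies between regions, scaling one slice's density across the whole brain can miss badly.

```python
# Toy numbers, purely illustrative: two brain "regions" with very
# different neuron densities (neurons per unit of volume).
regions = {
    "region_a": {"density": 20_000, "volume": 1000},
    "region_b": {"density": 250_000, "volume": 150},
}

# Region-aware total: sum density times volume over every region.
true_total = sum(r["density"] * r["volume"] for r in regions.values())

# Slice-based estimate: sample one region's density and scale it
# across the brain's entire volume.
whole_volume = sum(r["volume"] for r in regions.values())
extrapolated = regions["region_a"]["density"] * whole_volume

print(f"region-aware count: {true_total:,}")     # 57,500,000
print(f"slice extrapolation: {extrapolated:,}")  # 23,000,000
```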
(17:10):
So what she did, and this also falls into the
category of Halloween horror stories, she took a brain and
she freaking dissolved it. She could then get a count of
the neuronal nuclei that were in the soupy mix. By
her accounting, the brain has closer to eighty six billion
(17:32):
neurons and just as many glial cells. Still a lot
of cells, mind you. But you gotta admit it's a
bit of a blow to lose fourteen billion neurons overnight. Still,
we're talking about billions of neurons that interconnect through an
incredibly complex system in our brains, with different regions of
the brain handling different things. And so, yeah, we're processing
(17:54):
a lot of information all the time, and we do
happen to be conscious. So could it be possible that
with a sufficiently powerful computer system, perhaps made up of
hundreds or thousands or tens of thousands of individual computers,
each with hundreds of processors that you could end up
with an emergent consciousness, or, as some people have proposed,
(18:17):
could the Internet itself become conscious due to the fact
that it is an enormous system of interconnected nodes that's
pushing around incredible amounts of information? Well, maybe. Maybe it's possible.
But here's the kicker. This theory doesn't actually explain the
mechanism by which the consciousness emerges. See, it's one thing
(18:41):
to process information, it's another thing to be aware of
that experience. So when I perceive a color, I'm not
just perceiving a color. I'm aware that I'm experiencing that color.
Or to put it in another way, I can relate
something to how it makes me feel, or some other
subjective experience that's personal to me. So a machine might
(19:04):
objectively be able to return data about stuff like what
is the color of a piece of paper. It analyzes
the light that's being reflected off that piece of paper
and compares that light to a spectrum of colors. But
that's still not the same thing as having the subjective
experience of perceiving the color. And there may well be
some connection between the complexity of the interconnected neurons in
(19:26):
our brains and the amount of information that we're processing
and our sense of consciousness. But the theory doesn't actually
explain what that connection is. It's more like saying, hey,
maybe this thing we have, this consciousness experience, is also
linked to this other thing, without actually making the link
(19:47):
between the two. It appears to be correlative but not
necessarily causal. To relate that to our personal experience, imagine
that you've just poofed into existence. You have no prior
knowledge of the world, or the physics in that world,
or basic stuff like that, so you're drawing conclusions about
the world around you based solely on your observations as
(20:11):
you wander around and do stuff. And at one point
you see an interesting looking rock on the path, so
you bend over and you pick up the rock, and
when you do, it starts to rain, and you think, well,
maybe I caused it to rain because I picked up
this rock. And maybe it happens a few times where
you pick up a rock and it starts to rain,
(20:31):
which seems to support your thesis. But does that mean
you're actually causing the effects that you are observing? If so,
what is it about picking up the rock that's making
it rain. Now, even in this absurd case that I'm making,
you could argue that if there's never an instance in
which picking up the rock wasn't immediately followed by rain,
(20:53):
there's a lot of evidence to suggest the two are linked,
but you still can't explain why they are linked. Why
has one caused the other? And that's a problem because
without that piece, you're never really totally sure that you're
on the right track. That's kind of where we are
with consciousness. We've got a lot of ideas about what
(21:14):
makes it happen, but those ideas are mostly missing key
pieces that explain why it's happening. Now, it's possible that
we cannot reduce consciousness any further than we already have,
and maybe that means we never really get a handle
on what makes it happen. It's also possible that we
could facilitate the emergence of consciousness in machines without knowing
(21:36):
how we did it. Essentially, that would be like stumbling
upon the phenomenon by luck. We just happened to create
the conditions necessary to allow some form of artificial consciousness
to emerge. Now, I think this might be possible, but
it strikes me as a long shot. I think of
it like being locked in a dark warehouse filled with
(21:57):
every mechanical part you can imagine, and you start trying
to put things together in complete darkness, and then the
lights come on and you see that you have created
a perfect replica of an F fifteen fighter jet. Is
that possible? Well, I mean, yeah, I guess, but it
seems overwhelmingly unlikely. But again, this is based off ignorance.
(22:19):
It's based off the fact that it hasn't happened yet,
so I could be totally wrong here. Now, on the
flip side of that, programmers, engineers, and scientists have created
computer systems that can process information in intricate ways to
come up with solutions to problems that seem, at least
at first glance, to be similar to how we humans think.
(22:41):
We even have names for systems that reflect biological systems,
like artificial neural networks. Now the name might make it
sound like it's a robot brain, but it's not quite that. Instead,
it's a model for computing in which components in the
system act kind of like neurons. They're interconnected and each
one does a specific process. The nodes in the computer
(23:05):
system connect to other nodes. So you feed the system
input, whatever it is you want to process, and then
the nodes that accept that input perform some form of
operation on it and then send that resulting data,
the answer after they've processed this information, on to other nodes
(23:26):
in the network. It's a nonlinear approach to computing, and
by adjusting the processes each node performs, which is also
known as adjusting the weights of the nodes, you
can tweak the outcomes. Now, this is incredibly useful. If
you already know the outcome you want, you can tweak
the system so that it learns or is trained to
(23:47):
recognize something specific. For example, you could train a computer
system to recognize faces, so you would feed it images.
Some of the images would have faces in them, some
would not have faces in them. Some might have something
that could be a face, but it's hard to tell.
Maybe it's a shape in a picture that looks kind
of like a face, but it's not actually someone's face. Anyway,
(24:09):
you train the computer model to try and separate the
faces from the non-faces, and it might take many
iterations to get the model trained up using your starting data,
your training data. Now, once you do have your computer
model trained up, you've tweaked all the nodes so that
it is reliably producing results that say, yes, this is
(24:30):
a face or no, this isn't. You can now feed
that same computer model brand new images that it has
never seen before, and it can perform the same functions.
You have taught the computer model how to do something.
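For a rough sense of what "tweaking the weights" looks like in code, here's a minimal sketch of training a single artificial neuron with gradient descent. It's a bare-bones stand-in for the face/non-face setup, using made-up toy data instead of real images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for images: 200 tiny 4-"pixel" inputs with made-up
# labels (1 for "face", 0 for "not a face").
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(4)   # the weights we will repeatedly adjust
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Many iterations over the training data, nudging the weights so the
# node's output drifts toward the labels -- the "training up".
for _ in range(1000):
    p = sigmoid(X @ w + b)            # current guesses, between 0 and 1
    grad_w = X.T @ (p - y) / len(y)   # gradient of the prediction error
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w                 # adjust the weights
    b -= 0.5 * grad_b

# The trained model can now label brand-new inputs it has never seen.
X_new = rng.normal(size=(5, 4))
print(sigmoid(X_new @ w + b).round())
```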
But this isn't like spontaneous intelligence, and it's not connected
to consciousness. You couldn't really call it thinking so much
(24:52):
as just being trained to recognize specific patterns pretty well. Now,
that's just one example of putting an artificial neural
network to use. There are lots of others, and there
are also systems like IBM's Watson, which also appears,
at, you know, a casual glance, to think. This was helped
in no small part by the very public display of
(25:14):
Watson competing on special episodes of Jeopardy, where it went
up against human opponents who were former Jeopardy champions themselves.
Watson famously couldn't call upon the Internet to search for answers.
All the data the computer could access was self contained
in its undeniably voluminous storage, and the computer had to
(25:36):
parse what the clues in Jeopardy were actually looking for
and then come up with an appropriate response. And to make
matters more tricky, the computer wasn't returning a guaranteed right answer.
The computer had to come to a judgment on how
confident it was that the answer it had arrived at
was the correct one. If the confidence met a certain threshold,
(25:57):
then Watson would submit an answer. If it did not
meet that threshold, Watson would remain silent.
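That buzz-or-stay-silent logic is easy to sketch. Here's a hedged toy version in Python; the candidate-scoring function is a made-up placeholder, not IBM's actual pipeline.

```python
def answer_clue(clue, score_candidates, threshold=0.7):
    """Answer only when confident enough, otherwise stay silent.

    score_candidates is a hypothetical function that maps a clue to
    a list of (candidate_answer, confidence) pairs -- a stand-in for
    Watson's real evidence-scoring machinery.
    """
    candidates = score_candidates(clue)
    if not candidates:
        return None                  # nothing plausible: stay silent
    best, confidence = max(candidates, key=lambda c: c[1])
    if confidence >= threshold:
        return best                  # confident enough: buzz in
    return None                      # below threshold: stay silent
```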
It's a remarkable achievement, and it has lots of potential applications, many of which
are actually in action today. But it's still not quite
at the level of a machine thinking like a human,
and I don't think anyone at IBM would suggest that
(26:17):
it possesses any sense of consciousness. When we come back,
I'll talk about a famous thought experiment that really starts
to examine whether or not machines could ever attain intelligence
and consciousness. But first let's take another quick break. And
(26:40):
now this brings me to a famous thought experiment proposed
by John Searle, a philosopher who questioned whether we could
say a machine, even one so proficient that it could deliver
reliable answers on demand, would ever truly be intelligent, at
least on a level similar to what we humans identify
(27:01):
as being intelligent. It's called the Chinese room argument, which
Searle included in his article titled Minds, Brains, and Programs
for the journal Behavioral and Brain Sciences. Here's the premise
of the thought experiment. Imagine that you are in a
simple room. The room has a table and a chair.
(27:23):
There's a ream of blank paper, there's a brush, there's
some ink, and there's also a large book within the
room that contains pairs of Chinese symbols.
Oh, and we also have to imagine that you don't
understand or recognize these Chinese symbols. They mean nothing to you.
There's also a door to the room, and the door
(27:45):
has a mail slot, and every now and again someone
slides a piece of paper through the slot. The piece
of paper has one of those Chinese symbols printed on it.
And it's your job to go through the book and
find the matching symbol in the book, plus the corresponding
symbol in the pair, because remember I said there were
symbols that were paired together. You then take a blank
(28:08):
sheet of paper, you draw the corresponding symbol from that
pair onto the sheet of paper, and finally you slip
that piece of paper through the mail slot, presumably to
the person who gave you the first piece of paper
in the original part of this problem.
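Stripped to its bones, the room's rule book is just a lookup table. Here's a minimal sketch; the symbol pairs are placeholders, since the thought experiment doesn't specify any.

```python
# The "book": pairs of symbols. Like the person in the room, this
# program matches the incoming symbol and copies out its partner
# with no idea what either symbol means.
RULE_BOOK = {
    "你好": "嗨",        # placeholder pairs, purely illustrative
    "谢谢": "不客气",
}

def chinese_room(symbol):
    # Pure rule-following: look up the input, return the paired output.
    return RULE_BOOK.get(symbol)

print(chinese_room("你好"))  # produces a sensible-looking reply,
                             # yet "understands" nothing
```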
So to an outside observer, let's say it's actually the person who's slipping
(28:29):
the piece of paper to you, it would seem that
whoever is inside the door actually understands Chinese symbols. They
can recognize the significance of whatever symbol
was sent in through the mail slot, and then match
it to whatever the corresponding data is for that particular symbol,
(28:51):
and then return that to the user. So to the
outside observer, it appears as though whatever is inside the
room comprehends what it is doing. But argues Searle, that's
only an illusion because the person inside the room doesn't
know what any of those symbols actually means. So, if
this is you, you have no context. You don't
(29:13):
know what any individual symbol stands for, nor do you
understand why any symbol would be paired with any other symbol.
You don't know the reasoning behind that. All you have
is a book of rules, but the rules only state
what your response should be given a specific input, the
rules don't tell you why, either on a granular level
(29:35):
of what the symbols actually mean or on a larger
scale when it comes to what you're actually accomplishing in
this endeavor. All you are doing is performing a physical
action over and over based on a set of rules
you don't understand. And Searle then uses this argument to
say that essentially we have to think the same way
about machines. The machines process information based on the input
(29:59):
they receive and the program that they are following.
That's it. They don't have awareness or understanding of what
the information is. Searle was taking aim at a particular
concept in AI, often dubbed strong AI or general AI.
It's a sort of general artificial intelligence. So it's something
(30:20):
that we could or would compare directly to human intelligence,
even if it didn't work the same way as our
intelligence works. The argument is that the capacity and the
outcomes would be similar enough for us to make the comparison.
This is the type of intelligence that we see in
science fiction doomsday scenarios where the machines have rebelled against humans,
(30:42):
or the machines appear to misinterpret simple requests or the
machines come to conclusions that, while logically sound, spell doom
for us all. The classic example of this, by the way,
is appealing to a super smart artificial intelligence and saying,
could you please bring about world peace, because we're
all sorts of messed up, and the intelligence processes this
(31:05):
and then concludes that as long as there are at least two humans,
there can never be a guarantee of peace because there's
always the opportunity for disagreement and violence between two humans,
and so to achieve true peace, the computer then goes
on a killing spree to wipe out all of humanity. Now,
Searle is not necessarily saying that computers won't contribute to
(31:29):
a catastrophic outcome for humanity. Instead, he's saying they're not
actually thinking or processing information in a truly intelligent way.
They are arriving at outcomes through a series of processes
that might appear to be intelligent at first glance, but
when you break them down, they all reveal themselves to
be nothing more than a very complex series of mathematical processes.
(31:52):
You can even break it down further into binary and
say that ultimately each apparent decision would just be a
particular sequence of switches that are in the on or
off position, and the state of each switch would be
determined by the input and the program you were running,
not some intelligent artificial creation that is reasoning through a problem. Essentially,
(32:15):
Searle's argument boils down to the difference between syntax and semantics.
Syntax would be the set of rules that you would
follow with those symbols. For example, in English, the letter
Q is nearly always followed by the letter U. The
few exceptions to this rule mostly involve romanized words from
(32:38):
other languages, in which the letter Q represents a
sound that's not natively present in English. So you could
program a machine to follow the basic rule that the
symbol Q should be followed by the symbol U, assuming
you're eliminating all those exceptions I just mentioned. But that
doesn't lead to a grasp of semantics, which is actual meaning.
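You can capture that rule in a couple of lines, and the program will apply it flawlessly while knowing nothing about English. A toy sketch, ignoring the loanword exceptions just mentioned:

```python
def follows_q_rule(word):
    # Pure syntax: every "q" must be immediately followed by a "u".
    w = word.lower()
    return all(w[i + 1:i + 2] == "u"
               for i, ch in enumerate(w) if ch == "q")

print(follows_q_rule("question"))  # True
print(follows_q_rule("qatar"))     # False, a romanized exception
```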
(33:02):
Moreover, Searle asserts that it's impossible to come to a grasp
of semantics merely through a mastery of syntax. You might
know those rules flawlessly, but Searle argues, you still wouldn't
understand why there are rules, or what the output of
those rules means, or even what the input means. There
are some general counter arguments that philosophers have made to
(33:25):
Searle's thought experiment, and according to the Stanford Encyclopedia of Philosophy,
which is a phenomenal resource, though it's also incredibly dense,
these counterarguments tend to fall into three groups. The
first group agrees with Searle that the person inside the
room clearly has no understanding of the Chinese symbols. But
(33:47):
the group counters the notion that the system as a
whole can't understand it. In fact, they say the opposite.
They say, yes, the person inside the room doesn't understand,
but you're looking at just a single component of
a larger system. And if we consider the system, or
maybe a virtual mind that exists due to the system,
(34:09):
that does have an understanding. This is sort of like
saying a neuron in the brain doesn't understand anything. It
sends along signals that collectively and through mechanisms we don't
fully understand, become thoughts that we can become conscious of.
So in this argument, the person in the room is
just a component of an overall system, and the system
(34:30):
possesses intelligence even if the component does not. The second
group argues that if the computer system either could simulate
the operation of a brain, perhaps with billions of nodes,
approaching the complexity of a human brain with billions of neurons,
or if the system were to inhabit a robotic body
that could have direct interaction with its environment, then the
(34:53):
system could manifest intelligence. The third group rejects Searle's arguments
more thoroughly, on various grounds, ranging
from Searle's experiment being too narrow in scope to an
argument about what the word understand actually means. This is
where things get a bit more loosey-goosey. And sometimes
(35:14):
I feel like arguments in this group amount to oh yeah,
but again, I'm pragmatic, so I tend to have a
pretty strong bias against these arguments, and I recognize that
this means I'm not giving them fair consideration because of
those biases. A few of these arguments take issue with
Searle's assertion that one cannot grasp semantics through an understanding
(35:35):
of syntax. And here's something that I find really interesting.
Searle originally published this argument way back in nineteen eighty. It's
been nearly forty years since he first proposed it, and
to this day there is no consensus on whether or
not his argument is sound. So why is that? Well,
(35:55):
it's because, as I've covered in this episode, the concepts
of intelligence and, more to the point, consciousness are wibbly wobbly,
though not, as far as I can tell, timey wimey.
When we can't even nail down specific definitions for words
like understand, it becomes difficult to even tell when we're
agreeing or disagreeing on certain topics. It could be that
(36:18):
while people are in a debate and are using words
in different ways, it turns out they're actually in agreement
with one another. Such is the messiness that is intelligence. Further,
we've not yet observed anything in the machine world that seems,
upon closer examination, to reflect true intelligence and consciousness, at
(36:39):
least as the way we experience it. In fact, we
can't say that we've seen any artificial constructs that have
experienced anything, because, as far as we know, no such
device has any awareness of itself. Now, I'm not sure
if we'll ever create a machine that will have true
intelligence and consciousness, using the word true here to mean
(37:01):
humanlike. But I feel pretty confident that if it
is possible, we will get around to it eventually. It
might take way more resources than we currently estimate, or
maybe it will just require a different computational approach. Maybe
it'll rely on bleeding edge technologies like quantum computing. I figure,
if it's something we can do, we will do it.
(37:24):
It's just a question of time, really, And further, it's
hard for me to come to a conclusion other than
it will ultimately prove possible to make an intelligent, conscious construct. Now,
I believe that because I believe our own intelligence and
our own consciousness are firmly rooted in our brains. I
(37:46):
don't think there's anything mystical involved. And while we don't
have a full picture of how it happens in our brains,
we at least know that it does happen, and we
know some of the questions to ask and have some
ideas on how to search for answers. It's not a
complete picture, and we still have a very long way
to go, but I think, if it's possible to
(38:06):
build a full understanding of how our brains work with
regard to intelligence and consciousness, we'll get there too, sooner
or later. Probably later. I suppose there's still the chance
that we could create an intelligent and or conscious machine
just by luck or accident. And while I intuitively feel
(38:28):
that this is unlikely, I have to admit that intuition
isn't really reliable in these matters. It feels to me
like it is the longest of long shots, but that's
entirely based on the fact that we haven't managed to
do it up until and including now. Maybe the
right sequence of events is right around the corner. Just
(38:49):
because it hasn't happened yet doesn't mean it can't or
won't happen at all. And it's good to remember that
machines don't need to be particularly intelligent or conscious
to be useful or potentially dangerous. We can see examples
of that playing out already with devices that have some
limited or weak AI. And by limited I mean it's
(39:13):
not general intelligence. I don't mean that the AI itself
is somehow unsophisticated or primitive. So it may not even
matter if we never create devices that have true or
human like intelligence. We might be able to accomplish just
as much with something that does not have those capabilities.
In other words, this is a very complicated topic,
(39:36):
one that I think gets oversimplified in a lot of
fiction and also just a lot of speculative prognostications about
the future. I mean, you'll see a lot of videos
about how in the future AI is going to perform
a more intrinsic role, or maybe it will be an
existential threat to humanity or whatever it may be. And
(39:58):
I think a lot of that is predicated upon
a deep misunderstanding or underestimation of how complicated cognitive neuroscience
actually is and how little we really understand when it
comes to our own consciousness, let alone how we would
bring about such a thing in a different device. What
(40:19):
do you guys think? And do you think that maybe
I'm overstating the complexity? Do you think that I'm off base?
Do you agree with me? And do you have any
other topics you would like me to cover. I invite
you to let me know. Send me an email. The
address is tech stuff at how stuff works dot com,
or drop me a line on Facebook or Twitter. The
(40:39):
handle at both of those is TechStuff HSW.
Don't forget to go to our website that's tech Stuff
podcast dot com. That's where we have an archive of
all of our past episodes, as well as a link
to our online store, where every purchase you make goes
to help the show. We greatly appreciate it, and I
will talk to you again really soon. Tech Stuff is
(41:03):
a production of iHeartRadio's How Stuff Works. For
more podcasts from iHeartRadio, visit the iHeartRadio
app, Apple Podcasts, or wherever you listen to your
favorite shows.