
January 8, 2024 41 mins

Is it really possible for a machine to achieve consciousness? What does consciousness even mean? From philosophy to technological obstacles, we look at the problems and possibilities.

 


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. Hey there,
and welcome to TechStuff. I'm your host, Jonathan Strickland.
I'm an executive producer with iHeartRadio, and I love all
things tech. And I owe you guys an apology: you're
having a rerun episode today. So this is a

(00:27):
rerun from twenty nineteen about machine consciousness, and it's again
one of those tricky concepts. Consciousness is a difficult thing
to define, even for humans. So sit back and enjoy
this rerun. I'll talk to you again at the end
of the episode. There's a topic I have touched on
on several occasions in past episodes, but I really wanted

(00:51):
to dig down today into this topic because it's one
of those that's fascinating and is an underpinning for tons
of speculative fiction and horror stories. And since we're now
in October, I figured this would be kind of thematically
linked to Halloween, TechStuff style. It turns out it's pretty
hard to do Halloween technology stories. I've already covered stuff

(01:13):
like haunted house technology. So today we're going to talk
about consciousness and whether or not it might be possible
that machines could one day achieve consciousness. Now, I could
start this off by talking about the Turing test, which
many people have used as the launch point for machine
intelligence and machine consciousness debates. The way we understand that

(01:37):
test today, which by the way, is slightly different from
the test that Alan Turing first proposed, is that you
have a human interviewer who, through a computer interface, asks
questions of a subject, and the subject might be another human,
or it might be a computer program posing as a human,
and the interviewer just sees text on a screen. So

(02:00):
if the interviewer is unable to pass a certain threshold
of being able to tell the difference, to be able
to determine whether it was a machine or a person,
then the program or machine that's being tested is said
to have passed the Turing test. It doesn't mean the
program or machine is conscious or even intelligent, but rather
says that to outward appearances, it seems to be intelligent

(02:25):
and conscious.
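
To make that concrete, here's a minimal sketch in Python of that modern reading of the test. Everything here is hypothetical scaffolding rather than anything Turing specified: the interrogator, the human, and the machine are stand-in callables, and a real evaluation would average over many judges and many trials rather than hanging on a single guess.

import random

def imitation_game(ask_question, human_reply, machine_reply, rounds=5):
    # Randomly seat either the human or the machine behind the interface.
    subject_is_machine = random.random() < 0.5
    reply = machine_reply if subject_is_machine else human_reply
    for _ in range(rounds):
        question = ask_question()   # the interrogator types a question
        print(reply(question))      # and sees only text come back
    guess = input("Was that a machine or a human? ")
    # The machine "passes" this round if it played and was judged human.
    return subject_is_machine and guess.strip().lower() == "human"

See, we humans can't be absolutely sure that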
other humans are conscious and intelligent. We assume that they
are because each of us knows of our own consciousness
and our own intelligence. We have personal experience with
that, direct personal experience, and other people seem to display

(02:47):
behaviors that indicate they too possess those traits, and they
too have a personal experience. But we cannot be those
other people, and so we have to grant them the
consideration that they too are conscious and intelligent. And I
agree that is very big of us. This is actually
called the problem of other minds in the field of philosophy,

(03:12):
and the problem is this, it is impossible for any
one of us to step outside of ourselves and into
any other person's consciousness. We cannot feel what other people
are feeling or experience their thoughts firsthand. We are aware
of our own abilities, but we are only aware of
the appearance that other people share those abilities. So assuming

(03:36):
that other people also experience consciousness rather than imitating it
really, really well, that's a step we all have to take.
Turing's point is that if we do grant that consideration
to other people, why would we not do it to
machines as well. I mean, the machine appears to possess

(03:57):
the same qualities as a human. This is a hypothetical machine,
so we cannot experience what that machine is going through,
just as we can't experience what another person is going through.
At least not on the intrinsic personal level. So why
would we not grant the machine the same consideration that
we would grant to people? And Turing was being a

(04:19):
little cheeky. But while I just gave kind of a
super fast, high level description of the Turing test, that's
not actually where I want to start. I want to
begin with the concept of consciousness itself. Now, the reason
I want to do this isn't just to make a
longer podcast. It's because I think one of the most
fundamental problems with the discussion about AI intelligence, self awareness,

(04:44):
and consciousness is that there tends to be a pretty
large disconnect between the biologists and the doctors who specialize
in neuroscience, particularly cognitive neuroscience, and thus have some understanding
about the nature of consciousness in people. And then you
have computer scientists who have a deep understanding of how
computers process information. And while we frequently will compare brains

(05:07):
to computers, that comparison is not one to one. It
is largely a comparison of convenience, and in some cases
you could argue it's not terribly useful, it might actually
be counterproductive. And so I think at least some of
the speculation about machine consciousness is based on a lack
of understanding of how complicated and mysterious this topic is

(05:30):
in the first place, and this ends up being really tricky.
Consciousness isn't an easily defined quality or quantity. Some people
like to say, we don't so much define consciousness by
what it is, but rather what it isn't. And this
will also kind of bring us into the realm
of philosophy. Now, I'm going to be honest with you, guys.

(05:52):
The realm of philosophy is not one I'm terribly comfortable in.
I'm pretty pragmatic, and philosophy deals with a lot of
stuff that is, at least for now, unknowable. Philosophy sometimes
asks questions that we do not and cannot have the
answer to, and in many cases we may never be
able to answer those questions. And the pragmatist in me says, well,

(06:16):
why bother asking the question if you can never get
the answer. Let's just focus on the stuff we actually
can answer. Now, I realize this is a limitation on
my part. I'm owning that I'm not out to upset
the philosophical apple cart. I'm just of a different philosophical bent,
and I realize that just because we can't answer some

(06:38):
questions right now, that doesn't necessarily mean they will all
go unanswered for all time. We might glean a way
of answering at least some of them, though I suspect
a few will be forever unanswered. If we go with
the basic dictionary definition of consciousness, it's, quote, the state
of being awake and aware of one's surroundings, end quote. But

(07:02):
what this doesn't tell us is what's going on that
lets us do that. It also doesn't talk about being
aware of oneself, which we largely consider to be part
of consciousness. It's not just being aware of your surroundings, but
aware that you exist within those surroundings, of your relationship to
your surroundings, and of things that are going on within you yourself,

(07:26):
your feelings, and your thoughts. The fact that you can
process all of this, you can reflect upon yourself. We
tend to group that into consciousness as well. So how
is it that we can feel things and be aware
of those feelings? How is it that we can have
intentions and be aware of our intentions. We are more

(07:46):
complex than beings that simply react to sensory input. We
are more than beings that respond to stuff like hunger, fear,
or the desire to procreate. We have motivations, sometimes really
complex motivations, and we can reflect on those, we can
examine them, we can question them, we can even change them.

(08:06):
So how do we do this? Now, we know this
is special because some of the things we can do
are shared among a very few species on Earth. For example,
we humans can recognize our own reflections in a mirror.
Starting at around age two or so, we can see
the mirror image and we recognize the mirror image is

(08:29):
of us. Now, there are only eight species that can
do this that we know about, anyway. Those species are
the great apes, so you've got humans, gorillas, orangutans, bonobos,
and chimpanzees, plus the magpie and the dolphin, and that's it. Oh,
and the magpies are birds, right. That's all of them.

(08:51):
Recognizing one's own form in a mirror shows a sense
of self awareness, literally, of awareness of one's self. There
are a lot of great resources online and offline that
go into the theme of consciousness. Heck, there are numerous
college level courses and graduate level courses dedicated to this topic.

(09:12):
So I'm not going to be able to go into
all the different hypotheses, arguments, counter arguments, etc. In this episode,
but I can cover some basics. Also, I highly recommend
you check out Vsauce's video on YouTube that's titled
What Is Consciousness?, because it's really good. And no, I

(09:32):
don't know Michael. I have no connection to him. I've
never met him. This is just an honest recommendation from me,
and I have no connection whatsoever to that video series.
The video includes a link to what Vsauce dubs
a leanback, which is a playlist of related videos
on the subject at hand, in this case, consciousness. Those
are also really fascinating. But I do want to point

(09:55):
out that, at least at the time of this recording,
a couple of the videos in that playlist have since been
delisted from YouTube for whatever reason, So there are a
couple of blank spots in there. But what those videos show,
and what countless papers and courses and presentations also show,
is that the brain is so incredibly complex and nuanced

(10:16):
that we don't know what we don't know. We do
know that there are some pretty funky things going on
in the gray matter up in our noggins, and we
also know that many of the explanations given to describe
consciousness rely upon some assumptions that we don't have any
substantial evidence for. You can't really assert something to be

(10:38):
true if it's based on a premise that you also
don't know to be true. That's not how good science works.
This is also why I reject the arguments around stuff
like ghost hunting equipment. The use of that equipment is
predicated on the argument that ghosts exist and they have
certain influences on their environment. But we haven't proven that

(11:00):
ghosts exist in the first place, let alone that they
can affect the environment. So selling a meter that supposedly
detects a ghostly presence from electromagnetic fluctuations makes no logical sense.
For us to know that to be true, we would
already have to have established that, one, ghosts are real,
and, two, that they have these electromagnetic fluctuation effects, and

(11:22):
we haven't done that. It's like working science in reverse.
That's not how it works. Anyway, there are a lot
of arguments about consciousness that suggest perhaps there's some ineffable
force that informs it. You can call it the spirit
or the soul or whatever. So that argument suggests that
this thing we've never proven to have existed is what

(11:44):
gives rise to consciousness, and that's a problem. We can't
really state that. I mean, you can't say the reason
this thing exists is that this other thing, which we've never
proven to exist, makes it exist. With that, you've just
made it harder to even prove anything. And we have
evidence that also shows that that whole idea doesn't hold water.

(12:07):
The evidence comes in the form of brain disorders, brain diseases,
and brain damage. We have seen that disease and damage
to the brain affects consciousness, which suggests that consciousness manifests
from the actual form and function of our brains, not
from any mysterious force. Our ability to perceive, to process information,

(12:29):
to have an understanding of the self, to have an
accurate reflection of what's going on around us within our
own conceptual reality, all of that appears to be predicated
primarily upon the brain. Now, originally I was planning to
give a rundown on some of the prevailing theories about consciousness.
In other words, I wanted to summarize the various schools

(12:50):
of thought about how consciousness actually arises. But as I
dove down into the research, it became apparent really quickly
that such a discussion would require so much groundwork and
more importantly, a much deeper understanding on my part than
would be practical for this podcast. So instead of talking

(13:12):
about the higher order theory of consciousness versus the global
workspace theory versus integrated information theory, I'll take a step
back and I'll say there's a lot of ongoing debate
about the subject, and no one has conclusively proven that
any particular theory or argument is most likely true. Each

(13:32):
theory has its strengths and its weaknesses, and complicating matters
further is that we haven't refined our language around the
concepts enough to differentiate various ideas. That means you can't
talk about an organism being conscious of something as if that
degree of consciousness were somehow inherently specific; it's not. That's

(13:54):
the issue. So, for example, I could say a rat
is conscious of a rat terrier, a type of dog that hunts
down rats, and so as a result of this consciousness
of the rat terrier, the rat attempts to remain hidden
so as not to be killed. But does that mean
the rat merely perceives the rat terrier and thus is
trying to stay out of its way, and that's as

(14:15):
far as the consciousness goes. Or does it mean that
the rat actually has a deeper, more meaningful awareness of
the rat terrier? The language is in much help here,
and moreover, there's debate about what degrees of consciousness there
even are. Also, while I've been harping on consciousness, that's
not the only concept we have to consider. Another is intelligence,

(14:38):
which is distinct from consciousness, though there are some similarities.
Like consciousness, intelligence is predicated upon brain functions. Again, a
long history of investigating brain disorders and brain damage indicates this,
as it can affect not just consciousness but also intelligence.
So what is intelligence? Well, get ready for this: like consciousness,

(15:02):
there's no single agreed upon definition or theory of intelligence.
In general, we use the word intelligence to describe the
ability to think, to learn, to absorb knowledge, and to
make use of it to develop skills. Intelligence is what
allowed humans to learn how to make basic tools, to
gain an understanding of how to cultivate plants and develop agriculture,

(15:24):
to develop architecture, to understand mathematical principles and all sorts
of stuff. So in humans, we tend to lump consciousness
and intelligence together. We tend to think in terms of
being intelligent and being self aware, but the two need
not necessarily go hand in hand. There are many people
who believe that it could be possible to construct an

(15:45):
artificial intelligence or an artificial consciousness independently of one another.
When we come back, I'll explain more, but first let's
take a quick break. So, in a very general sense,

(16:06):
the group of hypotheses that fall into the integrated information
theory umbrella state that consciousness emerges through linking elements in
our brains. These would be neurons processing large amounts of information,
and that it's the scale of this endeavor that then
leads to consciousness. In other words, if you have enough

(16:28):
processors working on enough information and they're all interconnected with
each other and it's very complicated, bang, you get consciousness. Now,
it is clear our brains process a lot of information.
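
For what it's worth, the integrated information camp does try to make "enough interconnected processing" mathematically precise. Treat the following as a rough, hedged paraphrase rather than the theory's exact formalism: Tononi's measure, usually written Phi, is the effective information EI across the partition that divides the system the least, the so-called minimum information partition:

\Phi(S) = \min_{P \in \mathcal{P}(S)} \mathrm{EI}(S / P)

Here \mathcal{P}(S) ranges over bipartitions of the system S. A system whose two halves carry on independently scores Phi = 0 no matter how much raw information each half churns through; only integrated processing counts.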
If you do a search in textbooks or online, you'll
frequently encounter the stat that our brains have around one

(16:49):
hundred billion neurons in them and ten times as many
glial cells. Neurons are like the processors in a computer system,
and glial cells would be the support systems and insulators
for those processors. Anyway, those numbers have since come under
some dispute. An associate professor at Vanderbilt University named

(17:09):
Suzana Herculano-Houzel explained that the old way of
estimating how many neurons the brain had appeared to be
based on taking slices of the brain, estimating the number
of neurons in that slice, and then kind of extrapolating
that number to apply across the brain in general. But
that ignores stuff like the density of cells and the

(17:32):
distribution of the cells across the brain. So what she did,
and this also falls into the category of Halloween horror stories,
is she took a brain and she freaking dissolved it.
She could then get a count of the neuron nuclei that
were in the soupy mix. By her accounting, the brain

(17:53):
has closer to eighty six billion neurons and just as
many glial cells. Still a lot of cells, mind you,
but you've got to admit it's a bit of a
blow to lose fourteen billion neurons overnight.
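
As a quick aside, here's a toy sketch in Python of why the old slice-and-extrapolate method can mislead. The numbers are invented purely for illustration; the point is just that if your sampled slice happens to come from a dense region, multiplying its density across the whole brain overshoots badly:

# Invented figures: (neurons per milliliter, volume in milliliters)
regions = {
    "dense region": (5.0e8, 100.0),
    "sparse region": (5.0e7, 1000.0),
}

true_total = sum(density * volume for density, volume in regions.values())

# Naive method: measure a slice of the dense region and assume the
# whole brain looks like that slice.
slice_density = regions["dense region"][0]
total_volume = sum(volume for _, volume in regions.values())
naive_total = slice_density * total_volume

print(f"true: {true_total:.2e}, extrapolated: {naive_total:.2e}")
# true: 1.00e+11, extrapolated: 5.50e+11 -- over five times too high

Herculano-Houzel's dissolve-and-count approach sidesteps that sampling problem entirely. Still, we're talking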
about billions of neurons that interconnect through an incredibly complex
system in our brains, with different regions of the brain

(18:15):
handling different things, and so, yeah, we're processing a lot
of information all the time, and we do happen to
be conscious. So could it be possible that with a
sufficiently powerful computer system, perhaps made up of hundreds or
thousands or tens of thousands of individual computers, each with
hundreds of processors, that you could end up with an

(18:38):
emergent consciousness, Or, as some people have proposed, could the
Internet itself become conscious due to the fact that it
is an enormous system of interconnected nodes that is pushing
around incredible amounts of information. Well, maybe. Maybe it's possible.
But here's the kicker. This theory doesn't actually explain the

(19:00):
mechanism by which the consciousness emerges. See, it's one thing
to process information, it's another thing to be aware of
that experience. So when I perceive a color, I'm not
just perceiving a color. I'm aware that I'm experiencing that color.
Or to put it another way, I can relate

(19:21):
something to how it makes me feel, or some other
subjective experience that is personal to me. So a machine
might objectively be able to return data about stuff like
what the color of a piece of paper is. It
analyzes the light that's being reflected off that piece of paper,
it compares that light to a spectrum of colors. But
that's still not the same thing as having the subjective

(19:43):
experience of perceiving the color. And there may well be
some connection between the complexity of the interconnected neurons in
our brains and the amount of information that we're processing
and our sense of consciousness, but the theory doesn't actually
explain what that connection is. It's more like saying, hey,
maybe this thing we have, this consciousness experience, is also

(20:07):
linked to this other thing, without actually making the link
between the two. It appears to be correlative, but not
necessarily causal. To relate that to our personal experience, imagine
that you've just poofed into existence. You have no prior
knowledge of the world, or the physics in that world,
or basic stuff like that, so you're drawing conclusions about

(20:31):
the world around you based solely on your observations as
you wander around and do stuff. And at one point
you see an interesting looking rock on the path, so
you bend over and you pick up the rock, and
when you do, it starts to rain, and you think, well,
maybe I caused it to rain because I picked up
this rock. And maybe it happens a few times where

(20:53):
you pick up a rock and it starts to rain,
which seems to support your thesis. But does that mean
you're actually causing the effects that you are observing? If so,
what is it about picking up the rock that's making
it rain? Now, even in this absurd case that I'm making,
you could argue that if there's never an instance in

(21:14):
which picking up the rock wasn't immediately followed by rain,
there's a lot of evidence to suggest the two are linked,
but you still can't explain why they are linked, why
does one cause the other? And that's a problem because
without that piece, you're never really totally sure that you're
on the right track. That's kind of where we are

(21:35):
with consciousness. We've got a lot of ideas about what
makes it happen, but those ideas are mostly missing key
pieces that explain why it's happening. Now, it's possible that
we cannot reduce consciousness any further than we already have,
and maybe that means we never really get a handle
on what makes it happen. It's also possible that we

(21:56):
could facilitate the emergence of consciousness in machines without knowing
how we did it. Essentially, that would be like stumbling
upon the phenomenon by luck. We just happened to create
the conditions necessary to allow some form of artificial consciousness
to emerge. Now, I think this might be possible, but
it strikes me as a long shot. I think of

(22:18):
it like being locked in a dark warehouse filled with
every mechanical part you can imagine, and you start trying
to put things together in complete darkness, and then the
lights come on and you see that you have created
a perfect replica of an F fifteen fighter jet. Is
that possible? Well, I mean, yeah, I guess, but it

(22:38):
seems overwhelmingly unlikely. But again, this is based off ignorance.
It's based off the fact that it hasn't happened yet,
so I could be totally wrong here. Now, on the
flip side of that, programmers, engineers, and scientists have created
computer systems that can process information in intricate ways to

(22:59):
come up with solutions to problems that seem, at least
at first glance, to be similar to how we humans think.
We even have names for systems that reflect biological systems,
like artificial neural networks. Now, the name might make it
sound like it's a robot brain, but it's not quite that. Instead,
it's a model for computing in which components in the

(23:20):
system act kind of like neurons. They're interconnected and each
one does a specific process. The nodes in the computer
system connect to other nodes. So you feed the system
input whatever it is you want to process, and then
the nodes that accept that input perform some form of
operation on it and then send that resulting data, the

(23:45):
answer after they've processed this information, on to other nodes in
the network. It's a nonlinear approach to computing, and
by adjusting the processes each node performs, which is also
known as adjusting the weights of the nodes, you can
tweak the outcomes. Now, this is incredibly useful. If you
already know the outcome you want, you can tweak the

(24:08):
system so that it learns or is trained to recognize
something specific. For example, you could train a computer system
to recognize faces, so you would feed it images. Some
of the images would have faces in them, some would
not have faces in them. Some might have something that
could be a face, but it's hard to tell. Maybe
it's a shape in a picture that looks kind of

(24:30):
like a face, but it's not actually someone's face. Anyway.
You train the computer model to try and separate the
faces from the non faces, and it might take many
iterations to get the model trained up using your starting
data, your training data. Now, once you do have your
computer model trained up, you've tweaked all the nodes so
that it is reliably producing results that say, yes, this

(24:54):
is a face or no, this isn't. You can now
feed that same computer model brand new images that it
has never seen before, and it can perform the same functions.
You have taught the computer model how to do something.
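
To ground that in something runnable, here's a toy sketch in Python of the training loop just described. None of this is any production face detector: the "images" are random sixteen-value arrays, and the stand-in for "contains a face" is simply whether the top half is brighter than the bottom, so the tiny two-layer network has a real pattern to learn.

import numpy as np

rng = np.random.default_rng(0)

# 20 toy "images" of 16 pixels; label is 1 when the first half is brighter.
X = rng.random((20, 16))
y = (X[:, :8].mean(axis=1) > X[:, 8:].mean(axis=1)).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0.0, 0.5, (16, 8))   # weights: input -> 8 hidden nodes
b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1))    # weights: hidden -> output node
b2 = np.zeros(1)

for _ in range(2000):                # "training" = repeatedly adjusting weights
    hidden = sigmoid(X @ W1 + b1)    # each hidden node does its small operation
    output = sigmoid(hidden @ W2 + b2)
    grad_out = (output - y) / len(X)                       # cross-entropy gradient
    grad_hidden = grad_out @ W2.T * hidden * (1.0 - hidden)
    W2 -= hidden.T @ grad_out
    b2 -= grad_out.sum(axis=0)
    W1 -= X.T @ grad_hidden
    b1 -= grad_hidden.sum(axis=0)

# Brand new "images" the model has never seen: same weights, same function.
X_new = rng.random((5, 16))
print(sigmoid(sigmoid(X_new @ W1 + b1) @ W2 + b2).round(2))

Notice that the learning here is nothing but nudging weights until the outputs match the labels.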
But this isn't like spontaneous intelligence, and it's not connected
to consciousness. You couldn't really call it thinking so much

(25:16):
as just being trained to recognize specific patterns pretty well. Now,
that's just one example of putting an artificial neural network
to use. There are lots of others, and there are
also systems like IBM's Watson, which also appears at casual
glance to think. This was helped in no small part

(25:37):
by the very public display of Watson competing on special
episodes of Jeopardy, in which it went up against human
opponents who were former Jeopardy champions themselves. Watson famously couldn't
call upon the Internet to search for answers. All the
data the computer could access was self contained in its
undeniably voluminous storage, and the computer had to parse what

(26:01):
the clues in Jeopardy were actually looking for, then come
up with an appropriate response. And to make matters more tricky,
the computer wasn't returning a guaranteed right answer. The computer
had to come to a judgment on how confident it
was that the answer it had arrived at was the
correct one. If the confidence met a certain threshold, then

(26:21):
Watson would submit an answer. If it did not meet
that threshold, Watson would remain silent.
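
That final gate is easy to picture in code. The sketch below is hypothetical, not IBM's actual implementation; assume some upstream pipeline has already scored a set of candidate answers:

def respond(candidate_answers, threshold=0.7):
    # candidate_answers: (answer, confidence) pairs from earlier
    # evidence-scoring stages; take the best one and gate on confidence.
    best_answer, confidence = max(candidate_answers, key=lambda pair: pair[1])
    if confidence >= threshold:
        return best_answer   # buzz in
    return None              # stay silent rather than risk a wrong guess

print(respond([("Who is Alan Turing?", 0.93),
               ("Who is Charles Babbage?", 0.41)]))

It's a remarkable achievement,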
and it has lots of potential applications, many of which
are actually in action today, but it's still not quite
at the level of a machine thinking like a human,
and I don't think anyone at IBM would suggest that
it possesses any sense of consciousness. When we come back,

(26:46):
I'll talk about a famous thought experiment that really starts
to examine whether or not machines could ever attain intelligence
and consciousness. But first, let's take another quick break. And
now this brings me to a famous thought experiment proposed

(27:09):
by John Searle, a philosopher who questioned whether we could
say a machine, even one so proficient that it could deliver
reliable answers on demand, would ever truly be intelligent, at
least on a level similar to what we humans identify
as being intelligent. It's called the Chinese room argument, which

(27:30):
Searle included in his article titled Minds, Brains, and Programs
in the journal Behavioral and Brain Sciences. Here's the premise
of the thought experiment. Imagine that you are in a
simple room. The room has a table and a chair.
There's a ream of blank paper, there's a brush, there's

(27:52):
some ink, and there's also a large book within the
room that contains pairs of Chinese symbols.
Oh, and we also have to imagine that you don't
understand or recognize these Chinese symbols. They mean nothing to you.
There's also a door to the room, and the door
has a mail slot, and every now and again someone

(28:13):
slides a piece of paper through the slot. The piece
of paper has one of those Chinese symbols printed on it.
And it's your job to go through the book and
find the matching symbol in the book plus the corresponding
symbol in the pair, because remember I said there were
symbols that were paired together. You then take a blank
sheet of paper, You draw the corresponding symbol from that

(28:38):
pair onto the sheet of paper, and finally you slip
that piece of paper through the mail slot, presumably to
the person who gave you the first piece of paper
in the first place. So to an
outside observer, let's say it's actually the person who's slipping
the piece of paper to you, it would seem that
whomever is inside the door understands Chinese symbols. They can

(29:03):
recognize the significance of whatever symbol was sent
in through the mail slot, and then match it to
whatever the corresponding data is for that particular symbol, and
then return that to the user. So to the outside observer,
it appears as though whatever is inside the room comprehends
what it is doing. But, argues Searle, that's only an

(29:27):
illusion because the person inside the room doesn't know what
any of those symbols actually means. So if this is you,
you have no context. You don't know what any individual
symbol stands for, nor do you understand why any symbol
would be paired with any other symbol. You don't know
the reasoning behind that. All you have is a book

(29:50):
of rules. But the rules only state what your response
should be given a specific input. The rules don't tell
you why, either on a granular level of what the
symbols actually mean, or on a larger scale when it
comes to what you're actually accomplishing in this endeavor. All
you are doing is performing a physical action over and

(30:10):
over based on a set of rules you don't understand.
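
The whole setup fits in a few lines of Python, which is rather the point. The symbol pairs below are arbitrary placeholders standing in for the book's pairings; nothing in the code knows, or needs to know, what any symbol means:

# The book of rules: incoming symbol -> the symbol to copy back out.
RULE_BOOK = {"甲": "乙", "丙": "丁"}   # arbitrary example pairings

def person_in_the_room(symbol):
    # Find the matching symbol and pass its partner back through the
    # mail slot. No meaning is consulted anywhere in this process.
    return RULE_BOOK[symbol]

print(person_in_the_room("甲"))   # -> 乙

To the person outside, the room answers correctly every time; inside, it's pure lookup.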
And Searle then uses this argument to say that essentially
we have to think the same way about machines. The
machines process information based on the input they receive and
the program that they are following. That's it. They don't
have awareness or understanding of what the information is. Searle

(30:33):
was taking aim at a particular concept in AI, often
dubbed strong AI or general AI. It's a sort of
general artificial intelligence. So it's something that we could or
would compare directly to human intelligence, even if it didn't
work the same way as our intelligence works. The argument
is that the capacity and the outcomes would be similar

(30:55):
enough for us to make the comparison. This is the
type of intelligence that we see in science fiction doomsday scenarios,
where the machines have rebelled against humans, or the machines
appear to misinterpret simple requests, or the machines come to
conclusions that, while logically sound, spell doom for us all.

(31:16):
The classic example of this, by the way, is appealing
to a super smart artificial intelligence and you say, could
you please bring about world peace because we're all sorts
of messed up, and the intelligence processes this and then
concludes that while there are at least two humans, there
can never be a guarantee for peace because there's always
the opportunity for disagreement and violence between two humans. And

(31:42):
so to achieve true peace, the computer then goes on
a killing spree to wipe out all of humanity. Now,
Searle is not necessarily saying that computers won't contribute to
a catastrophic outcome for humanity. Instead, he's saying they're not
actually thinking or processing information in a truly intelligent way.

(32:02):
They are arriving at outcomes through a series of processes
that might appear to be intelligent at first glance, but
when you break them down, they all reveal themselves to
be nothing more than a very complex series of mathematical processes.
You could even break it down further into binary and
say that ultimately, each apparent decision would just be a

(32:22):
particular sequence of switches that are in the on or
off position, and the status of each switch would be
determined by the input and the program you were running,
not some intelligent artificial creation that is reasoning through a problem. Essentially,
Searle's argument boils down to the difference between syntax and semantics.

(32:46):
Syntax would be the set of rules that you would
follow with those symbols. For example, in English, the letter
Q is nearly always followed by the letter U. The
few exceptions to this rule mostly involve romanizing words from
other languages in which the letter Q represents a sound

(33:07):
that's not natively present in English. So you could program
a machine to follow the basic rule that the symbol
Q should be followed by the symbol U, assuming you're
eliminating all those exceptions I just mentioned. But that doesn't
lead to a grasp of semantics, which is actual meaning.
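
As a throwaway illustration in Python (the rule and the examples are mine, not Searle's), a machine can enforce that syntax rule with zero grasp of meaning:

import re

def violates_q_rule(text):
    # Flag any Q that is not immediately followed by a U,
    # ignoring the loanword exceptions mentioned above.
    return re.search(r"[Qq](?![Uu])", text) is not None

print(violates_q_rule("queen"))   # False: the syntax rule is satisfied
print(violates_q_rule("qatar"))   # True, meaningful word or not

Moreover,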
Searle asserts that it's impossible to come to a grasp of

(33:30):
semantics merely through a mastery of syntax. You might know
those rules flawlessly, but Searle argues, you still wouldn't understand
why there are rules, or what the output of those
rules means, or even what the input means. There are
some general counter arguments that philosophers have made to Searle's
thought experiment, and according to the Stanford Encyclopedia of Philosophy,

(33:55):
which is a phenomenal resource, though it's also incredibly dense,
these counter arguments tend to fall into three groups. The
first group agrees with Searle that the person inside the
room clearly has no understanding of the Chinese symbols. But
the group counters the notion that this system as a

(34:15):
whole can't understand it. In fact, they say the opposite.
They say, yes, the person inside the room doesn't understand,
but you're looking at a specific component of a larger system.
And if we consider the system, or maybe a virtual
mind that exists due to the system, it does have
an understanding. This is sort of like saying a neuron

(34:38):
in the brain doesn't understand anything. It sends along signals
that collectively, and through mechanisms we don't fully understand, become
thoughts that we can become conscious of. So in this argument,
the person in the room is just a component of
an overall system, and the system possesses intelligence even if
the component does not. The second group argues that

(35:00):
if the computer system could either simulate the operation of
a brain, perhaps with billions of nodes approaching the complexity
of a human brain with billions of neurons, or if
the system were to inhabit a robotic body that could
have direct interaction with its environment, then the system could
manifest intelligence. The third group rejects Searle's arguments more thoroughly

(35:24):
on various grounds, ranging from Searle's
experiment being too narrow in scope to an argument about
what the word understand actually means. This is where things
get a bit more loosey goosey, And sometimes I feel
like arguments in this group amount to "oh yeah?" But again,
I'm pragmatic, so I tend to have a pretty strong

(35:45):
bias against these arguments, and I recognize that this means
I'm not giving them fair consideration because of those biases.
A few of these arguments take issue with Searle's assertion
that one cannot grasp semantics through an understanding of syntax. And
here's something that I find really interesting. Searle originally published
this argument way back in nineteen eighty. It's been nearly

(36:10):
forty years since he first proposed it, and to this
day there is no consensus on whether or not his
argument is sound. So why is that? Well, it's because,
as I've covered in this episode, the concepts of intelligence
and more to the point, consciousness are wibbly wobbly, though
not, as far as I can tell, timey wimey. When

(36:32):
we can't even nail down specific definitions for words like understand,
it becomes difficult to even tell when we're agreeing or
disagreeing on certain topics. It could be that while people
are in a debate and are using words in different ways,
it turns out they're actually in agreement with one another.
Such is the messiness that is intelligence. Further, we've not

(36:55):
yet observed anything in the machine world that seems, upon
closer examination, to reflect true intelligence and consciousness, at
least as the way we experience it. In fact, we
can't say that we've seen any artificial constructs that have
experienced anything, because, as far as we know, no such
device has any awareness of itself. Now, I'm not sure

(37:18):
if we'll ever create a machine that will have true
intelligence and consciousness, using the word true here to mean
human like. Now, I feel pretty confident that if it
is possible, we will get around to it eventually. It
might take way more resources than we currently estimate, or
maybe it will just require a different computational approach, maybe

(37:40):
it'll rely on bleeding edge technologies like quantum computing. I
figure if it's something we can do, we will do it.
It's just a question of time, really, and further, it's
hard for me to come to a conclusion other than
it will ultimately prove possible to make an intelligent, conscious construct.

(38:01):
Now I believe that because I believe our own intelligence
and our own consciousness are firmly rooted in our brains.
I don't think there's anything mystical involved. And while we
don't have a full picture of how it happens in
our brains, we at least know that it does happen,
and we know some of the questions to ask and

(38:22):
have some ideas on how to search for answers. It's
not a complete picture, and we still have a very
long way to go. But I think if it's
possible to build a full understanding of how our brains
work with regard to intelligence and consciousness, we'll get there too,
sooner or later, probably later. I suppose there's still the

(38:42):
chance that we could create an intelligent and or conscious
machine just by luck or accident. And while I intuitively
feel that this is unlikely, I have to admit that
intuition isn't really reliable in these matters. It feels to
me like it is the longest of long shots, but

(39:04):
that's entirely based on the fact that we haven't managed
to do it up to and including now. Maybe
the right sequence of events is right around the corner.
Just because it hasn't happened yet doesn't mean it can't
or won't happen at all. And it's good to remember
that machines don't need to be particularly intelligent or conscious

(39:26):
to be useful or potentially dangerous. We can see examples
of that playing out already with devices that have some
limited or weak AI. And by limited I mean it's
not general intelligence. I don't mean that the AI itself
is somehow unsophisticated or primitive, so it may not even matter.
If we never create devices that have true or human

(39:49):
like intelligence, we might be able to accomplish just as
much with something that does not have those capabilities. In
other words, this is a very complicated topic, one
that I think gets oversimplified in a lot of fiction
and also in a lot of speculative prognostications about the future.

(40:10):
I mean, you'll see a lot of videos about how
in the future AI is going to perform a more
intrinsic role, or maybe it'll be an existential threat to
humanity or whatever it may be. And I think a
lot of that is predicated upon a deep misunderstanding or
underestimation of how complicated cognitive neuroscience actually is and how

(40:33):
little we really understand when it comes to our own consciousness,
let alone how we would bring about such a thing
in a different device. I hope you enjoyed that rerun,
and as always, I also hope that you are all
well and I will talk to you again really soon.

(40:58):
TechStuff is an iHeartRadio production. For more podcasts
from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever
you listen to your favorite shows.
