Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Brought to you by Toyota. Let's go places. Welcome to
Forward Thinking. Hey there, and welcome to Forward Thinking, the
podcast that looks at the future and says Confucius has
a puzzling grace. I'm Jonathan Strickland, I'm Lauren, and I'm
(00:21):
Joe McCormick. So, um, we were talking like intelligent people
just now, right, like intelligent people. Sure, depends on how you define intelligence. Maybe, maybe not. We're not all geniuses
around here, but we have sort of what you might
think of as a human intelligence. We're smarter than like
(00:42):
pigs maybe, or like grasshoppers and stuff like that. I
flatter myself to be considered smarter than my toaster most days. Yeah,
so this actually got us into a discussion about being
smarter than inanimate objects, and so far we're doing quite well. Yeah.
I want y'all to imagine a scenario. Okay, imagine that
(01:02):
you have a real good friend, Artie. You're already stretching the imagination. Well, I know you don't have good friends, but we'll just pretend you have a really good relationship with this guy named Artie. And he's funny, intelligent, compassionate, um, not to mention the strongest man in the world. Uh. But you
(01:23):
have been extremely good friends with Artie for many, many years. Um.
And suddenly one day, while you and Artie are hanging out,
maybe having some coffee in your living room, a bunch
of people in lab coats bust into your living room
and they deactivate Alreadie with a remote control. Uh. And
the head scientist among this group, she flares her lab
(01:45):
coat and informs you that Artie is in fact an artificially intelligent robot. And to prove it to you, she opens up Artie's head and shows you that it is full of dense circuitry where a normal person's brain would be. Pop quiz: what's your reaction in this scenario? Are you (a) horrified that you were tricked into a fake relationship
(02:07):
with fake intelligence, or are you (b) amazed that a synthetic intelligence is capable of being a real friend? I'm gonna go with (c): I'm amazed they were able to burst into my living room, because there are no doors, so they must have come through the walls. So I got, like, a
wrecking ball comes in and it's got a bunch of
(02:29):
lab coat people clinging to it, so, like, scientist versions of Miley Cyrus. Maybe. I think that I am (d) upset by this entire situation and that someone has just deactivated my friend. I think that I'm concerned about Artie personally, which probably puts me into the (b) camp, if you're really being technical. Okay. But so yeah,
(02:51):
I, it's hard to know how I would feel, but I would like to say I think I'd go with (b), that I wouldn't just be horrified and say, like, all of those years were just a... Yeah. Now, I've had enough of those in my life from people who, as far as I know, are not robots. So I think I would mostly be amazed that someone had achieved
(03:13):
the technical ability to simulate intelligence to the extent where
I could not tell the difference between what is simulated
intelligence and real intelligence. Okay. And so this ties into
a concept we've talked about on the show before, which
is the Turing test. What what is the Turing tests? Essentially,
the basic version of the test is that you have
(03:34):
a person posing questions over a terminal, so a computer terminal.
They cannot see whomever or whatever is answering those questions.
Those questions are being answered either by a person in
another room at another terminal, who's typing those answers in
or by a computer program that is responding automatically to
those questions. If the person asking the questions is unable
(03:57):
to determine with any real level of certainty, thank you, words that don't come to mind, certainty, then that machine is said to pass the Turing test, if in fact it was the machine that was answering those questions.
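To make that setup concrete, here's a minimal sketch of the terminal version in Python. It's purely illustrative, not anything from the episode: the canned-reply bot, the coin flip, and the single judging round are placeholder assumptions standing in for a real candidate program and a real interrogation.

```python
import random

def bot_reply(question: str) -> str:
    # A stand-in "program" responder; a real candidate AI would go here.
    canned = {
        "hello": "Hi there! How are you today?",
        "are you a computer?": "That's a funny question. Are you?",
    }
    return canned.get(question.lower().strip(), "Interesting. Tell me more.")

def run_turing_test(num_questions: int = 3) -> None:
    # The judge types questions at a terminal and cannot see who answers.
    # Behind the scenes, a coin flip decides: hidden human, or the bot.
    respondent_is_machine = random.random() < 0.5
    for _ in range(num_questions):
        question = input("Judge, ask a question: ")
        if respondent_is_machine:
            answer = bot_reply(question)
        else:
            answer = input("(Hidden human, type an answer): ")
        print("Answer:", answer)
    guess = input("Judge, was that a machine? (y/n): ").strip().lower() == "y"
    fooled = respondent_is_machine and not guess
    print("The machine passed!" if fooled else "No pass this round.")

if __name__ == "__main__":
    run_turing_test()
```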
Okay. And so that's the basic Turing test, say, the back-and-forth, text-based chatbot. But you could sort of
(04:18):
look at the Turing test in a larger view and say,
it's the bigger question of can you simulate human intelligence
in a way that people can't tell the difference? And
in fact, Turing went so far as to say that if this machine was able to simulate human intelligence to that level, we should, in fact, say that it is intelligent, in the sense that we
(04:39):
should grant it the same kind of consideration that we do. Right,
because we cannot inhabit another person's being, we cannot be certain that that person actually possesses intelligence. We assume, yeah, we assume that they have the same things that we have,
so we should extend the same courtesy to machines if
(05:01):
they are capable of doing such a thing. Right, So
this is what's known as sort of strong AI. Actually
this is really known as weak AI because it's simulating intelligence,
not necessarily possessing intelligence. Well, that's what the question about strong AI is. The question is: is, um, something that displays convincingly intelligent behavior actually intelligent? And here's where we
(05:24):
come back to Artie. Okay, So Artie, your friend convinced
you for years that he was a real intelligence. He
seemed intelligent, he was, you know, funny. He was a hundred percent indistinguishable from every other human you knew. And when
he was activated, he would insist that he was in
fact conscious, understanding language, would insist on all the other
(05:46):
things that we would insist on as part of our
sort of mental abilities. Was Artie actually intelligent, and did Artie understand the relationship, the language, all of these things that happen between humans? Or was he merely running programs that made it seem as though he was? Right. So,
(06:07):
in other words, this is the difference between weak AI and strong AI. Weak AI would be running programs where you would be taking in various kinds of input and producing output that, to an independent observer, would seem to be this kind of natural processing of information and reaction in the way a human being would react. Weak AI would be the position that
(06:28):
even though you could simulate it, it wasn't really intelligence, right,
and strong AI would be that it actually has a
level of intelligence that's comparable to human intelligence. It might not be identical, but it would truly understand. It would know what it knew, and not just be able to process information based on some complex series of rules. Right.
(06:50):
So can a robot or a computer program, any kind
of machine that's based on symbolic programming and computation, actually know or understand anything? And this is a question that
is debated quite heavily in artificial intelligence circles and cognitive science. Yeah,
what we want to focus on today is one particular
(07:10):
thought experiment that has tried to answer this question. And
I would say that this is probably the most hotly
debated question in AI and cognitive science in the past
few decades, well, at least in recent decades, certainly. Yeah,
we're talking about John Searle's Chinese Room thought experiment, which
(07:31):
is a really, uh, interesting approach. And there are also some interesting counterarguments to the Chinese room thought experiment
that we'll have to kind of talk about. Yeah. At one point, computer scientist Pat Hayes defined cognitive science as a field as the ongoing research project of refuting Searle's argument. Um. Yeah. So obviously this is a
(07:54):
huge issue, hotly debated by people a lot smarter than
we are and who have thought about it a lot more.
So we're not going to settle it here today. We're
just going to lay it out. We're just going to try to explain it and offer a few of
the prominent thoughts in this field. Let me take a crack at explaining what this thought experiment is. Okay. Okay, well, first of all, let's just note where it came from.
(08:14):
It was from John Searle in his nineteen eighty paper Minds, Brains, and Programs, and that was published in Behavioral and Brain Sciences. He has updated it a couple of times since then, uh, and also responded to various criticisms leveled against his argument. But here is the basic thought experiment. Now, imagine that you, Joe, I'm going
(08:36):
to use you as an example. Joe, you don't happen to know Chinese, do you? No. Okay, well, then you're perfect. I wanted to use you as an example. It's too late. I'm the one giving the example, so you are the example. I also don't know Chinese, so either one of us would work. Okay, you can use me. Okay. So Joe does not know Chinese. Uh,
he is put into a room. That room has a desk,
(08:56):
it's got some paper, some pencils, erasers, it's got some filing cabinets, and it has a door with a slot in it. And the way that Joe interacts with
the outside world is that occasionally someone puts a piece
of paper through that slot that has a Chinese character
drawn on it. Now, Joe also has an enormous book
full of instructions on what to do every time a
(09:18):
piece of paper with a character comes in and he
looks at the character, goes through the book, finds the
character in the book, follows the instructions on what he's
supposed to do when that character is inserted, which usually involves him drawing a different Chinese character on a sheet of paper and then sliding that back through the slot. Now, for anyone who is outside of that room, say a Chinese-speaking person who writes down a specific
(09:42):
instruction on a sheet of paper, slides it through that slot, and then gets the result that Joe has written on the other sheet of paper, to them, it appears that whatever is inside that room understands Chinese. Right, from their perspective, if the instructions are good enough, yes. Right.
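The rule book is easy to caricature in code. Here's a minimal sketch, with a made-up three-entry phrasebook standing in for Searle's enormous instruction manual (translations in the comments are approximate); the point is that the lookup never consults any meanings.

```python
# A minimal, made-up "rule book": pure symbol-in, symbol-out lookup.
RULE_BOOK = {
    "你好吗": "我很好",        # roughly "How are you?" -> "I'm fine"
    "你会说中文吗": "会",      # "Do you speak Chinese?" -> "Yes"
    "汉堡是什么": "一种食物",  # "What is a hamburger?" -> "A kind of food"
}

def operator_in_the_room(slip: str) -> str:
    # Joe matches the incoming characters against the book and copies out
    # the prescribed characters. No semantics are consulted anywhere.
    return RULE_BOOK.get(slip, "请再说一遍")  # default: "please say that again"

print(operator_in_the_room("你好吗"))  # looks like fluent Chinese from outside
```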
So what we're sort of imagining is this is a room with a person in it that passes
(10:03):
the Turing test. Yes. Yeah, because the person on the outside has no idea what's on the inside of the room. All they know is that they have passed in instructions that were in Chinese. They've received output that's also in Chinese, and everything follows according to a set of predetermined rules, and the output is
precise and convincing enough that the person outside thinks that
(10:25):
whatever is inside is a true native Chinese speaker. So the question then, in this experiment, is: does this mean that Joe actually knows Chinese? And Searle, uh, well, Searle says no, you don't know Chinese. You don't understand Chinese. You're following instructions, but you don't understand the actual language. Actually, I might make one change there. I
(10:48):
think the obvious point is that I, in that thought experiment, don't actually know Chinese. The question is: does this translate to a computer in the same situation? Because this is supposed to be an analogy for the situation that a computer or Turing machine is in. Sure, it receives input, um, it has a set of instructions
(11:08):
that it follows in computer-readable language. It processes that input through those instructions, and then it gives output. But it doesn't have any actual understanding of what is going on at any given time. It knows to follow specific instructions because of a program, but it doesn't understand what's happening. It doesn't understand the question, more or
(11:30):
less less, much less the answer. Right. And so that was
Searle's point. He wasn't against weak AI. I mean, he obviously believed that maybe a computer could simulate human intelligence so well that we couldn't tell the difference. He was just pointing out that, um, it seems that, based on simple programmable instructions and symbolic computation, a
(11:54):
computer could never really understand what it was dealing with. Yeah,
so understanding is the crux of this argument. And also
it's problematic, because often people who level criticisms against it say that we need to define what "understand" means, and then it gets a little metaphysical. But we're not going
(12:15):
to go too far into that, because obviously that can take hours of discussion. Well, but I mean, part of this problem is the semantics of the terminology that we use for this kind of thing, which is a little bit funny, because syntax versus semantics is considered, um, kind of the key point of this argument. Right. What the computer clearly does understand is syntax; it's got instructions and data to deal with.
(12:38):
What's up for debate is whether it gets semantics, whether it understands the meaning of the symbols. So, for example, we've talked about the semantic web, this idea of some way of interacting with the World Wide Web, I love to use that old term because I was around when it started, but a way to interact with the web where it quote-unquote understands what you want and
(13:01):
it's pulling exactly the stuff that you want. But when
we talk about actually making the semantic web come to pass,
it's all about things like metadata, and including all
of this information about information so that a computer can
make sense of it, because a computer does not natively
possess this ability to understand semantics. And even in that case,
(13:21):
we would call it the semantic web, but it would
still be based on programming, yes, and still be very much based on syntax.
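Here's a tiny sketch of that metadata point in Python. The page structure, schema, and keys are all invented for illustration, not any real semantic-web standard; the takeaway is that the "semantics" live in hand-attached tags that the program still handles as plain syntax.

```python
# The machine-usable "meaning" is metadata that a human attached by hand.
page = {
    "text": "Reserve a table at Chez Nous for Friday at 7pm.",
    "metadata": {
        "type": "RestaurantReservation",  # invented tag, not a real schema
        "venue": "Chez Nous",
        "day": "Friday",
        "time": "19:00",
    },
}

def find_pages(query_type: str, pages: list) -> list:
    # The machine "understands" a request only by string-matching a tag;
    # this is more lookup, not comprehension.
    return [p for p in pages if p["metadata"]["type"] == query_type]

print(find_pages("RestaurantReservation", [page]))
```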
All right. All the examples that we can really come up with are very much based in our human world, because it's a little bit difficult for us to understand, um, without metaphor, exactly what's going on inside of a computer. But it's sort of talking about the difference between, like, memorizing versus understanding an
(13:41):
equation, or being tested on memorization versus comprehension. Yeah. Okay,
so well, I want to offer a few of the
responses I've read about. None of these, of course, are
original to us. This is something that's been debated over
and over again. So these are all things that have
been discussed in the philosophy world, and in AI and
(14:02):
computer science before. Um. But I think one of the
most important things to start with is sort of the epistemological question, just going straight back to the Turing-style skepticism, which is that, um, if you can't tell the difference, you can't tell the difference. The idea is, if someone appears to you to have understanding, how could you
(14:25):
say that they don't have understanding? Um. And I think
that's a really good point to start with. But there
is also a good response to it, in my mind,
which is that, um, it still leaves the question of
what is the basic thing we're talking about? So we
might not be able to know if other people have understanding,
but just saying that sentence still acknowledges, well, there is
(14:47):
such a thing as understanding, and whether or not we can know that, the question of whether a programmed machine can in principle attain this state remains. So it's not deconstructing the argument; it's just raising more questions. Yeah,
and so one point raised by this objection might be that, well,
what if we can never solve this question? But that's
(15:09):
not the same as saying that the question is meaningless, basically. Sure. Another criticism leveled uses what Searle has identified as the systems approach. So the systems argument is
about how, if you look at the person in the room, and you know, the person in the
(15:30):
room is supposed to represent a computer, then clearly you would argue, well, Joe didn't really learn Chinese, the computer being Joe. Joe clearly did not learn it. He was following specific instructions that were in a giant book, and those were not part of Joe's knowledge. All he was doing was taking an input and then applying this set of rules and
(15:50):
then getting output. But the systems approach says, well, you're looking at this wrong. You're looking at Joe as being the representation of the computer, whereas really it would be the entire system that you have to take into consideration, rather than just Joe. Joe would be like the processor, the CPU, and the claim is that the system as a whole, the room, which includes these instructions, understands Chinese,
(16:13):
and therefore that intelligence has happened. So when you
look at the whole thing together, the room, the English
instruction manual, me, the input and output paper slips that are coming through the door, the filing system, which represents memory. Right. All of that together, you could say, in a way, understands Chinese. Right. And it may not be directly analogous
(16:34):
to the way a person understands Chinese, but that really
doesn't matter in this argument so much. We're talking about
strong AI versus weak AI, not whether or not the
intelligence is exactly like human intelligence. Yeah, and so this
is probably, I think, the most popular response to the Chinese room argument. There's some others we're gonna talk
(16:54):
about in a minute, but I think the dominant one is this systems-thinking approach. And this also goes with, um, a lot of the ways people have come up with to think about how intelligence comes out of a material mind, sorry, a material brain. Um. So you know, it's like the brain is not just one discrete
(17:14):
agent doing one kind of thing, but it's this confabulation of tons of different types of events that, in an emergent way, combine to produce what we think of as consciousness. Right.
It's also related to another response called the virtual mind reply,
which specifies that the AI is a machine that has
(17:35):
created a mind that understands Chinese, not that the AI itself understands Chinese. Which is getting back into semantics. Yeah, it's also getting a little metaphysical, to the point where Searle's response is, pfft. I think that's how you would sum it up. But essentially he says, like, you don't need to bring metaphysics into it; that's irrelevant. Well, if you want to get really
(17:57):
hard-nosed and practical, there are a couple other ways
we could go with this. One of them is simply
the criticism that the thought experiment is too impractical to
be useful, or it's too unlike the real world. So
think about it like this. Um, it asks us to
imagine that the person in the room would be able
to use this English instruction manual, which is the equivalent
(18:19):
to a computer program, fast enough to generate convincing responses.
But in reality, it might take a single person in
a room like billions of years to go through all
of the instructions in that instruction manual and turn back
an answer that was satisfactory. A computer could do this
(18:39):
in less than a second, probably. Thus there could be a fundamental difference between the kind of slow intelligence that the thought experiment would display and an actually convincing Turing AI, um, and thus the thought experiment can't actually comment on what happens with a Turing AI. See,
I have a problem with this just from a gut level. Obviously,
(19:00):
I'm not a philosopher, nor am I a scientist. However, I think about people who really take their time in answering a question, and I think of computers I've used that also really take their time in executing a program. And I'm just
thinking that when it comes down to the time issue,
I almost see that as irrelevant to me, like that
(19:20):
doesn't really matter so much to me. But then a
part of it is also that I'm used to talking about stuff that's taking place over the course of billions of years. So whether it takes, you know,
a month to respond to a request or it takes
a second, I don't really necessarily see that as an
(19:40):
indicator, or, you know, something that denies the presence
of intelligence. Yeah, you could certainly say that, you know, basically, no matter how long it takes, the principles behind the question in the thought experiment remain the same. Um, that's just sort of an irrelevant variable that we can exclude for the sake of argument. And the question is: is it reasonable to exclude it
(20:03):
for the sake of argument or not? Right, because if time is actually a factor, if speed is a factor in determining what is intelligent, then clearly you can't just dismiss it. Right. Like, if for some reason we discover that the way that we exhibit intelligence is dependent partly on the speed at which our neural networks work. But if
(20:26):
they did not work at that speed, we would not be intelligent. That's what I'm saying. Like, if that were the case, then we would, you know, well, we wouldn't be having this argument, because we wouldn't be able to.
But oh, the way you just phrased it made this objection suddenly seem more reasonable to me, actually. Like, what if something did all the same things a human brain does, except did them at one one-billionth the speed?
(20:49):
Would that have consciousness in the same way that we do? Well, I don't know that we would ever argue that it has consciousness the same way. Well, not exactly,
but you know what I mean, has this intelligence. And part of this is also part of a separate but related argument that talks about exactly how egocentric we're being in our definition of intelligence and understanding, because that's
(21:10):
a huge part of it. I mean, you know, like, do we define intelligence based on neural connections? Because there are things out there on our planet Earth that display relatively intelligent behavior that don't have those. Yeah,
I just think of them as incredibly lucky. They're just on a streak of making their
(21:31):
saving throw against stupidity. That's all that's happening. Uh. Yeah.
Let me offer one more objection that's sort of more on the practical edge, I think, and that's sort of the lack of diverse sensory grounding. So, if a system is only receiving input by pieces of paper being shoved through a slot, that's extremely limiting. Yeah. Um, and
(21:52):
so, okay. This is also known as the robot response to the argument. Yeah. So, uh, this argument basically says that what prevents the person in the room from understanding Chinese is the lack of sensory grounding in the outside world through a diversity of cues, like visual, auditory, all that. So,
perhaps a computer designed only to manipulate symbols based on
(22:15):
instructions wouldn't understand language, or wouldn't understand those symbols. But if granted diverse sensory input, like video cameras, haptic input, basically, if you made that computer into a robot that could sample the world in all of the ways that we can, the system actually would be able to understand what those symbols mean. It's an interesting argument. Uh. Searle, of course,
(22:38):
has his own counter, which says that you could have a string of numbers that, to a person in that room, seem to be meaningless, but are actually the readouts from a video camera that's set up pointing outside of the room, and it wouldn't magically make things more meaningful to the person inside the room, and
(22:59):
that it would just create more work for him or her.
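Searle's counter is easy to sketch too: sensor data arrives in the room as yet more uninterpreted symbols. The byte values below are invented, standing in for one tiny camera frame.

```python
# Searle's counter to the robot reply, sketched: a "camera frame" of
# invented raw pixel values fed into the room as just more symbols.
frame = bytes([12, 200, 34, 56, 190, 88])

# To the operator, this is simply a longer string of squiggles to match
# against an even bigger rule book; the numbers carry no meaning to him.
as_symbols = " ".join(format(b, "08b") for b in frame)
print(as_symbols)  # e.g. 00001100 11001000 ...
```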
Right. You're just, yeah, that's exactly what Searle says. Like, all you've done is just made more work. You haven't increased understanding or intelligence; you've just increased the workload.
So that's an interesting counterargument. Although related to that is the idea that the man in the room, and this is kind of where it becomes a poor example,
(23:20):
would eventually learn Chinese due to his interactions with the Chinese language. Oh, so you're saying, like, that somehow, I mean, it seems unlikely, but the person, even through just interacting based on these instructions, would somehow gain an understanding, given long enough and given enough input. Or,
you know, maybe through enough questions about, for example, hamburgers,
(23:43):
the guy would know what a hamburger is and would
eventually connect the two things in his brain. See, I
have a real problem with that. I could I could
see where Joe in his room starts to receive, over
a great span of time, enough of one particular character
that he's remembered how the resulting character already needs to
(24:05):
be formed without having to look it up. He's just done it enough times. But that doesn't mean he
knows what the input is or what the output means.
He just knows, Oh, I've done this five hundred times.
I know how to draw this without even having to
look it up. It's a good question, though, actually. I mean, this sort of just intuitively assumes the part you can't question, which is that you could never
(24:26):
learn anything just from the symbols alone. But I don't know.
I mean, what if you, I mean, imagine something like this. So you receive sequences of symbols, and then, over like hundreds of years of doing this in the room, you start to notice that certain sequences of symbols mirror
(24:49):
the syntax of sentences you would recognize, like you start to maybe understand parts of speech or something. Well, and then maybe from understanding parts of speech, you could start to understand more complex relationships between meanings.
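Here's a small illustrative sketch of that idea: tallying which symbols tend to follow which surfaces syntax-like regularities without ever supplying a single meaning. The sample slips are arbitrary inventions.

```python
from collections import Counter

# Could Joe learn from the symbols alone? Counting symbol-to-symbol
# transitions reveals distributional structure, but no meanings.
slips = ["你好吗", "你会说中文吗", "他好吗"]  # arbitrary sample messages

bigrams = Counter()
for slip in slips:
    for a, b in zip(slip, slip[1:]):
        bigrams[(a, b)] += 1

# After enough slips, Joe might notice regularities like "吗 tends to end
# a message": a grammar-ish fact learned purely from form, not meaning.
print(bigrams.most_common(3))
```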
All right. I think that part of the thought experiment assumes that Joe, sitting in the room, is not thinking at all about what he's doing, that he's merely following the instructions. Yeah, it just seems to be sort of
(25:12):
like a condition of this thought experiment that you don't learn. Right, but, you know, people do learn. And, you know, people like Kurzweil talk a lot about machines also being naturally emergent, um, forces just like anything else in nature. I'm not going to dwell on Kurzweil too much, but I will say I
(25:33):
will say that there have been some incredible developments in machines learning, or being able to derive information based upon, uh, input, like sensory input. So we were talking just
now about the robot objection, right, the idea that by
adding the sensory input, the system might be able to
piece together more things and thus learn and have true intelligence.
(25:56):
So you guys know about the computer that the Cornell
people built that was able to derive Newton's laws of
physics by examining the movements of a pendulum? Right. I had not heard about this. So they set a pendulum in motion, they had a computer that was set up to measure what was going on with the pendulum, and
(26:17):
the computer, within about a day's time, derived Newton's basic laws of physics, because it observed how the pendulum moved and then extrapolated from that what laws must guide the movements.
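For the curious, here's a heavily simplified toy sketch of the general idea behind that kind of work, often called symbolic regression: propose candidate laws, score them against observed data, and keep the best. The pendulum setup and both candidate formulas are assumptions for illustration; the real system searched over evolving populations of expressions.

```python
import math

g, L = 9.8, 1.0   # gravity (m/s^2) and pendulum length (m), assumed
dt = 0.01         # time step for the synthetic "observations"

def observe(theta0: float, steps: int = 200) -> list:
    # Generate synthetic pendulum-angle measurements via Euler integration.
    theta, omega, data = theta0, 0.0, []
    for _ in range(steps):
        omega -= (g / L) * math.sin(theta) * dt
        theta += omega * dt
        data.append(theta)
    return data

def sq_error(model, data: list, theta0: float) -> float:
    # Sum of squared differences between a candidate law and the data.
    return sum((model(theta0, i * dt) - d) ** 2 for i, d in enumerate(data))

observed = observe(0.2)
candidates = {
    "theta0 * cos(sqrt(g/L) * t)": lambda th, t: th * math.cos(math.sqrt(g / L) * t),
    "theta0 * (1 - t)":            lambda th, t: th * (1 - t),
}
best = min(candidates, key=lambda name: sq_error(candidates[name], observed, 0.2))
print("best-fitting candidate law:", best)  # the harmonic law should win
```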
So what's it done for us lately? Well, I just think
(26:39):
that's an interesting example of how a machine has been
able to seemingly learn something by observing. Now, again, I think Searle would argue that the computer doesn't really know that, but that the computer was able to draw, um, conclusions based upon input. Right. And that all kind
of ties back to what I was saying at the beginning.
(26:59):
about the basic metaphors that we use to describe computers, and how we're essentially misrepresenting computer syntax with our ideas of ones and zeros, because really it's all voltage and operations. It's not really reading. A computer isn't reading information. It's chains of state changes. Um.
(27:19):
And it's hard for us to even conceive of how little computers understand, because it seems like such a compelling... you know. Yeah, well, that reminds me of
one more interesting objection to the Chinese room argument, which is that, okay, so Searle isn't saying that no machine
(27:41):
could ever have intelligence. I mean, what Searle is talking about is programmable machines, right, that are based on symbolic computation. He says, you know, essentially our brains are machines. He's not saying machines can't have intelligence. Yeah, he's not saying that this is magic. He's just saying that, like, computers in the way we think of computers can't do this. Um. So there's another objection there, which is like, well,
(28:05):
maybe it's true that computers can't do this in the way we think of, like, programmed computers based on information processing, you know, ones and zeros, but maybe some other type of synthetic machine could. Yeah. And of course, when you put it that way, I mean, the possibilities are impossible for us to even conceive. We don't know what that
(28:25):
would look like. The only types of computers we can think of are based on these symbols. So I
was thinking, like, when I was thinking about the Turing test and the Chinese room, there was one computer in particular that kept popping to mind, something that appears to be intelligent based upon the way it behaves. And I was thinking of IBM's Watson. Obviously, you know, Watson being the computer that competed on Jeopardy, and now
(28:47):
doctors are trying to use it to help with medical diagnoses and things of
that nature. But if you watched any of those episodes of Jeopardy where Watson was one of the contestants and was going up against two former champions, it appeared that Watson was a fairly intelligent thing. I mean, it was
(29:07):
able to beat the champions at their own game. Spoiler alert for anyone who hasn't watched Jeopardy in the last few years. But you know, it was able to parse
sometimes pretty complicated clues, go through all of the information
that was stored on the computer because it didn't have
any connection to the internet during those games, and try
(29:27):
and determine what the best answer was, or the best question in the case of Jeopardy, to that particular clue. And that's, you know, that's pretty tricky stuff. It's one of those things where, you know, you would say even smart people have trouble sometimes on Jeopardy. We've seen it.
So, but again, Searle would say, yeah, the computer didn't necessarily understand what it was doing. It was able to
(29:50):
break this apart and look for clues using various search algorithms and probabilistic models to determine what was probably the right answer, but it didn't actually understand any of that content.
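As a flavor of that, here's a toy sketch of the generate-candidates, score-with-evidence, answer-over-threshold idea. This is emphatically not Watson's actual architecture; the scoring function, evidence table, and confidence threshold are all invented for illustration.

```python
# Toy sketch: rank candidate answers by invented "evidence" plus a crude
# keyword-overlap feature standing in for real NLP signals.
def score(candidate: str, clue: str, evidence: dict) -> float:
    overlap = len(set(clue.lower().split()) & set(candidate.lower().split()))
    return evidence.get(candidate, 0.0) + 0.1 * overlap

def best_response(clue: str, candidates: list, evidence: dict,
                  threshold: float = 0.5):
    top = max(candidates, key=lambda c: score(c, clue, evidence))
    if score(top, clue, evidence) < threshold:
        return None  # not confident enough to buzz in
    return f"Who is {top}?"  # Jeopardy convention: phrase it as a question

clue = "This founding father flew a kite in a lightning storm"
print(best_response(clue, ["Benjamin Franklin", "Thomas Edison"],
                    {"Benjamin Franklin": 0.9, "Thomas Edison": 0.2}))
```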
That would be Searle's argument. Right. Even though Jeopardy is not, you know, a direct question-and-answer kind of thing, a lot of it's based on
(30:12):
cultural understanding or puns. But that's the kind of thing
that Watson is really good at parsing because it just
has enough stuff in there. It's got a lot of natural-language-processing ability, and I mean, that is part of artificial intelligence. And again, you know, Searle was never saying that artificial intelligence is a fool's errand, but rather, you know, he was defining it as strong AI versus
(30:32):
weak AI, and expressing his skepticism that strong AI is something that we could attain with the current version of computing as we know it. Right. Okay. So now that
we've talked about it, yeah, where do you come down? Is Artie intelligent? Did Artie understand the friendship you had with him? No one understands the friendship they had with me. Um.
(30:53):
I mean, I think no, based on our basic definitions of intelligence. But I, again, would argue more on the side that our definitions are flawed, and that as we move forward into this incredible future, we maybe need to think really hard about what we're calling intelligent, and how we're, I mean, I'm just saying that I don't want, you know, AI,
(31:15):
artificial intelligence like the Steven Spielberg Stanley Kubrick flick, to come into reality. What a film. You know. So,
I intuitively feel like Artie would not actually be intelligent.
But I also, especially in areas like this where we
(31:35):
have so little understanding, I feel disinclined to trust my intuitions.
I think my intuitions very likely could be leading me astray. Well, you certainly, I mean, we have a bias, honestly, and that's undeniable. So I would say, particularly, the systems response to Searle makes a lot of sense to me.
(31:56):
I don't feel it, I don't feel like it's true intuitively, but thinking about it abstractly, I can absolutely see how that is a reasonable answer. Well, I mean, the example I could give to you on a biological level is that, you know, I can speak a little French, but if you take any single neuron, or even
(32:17):
any single synapse out of my brain, it can't speak French. You know. So, judging one element of a system and saying that that one element proves that the entire system can't do something seems to be shortsighted. Searle, of course, has his own responses to that. That's not the end of the argument, and I haven't just broken it. I don't consider him refuted, but I
(32:39):
think that objection is interesting. It is interesting, I agree, entirely interesting, but not, you know, not the end-all of this. Like we said, there are people way smarter than us who are continuously debating this and expanding our, uh, knowledge. And sometimes they're refining definitions, sometimes they are completely scrapping definitions and starting from scratch. So,
(33:03):
the best thing about all of this is that it's
going to inform future engineers who are working on artificial
intelligence problems. And it may very well be that either
we will never create any sort of entity that has
strong AI, or that we do but we have no
way of knowing that we did it. But no matter what,
it means that we're going to make improvements in that field.
(33:25):
We're going to have computers that are better able to
react to things like natural language commands, and that will
in turn benefit us. So, while this whole discussion might have seemed philosophical, like there was no real practical application, the reality is exactly the opposite. Well, and if there
is such a thing as strong AI, you better believe
(33:47):
this has practical application, and we certainly should be thinking
about it. I mean, just on the legal and moral basis, right? Just on the off chance that there is such a thing as the understanding and experience of being Artie, I think it's worth considering. Still not giving a toaster the right to vote, guys. Still not doing it. I'm not gonna do it. All right. So that wraps up this discussion of artificial intelligence and the Chinese room thought experiment. And, uh,
(34:11):
while I sound cold, I don't really subscribe to the things I say sometimes. Oh, I always vote for toasters, yay. They believe in caramelization, and that's wonderful. Okay, that's fair. All right. So, guys, if you have enjoyed this,
make sure you go to FWThinking.com. That's our website, where all of our stuff lives: blog posts, videos, podcasts, articles.
(34:32):
You should go check that out. It's great stuff. And if you want to interact with us, get on social networks, guys. Come on, we're there. You can find us on Twitter, Facebook, and Google Plus. We are FW Thinking, and we will talk to you again really soon. For more on
this topic and the future of technology, visit forward thinking
(34:53):
dot com, brought to you by Toyota. Let's go places.