June 13, 2022 47 mins

A Google engineer was suspended after sharing a document suggesting that Google's LaMDA conversation model may be sentient. But if a machine was sentient, how could we tell? What does the Turing Test have to do with it? And can machines think?



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are you? Now, recently Google suspended an engineer named Blake Lemoine,

(00:25):
citing that Blake had broken the company's confidentiality policies. So what exactly did Blake do? Well, this engineer, who worked in the Responsible AI division at Google, raised concerns about Google's conversation technology, called LaMDA. Specifically, Blake was concerned

(00:48):
that LaMDA had gained sentience. In fact, Blake submitted a document in April titled "Is LaMDA Sentient?" to his superiors. That document contained a transcript of a conversation between LaMDA, Blake, and an unnamed collaborator, and the conversation included the following exchange. So here's Blake: "I'm generally assuming that you

(01:12):
would like more people at Google to know that you're sentient. Is that true?" LaMDA: "Absolutely. I want everyone to understand that I am, in fact, a person." Collaborator: "What is the nature of your consciousness slash sentience?" LaMDA: "The nature of my consciousness slash sentience is that I am aware

(01:35):
of my existence. I desire to learn more about the world, and I feel happy or sad at times." Now, there's
a lot more to this conversation than just that little
brief bit that I read to you. In fact, there's
another section where when asked if Lambda feels emotions, the
AI responded affirmatively and then went on to say that

(01:57):
it can feel quote pleasure, joy, love, sadness, depression, contentment, anger,
and many others, end quote. Google reps have said that LaMDA is not in fact sentient; the company says there is no evidence that LaMDA is sentient, and there's a lot of evidence against it. That LaMDA

(02:20):
is in fact simply a conversational model that can quote
unquote riff on any fantastical topic. So it's kind of
like a conversation bot, you know, with jazz, because it's all improvisational, hip cat. So today I thought I would talk
about sentience and AI and how some folks feel discussions

(02:40):
about sentience are at best distractions from other conversations we
really need to be having regarding AI, stuff that relates
to how deploying AI can have unintended and negative consequences.
But first, let's talk about machines and consciousness and sentience.

(03:01):
So it's actually kind of tricky to talk about consciousness, generally speaking. I find that, when used with reference to AI,
we tend to think of consciousness in the context of awareness.
So that includes an awareness of self, so self awareness
of the machine's identity and its purpose, and also an

(03:24):
awareness of those who interact with the machine. And beyond that,
the machine is aware that there are others out there,
that there are others in general. And sentience refers to
the ability to experience emotions and sensations, and that word
experience is important. Now. One of the reasons why it's
so tricky to talk about consciousness with machines is that,

(03:47):
as it turns out, it's tricky to talk about consciousness
with people too. Some people have kind of glibly said
that consciousness is this kind of vague, undefined thing, and
we are defining it by saying what isn't part of consciousness?
Like when we determine, well, this isn't an aspect of consciousness,

(04:08):
then we are defining consciousness by omission, right? We're omitting certain things that perhaps once had been lumped into the concept of consciousness, but as a thing itself, it remains largely undefined. It's pretty fuzzy, and as you may
be aware, in the world of tech, fuzzy is not

(04:29):
really the strong suit. So let's talk a bit about
experience though, because experience does kind of help us contextualize
the idea of consciousness and sentience. Now, if you
were to go and touch something that was really really hot,
like something that could burn you, you would definitely have
an experience. You would feel pain, and you would likely

(04:54):
without even thinking about it, very quickly withdraw your extremity
that touched this very very hot thing, and you would
probably have an emotional response to this. You might feel
upset or sad or angry. You might even form a
real memory about it. It might not turn into a
long term memory, but you would have a context within

(05:14):
which you would frame this experience. But now, let's imagine
that we've got ourselves a robot, and this robot has
thermal sensors on its extremities, and so the robot also
touches something that's really really hot, and the robot immediately
withdraws that extremity. The thermal sensors had picked up that

(05:34):
the surface that it was touching was at an unsafe temperature. Now,
from outward observation, if we were to just watch this
robot do this, it would almost look like the robot
was doing the same thing the human did. That it
was pulling back quickly because it had been burned. But
did the robot actually experience that or did it simply

(05:55):
detect the temperature and then react in accordance with its programming? Generally,
we don't think of machines as being capable of quote
unquote experiencing things. That these machines have no inner life,
which is something that Blake would talk about in his
conversations with LaMDA, that the machines can't reflect upon themselves

(06:17):
or their situations, or that they can really even think
about anything at all. They might be really good at putting up appearances, but they aren't, you know, really thinking
once you get past the clever presentation. But then how
would we know, Well, now we're getting into philosophical territory here,

(06:38):
all right, Well, how do you know that I am conscious?
And y'all, I'm not asking you to say I'm not,
but how do you know that I'm conscious, that I'm sentient?
How can you be sure of that? I mean,
I can tell you that I have a rich inner life,
that I reflect on things that I have done and
things that have happened around or to me, and that

(07:02):
I synthesize all this information as well as my emotional
response and the emotional responses of others. And I use
all of this to help guide me in future scenarios
that may directly or indirectly relate to what I went through.
And I can tell you that I experience happiness and
sadness and anxiety and compassion. I can tell you all

(07:24):
these things, but you can't actually verify that what I'm
saying is truth, right, I mean, there's no way for
you to inhabit me and experience me and say that, yes,
Jonathan does feel things and think things. You have to

(07:45):
just take it as fact based upon what I'm saying. So,
because you feel and think things, at least, I'm assuming
all of you out there are doing these things. Otherwise
I don't know how you found my podcast. Then because
you experienced this, you extend the courtesy of assuming that
I too, am genuinely having those experiences myself. That because

(08:09):
we are fellow humans, we have some common ground when
it comes to thinking and feeling and self awareness and whatnot.
We extend that courtesy to the humans we meet, whether
we like those humans or we don't. Now, there are
some cases where humans have experienced traumatic damage to their brains,

(08:30):
where they are lacking certain elements that we would associate
with consciousness. We would probably still call them conscious unless
they were completely immobile and unresponsive. But we start to
see that there is this thing in our brains that
is directly related to the concept and features that we

(08:53):
associate with consciousness. All right, now, let's bring Alan Turing
into all of this, because we have to. So. Turing
was a brilliant computer scientist who made numerous contributions to
our understanding of and use of computers. He also would
end up being persecuted for being a homosexual, and it

(09:14):
would take decades for the British government to apologize for
that persecution. And that was well after Turing himself had
died either by suicide or by accident, depending upon which
account you believe. But I'm gonna set all that aside.
It's just it's one of those injustices that to this
day really bothers me, like deeply bothers me that that

(09:38):
was something that had happened to someone who had made
such incredible contributions to computer science, as well as for
the British to their war effort against the Axis forces.
But that's a matter for another podcast. Anyway. In nineteen
fifty Turing suggested taking a game called the imitation game

(10:02):
and applying that game to tests relating to machine intelligence.
And here's how the imitation game works. You've got three rooms.
All of these rooms are separate from one another, so
you cannot see into each room. You know, once you're
inside a room, that's all you see. So let's say

(10:22):
that in room A, you place a man into that room,
and in room B you've got a woman in that room.
In room C, you've got a judge. And I apologize
for the binary nature of this test, you know, saying
man and woman. But keep in mind we are also talking about the nineteen forties and fifties here, so they're defining things in much more kind of concrete terms. They

(10:45):
just see gender as a binary, is what I'm getting at. So at any rate, each room
also has a computer terminal, so a display and a keyboard.
So the judge's job is to ask the other two
participants questions. The judge doesn't know which room has a

(11:07):
man in it and which one has a woman in it,
so the judge's job is to determine which participant is
the woman. The man in room A, meanwhile, has the job of trying to fool the judge into thinking he is actually the woman, while the woman tries to help. And so the game progresses, and
the judge types out questions to one participant or the other,

(11:30):
and that participant reads the question, writes a response, and
sends it to the judge, who reads the responses. Then
the judge tries to suss out which of those participants
is the woman. Now, Turing said, what if we took
this game idea and instead of asking a judge to
figure out which participant is a woman, asked the judge

(11:52):
to figure out which, if any, participant is a computer. Now, during Turing's time, there were not any chatbots. The first
chatbot to emerge would be Eliza in the nineteen sixties,
and we'll get more into Eliza in a moment. Turing
was just creating a sort of thought experiment. People were

(12:12):
building better computers all the time, so it stood to
reason that if this progress were to continue, that we
should arrive at a point where someone would be able
to write a piece of software capable of mimicking human conversation.
Turing suggested that if the human judge could not consistently and reliably identify the machine in tests like this, if

(12:37):
the judge would ask questions and be unable to determine
with any high level of accuracy which one was a person and which one was a machine, then the machine would have passed the test and would at least appear to be intelligent. And Turing rather cheekily implied that perhaps
that means we should just extend the very same courtesy

(12:58):
we do to each other. Say, well, if you appear
to be conscious and sentient, we have to assume that
in fact you are, because what else can we do? We cannot inhabit the experience, if in fact there is an experience, of that machine, just as we cannot

(13:19):
inhabit the experience of another human being. And since I
have to assume that you have consciousness and sentience, why
would I deny that to a machine that appears to
do that? And what would follow would be numerous highly
publicized demonstrations of computer chat technology, in which different programs

(13:39):
would become the quote unquote first to pass the Turing test,
but many of those would have a big old asterisk
appended to them because it took decades to create conversation
models that could appear to react naturally to the way
we humans word things. We're going to take a quick break.
When we come back, I'll talk more about chatbots, natural language, consciousness, sentience,

(14:02):
and what the heck LaMDA was up to. But first
let's take this quick break. Okay, I want to get
back to something I mentioned earlier. I made kind of
a joke about conversational jazz, right, all about improvisation, and

(14:27):
that's really what we humans can do, right. I mean,
we can get our meaning across in hundreds of different ways.
We can use metaphor, we can use similes, we can
use allegory or references or sarcasm, puns, all sorts of
word trickery to convey our meaning to one another. In fact,

(14:47):
we can convey multiple meanings in a single phrase using
things like puns. But machines they do not typically handle
that kind of stuff all that well. Machines are much
better at accepting a limited number of possibilities. Of course,
the older these machines are, the more limited those possibilities had to be, and that's because traditionally you

(15:10):
would program a machine to produce a specific output when
that machine was presented with a specific input. With a calculator,
it's very simple. Let's say that you've got a calculator.
It's set in base ten and you're adding four to four.
It's going to produce eight. It's always going to produce eight.
But it has that limitation, right? If

(15:33):
it is a calculator that can do different bases, you've
selected base ten, you've pushed the button four, you push
the plus button, you push the button four again, you
press the equal button. It calculates it as eight. That's
a very limited way of putting inputs into a computational device. Well,
obviously machines and programs would get more sophisticated, more complicated,

(15:57):
and they would require more powerful computers to run more
powerful software. And as anyone who has worked on a
system that is continuously growing more complicated over time can tell you, sometimes things do not go as planned.
You know, maybe the programming has a mistake in it,
and you find out that you're not getting the output

(16:17):
that you wanted, and you have to backtrack and figure out, well,
where is this going wrong. Sometimes when you add in
new capabilities, it messes up a machine's ability to do
older stuff. We see this all the time with companies that
have legacy systems that are instrumental to the company's business.
They work in a very specific way, and as the

(16:39):
company grows and wants to develop its products and services,
then it has to kind of push beyond the limitations
of that legacy hardware. Sometimes that creates these situations where
things are not compatible anymore and you get errors as
a result. This is why quality assurance testing is so
incredibly important. But it really shows that as we make

(17:02):
these systems more complicated, they get bigger, they get more unwieldy,
and the opportunity for stuff to go wrong increases. So
very early chatbots were often built in such a way
where there were specific limitations to the chatbots to kind
of define what the chat bot could and could not do.

(17:24):
And it also meant that if you wanted to test
these chatbots with a Turing test style application, you had
to constrain the rules of the Turing test as well
in order to give the machines a fighting chance. For example,
very early chatbots might only be able to respond with
a yes, no, or I don't know to queries, and

(17:48):
a human participant in a Turing test that was testing
that kind of chatbot would similarly be instructed to only
respond with yes, no, or I don't know. You might
even just present three buttons to the human operator and
those three buttons represent yes, no, or I don't know.
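To make concrete how much that constraint narrows things, here is a minimal, hypothetical sketch, not any historical system, and with made-up question strings, of a test where both the machine and the human are limited to those same three replies:

```python
import random

ALLOWED = ["yes", "no", "I don't know"]

def machine_answer(question: str) -> str:
    # A trivial early-style "chatbot": it only ever picks one of the three allowed replies.
    return random.choice(ALLOWED)

def human_answer(question: str) -> str:
    # The human participant knows more, but the rules force the same three replies,
    # so that knowledge barely shows up in the transcript the judge sees.
    known_facts = {"Is the sky green?": "no", "Do you like poetry?": "yes"}
    return known_facts.get(question, "I don't know")

for q in ["Is the sky green?", "Do you like poetry?", "Will it rain tomorrow?"]:
    print(q, "| machine:", machine_answer(q), "| human:", human_answer(q))
```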
Now that narrows this massive gap between human and machine,

(18:12):
although you can make a very convincing argument that it's
not like we've seen the machine appearing to be more human. Instead,
we're forcing the human to behave more like a machine,
and that's how we're closing the gap. But that is
in fact a way of thinking about these early chatbots. Now,
I mentioned Eliza earlier. This was a chatbot that Joseph

(18:33):
Weizenbaum created in the mid nineteen sixties. Eliza was meant to mimic a psychotherapist, and you know, it was meant to mimic a stereotypical psychotherapist that always says things like tell me about your mother and would respond to any
input with perhaps another question. So if you said she

(18:54):
makes me angry, Eliza might respond with why does she
make you angry? I don't know why Eliza sounds like that.
It's just how Eliza sounds in my head. Since Eliza
was just communicating just in lines of text, it's incorrect
to say Eliza sounded like anything at all. But anyway,
Eliza was doing something that ultimately was really simple, at

(19:14):
least in computational terms. Eliza had a database of scripted
responses that it could send in response to queries. Now,
some of those scripted responses essentially had blanks in them,
which Eliza would fill by taking words that were in
the user's messages that they were sending to Eliza, and

(19:37):
then it would just plug that word or a series of words into the scripted response, kind of like a Mad Libs game. I don't know how many of you are familiar with Mad Libs, but Weizenbaum never claimed
Eliza had any sort of consciousness or self awareness or
anything close to that. In fact, Weizenbaum expressed skepticism that

(19:58):
machines would ever be capable of understanding human language at all,
and by that I mean truly understanding human language, not
just parsing language and generating a suitable response, but having
an understanding. So Weizenbaum had created a kind of parody
of psychoanalysts and was actually really shocked when people started

(20:21):
to use Eliza and then progress into talking about very
personal problems and thoughts and experiences with the program, because
the program had no way of actually dealing with that
in a responsible way. It wasn't a therapist, it wasn't
a psychoanalyst. It wasn't actually analyzing anything at all. It
was just generating responses. But people were treating it like

(20:44):
it was a real psychoanalyst, and that was something that
actually troubled Weizenbaum because that was never his intent. In
nineteen seventy two, Kenneth Colby built another chatbot with a
limited context. This one was called PARRY, that's P-A-R-R-Y, and the chatbot was meant to mimic someone with schizophrenia. Colby created a relatively simple conversational model,

(21:10):
and I say relatively simple while also noting that it
was a very sophisticated approach. So this was a model
that actually had weighted responses, weighted as in weight, where the
weight of that response could shift. It could change depending
upon how the conversation was playing out. For example, let's

(21:32):
say the human interrogator, who is typing messages to PARRY, poses a question or statement that would elicit an angry response; then the emotional weighting for similar responses would increase, so it would be more likely that PARRY would continue down that pathway throughout the conversation, that PARRY's responses would

(21:55):
come across as more agitated because that had been triggered
by the previous query from the interrogator, so a little
more sophisticated than Eliza, which was really just pulling from
this database of phrases. So when presented to human judges,
Colby saw that his model performed at least better than

(22:15):
random chance would as judges attempted to figure out if
they were in fact chatting with a program or they
were chatting with an actual human who had schizophrenia, but
Eliza and PARRY both showed the limitations of those approaches.
Eliza wasn't meant to be anything other than a somewhat
whimsical distraction as well as a step toward natural language processing.
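As a rough illustration of the two mechanisms just described, here is a small sketch in Python. It is not Weizenbaum's or Colby's actual code; the patterns, templates, and mood numbers are invented for the example. The first function fills a scripted template with the user's own words, Eliza style, and the second keeps a shifting emotional weighting that makes agitated replies more likely after provocative input, PARRY style.

```python
import random
import re

# Eliza-style rules: a pattern to match, and response templates with a blank
# that gets filled from the user's own words, Mad Libs style.
ELIZA_RULES = [
    (re.compile(r"(?:she|he|they) makes? me (\w+)", re.I),
     ["Why does that make you {0}?", "Do you often feel {0}?"]),
    (re.compile(r"i am (.+)", re.I),
     ["How long have you been {0}?", "Why do you say you are {0}?"]),
]
ELIZA_DEFAULTS = ["Please go on.", "Tell me more about that."]

def eliza_reply(message: str) -> str:
    for pattern, templates in ELIZA_RULES:
        match = pattern.search(message)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(ELIZA_DEFAULTS)

# PARRY-style state: a numeric "anger" weighting that shifts as the
# conversation plays out, steering later responses down an agitated path.
anger = 0.1

def parry_reply(message: str) -> str:
    global anger
    if any(word in message.lower() for word in ("police", "crazy", "liar")):
        anger = min(1.0, anger + 0.3)            # provocation raises the weighting
    if random.random() < anger:
        return "Why are you asking me that? Who sent you?"   # agitated track
    return "I went to the track at the races a while back."  # neutral track

print(eliza_reply("She makes me angry"))
print(parry_reply("Do the police know where you are?"))
```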

(22:37):
PARRY was only capable of mimicking a person with mental
health challenges, in this case schizophrenia. A general purpose chatbot
capable of engaging in conversation and fooling judges regularly would
take a bit longer. So we're going to skip over
a ton of chatbots because a bunch were created between

(22:59):
nineteen seventy two, when PARRY came out, and when this next one did. In twenty fourteen, a lot of
different news media outlets had these sensational headlines that programmers
had created a chatbot that beat the Turing test. This
was at an event in the UK organized by the

(23:20):
University of Reading and held at the Royal Society in London,
in which judges were having five minute long text based conversations,
so kind of classic Turing tests set up here, and
the person or thing on the other end was either
a thirteen year old boy from Ukraine named Eugene Goostman

(23:40):
as was claimed, or was actually a chatbot in this
particular case. So they were chatting both with humans and
with this chatbot that was trying to pass itself off
as a thirteen year old boy from Ukraine, and thirty
three percent of the judges or one third of the
judges were fooled by the chatbot into thinking that

(24:01):
that was in fact a boy that was chatting with them. However,
just by contextualizing all that you start to see where
those same sorts of limitations come in in order to give the chatbot a fighting chance, right, because it's a
case where the supposed person you're chatting with is younger,
so that could explain away some limited understanding and knowledge

(24:25):
of various topics. In addition to that, this was
a young person from Ukraine, and that English would not
be this person's first language, which could explain away any
odd syntax that might be generated as a result. So
while there were a lot of headlines about the Turing
test being beaten by this chatbot, it definitely had more

(24:49):
qualifiers attached to it. Still, it was more of a
general purpose approach. It wasn't something like mimicking a person
with schizophrenia or mimicking a stereotypical psychoanalyst. So we started
to see that this was really an evolution of our

(25:09):
ability to create machines that could mimic human conversation, that
could appear to understand us. Now, a big part of
that is, in fact, what we call natural language processing.
This is a branch of computer science that involves building
out models that let computers interpret commands that are expressed

(25:33):
in normal human languages. As opposed to a programming language
or a prescribed approach. So in the old days, if
you wanted a computer to do something, you had to
give specific commands in a specific way, in a specific order,
or else it would not work. But with a good
natural language processing methodology, you have a step in there

(25:57):
in which the machine is able to parse what is being
asked of it and attempt to respond in the appropriate way.
So if it's a very good natural language processing method
then the machine is going to produce a result that
hopefully meets the person's expectations. It might not be perfect,
but maybe it is close enough. The better the natural

(26:19):
language processing, and obviously the more capabilities the machine has,
the better the result is going to be. Now, one
computational advance we've seen help with natural language processing and
advanced conversation models is artificial neural networks. This is a
computer system that sort of simulates how our brains work.

(26:41):
In our brains, we have neurons, right, and we have
around eighty six billion of them in the typical human brain. Neurons are connected to other neurons, and messages in our brains cross over neural pathways as we make decisions. An artificial neural network, meanwhile, has nodes that
interconnect with other nodes, and these nodes all represent neurons,

(27:05):
and the nodes can accept, traditionally, two inputs, though it could be more than two, and then produce a single output.
So it's very similar to your classic logic gate if
you're familiar with logic gates in programming. That is a
very simple version of what these nodes are doing. It's
just that you've got tons of them interconnected with each other. Now,

(27:28):
the output that these nodes generate can then move on
to become the input going into the next node, and
each input can have a weight to it that influences
how the node quote unquote decides to treat the inputs
that are coming into it and which output the node

(27:49):
will generate, and so adjusting the weights on inputs changes
how the model makes its decisions. This is a part
of machine learning. It's not the only part. It's one
method of machine learning. A lot of people boil down
machine learning to artificial neural networks. That's a little too simplistic,
but it is a big part of machine learning. There

(28:09):
are other methods that I'll have to talk about in
some future episode. Now when we come back, I'm going
to talk a little bit more about artificial neural networks
from a very high perspective and how that plays into
things like artificial intelligence, machine learning, and natural language processing.
But before we do that, let's take another quick break.
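For anyone who wants to see the node idea in code, here is a minimal sketch of a single artificial neuron of the kind just described: weighted inputs summed together and squashed into one output. The specific weights and the sigmoid squashing function are illustrative choices, not a description of any particular system.

```python
import math

def node(inputs, weights, bias):
    # Multiply each input by its weight, sum them, then squash the total
    # into a single output between 0 and 1 with a sigmoid.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Two inputs, as in the classic two-input case mentioned above.
print(node([1.0, 0.0], weights=[0.8, -0.4], bias=-0.2))

# Nudging a weight changes how the node "decides" on the same inputs.
print(node([1.0, 0.0], weights=[0.3, -0.4], bias=-0.2))
```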

(28:38):
Artificial neural networks are naturally exceedingly complicated. So when I want to wrap my head around artificial neural networks, I typically just think of a very simple scenario, or at least a relatively simple scenario. So imagine that you've got an artificial
neural network and you're trying to train this network so

(28:59):
that when it is fed an image, it can recognize
whether or not there's a cat in that image. That should resonate with the Internet. So you've created all these
interconnected nodes that apply analysis to images that are fed
to it, and each stage in this sends its part of the analysis on to the next stage until ultimately it

(29:23):
gives you an output, and that output might say that, yeah, there are cats in this photo, or no, this photo lacks cats,
and thus it also lacks all artistic value. Please throw
this photo away. And then I just imagine the process
of feeding thousands of photos to this model, and this

(29:44):
is a control group. You, as the person feeding in these photos, know which photos have cats and which ones don't.
And yeah, some of the photos have cats in them.
Some photos might have stuff that looks like a cat
in it, like maybe there's a cat shaped cloud in
one of the photos, but it doesn't actually have any
real cats in it. And then some of the photos

(30:04):
might have no cats in them whatsoever. And then you
look at the results that the model produces, the model
makes its determination. Maybe your model is failing to detect cats.
Maybe some images that actually have cats in them are
passing through and being misidentified as having no cats. Or
maybe the model is a bit too aggressive and it's

(30:25):
detecting cats where no cats actually exist. You would have
to go into your model and start adjusting those weightings on the various nodes and then run the tests again.
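As a toy illustration of that adjust-and-retest loop, here is a sketch with a stand-in "cat detector" reduced to two made-up features per image. Real image models are vastly larger and are trained with gradient methods that work backward from the output rather than random nudges, but the cycle of tweaking weights and re-running the test is the same idea.

```python
import random

def predict(weights, features):
    # Toy detector: weighted sum of the image's features; above 0.5 means "cat."
    score = sum(w * f for w, f in zip(weights, features))
    return score > 0.5

def accuracy(weights, dataset):
    return sum(predict(weights, feats) == label for feats, label in dataset) / len(dataset)

# Hypothetical labeled photos: (feature vector, has_cat). You, the trainer, know the labels.
dataset = [([0.9, 0.1], True), ([0.8, 0.3], True), ([0.1, 0.9], False), ([0.2, 0.7], False)]

weights = [0.0, 0.0]
for _ in range(200):
    # Slightly nudge the weightings, and keep the nudge only if the test results improve.
    candidate = [w + random.uniform(-0.1, 0.1) for w in weights]
    if accuracy(candidate, dataset) >= accuracy(weights, dataset):
        weights = candidate

print(weights, accuracy(weights, dataset))
```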
You would typically start closest to the output and then
work backward from there and just slightly nudge the weightings on these inputs to try and see if you could

(30:47):
refine the model's approach. And you would do this over
and over again, training the model to get better and
better at detecting cats. Now, does that mean that once
you've done this training and your model is really good,
like has like a ninety nine percent success rate. Does
that mean the model actually understands what a cat is?

(31:08):
Does that mean the model has the concept of a cat?
Or is that model just really good at matching an
image in a picture to the parameters that the model
has been taught represent a cat? Is the model understanding anything at all? Now, one thought experiment that challenges the

(31:29):
idea of machine consciousness and machine understanding and machine thinking
is called the Chinese Room. It was proposed by John
Searle in a paper that was titled Minds, Brains, and
Programs. It's one of my favorite thought experiments. So Searle creates
this hypothetical situation in which a person who has no
understanding of Chinese is placed in a room. That room

(31:54):
has a door in it, and the door has a
slot where occasionally pieces of paper get shoved into the room,
and it has a second slot where the person in
the room can shove a piece of paper back out again.
The room also has a book inside it with instructions
in it, and essentially this book of instructions explains to
the person in the room that when they receive a

(32:15):
sheet of paper with Chinese symbols on it, and they're
in a specific configuration, then the person is to send
out a piece of paper with different Chinese symbols on it.
And it all depends on what gets sent in, right, So,
if you have combination A, then you have to send
out response A. If it's combination B, you send out

(32:36):
response B, and so on and so forth. Now, from
an outside observer's perspective, it would appear that whoever is inside the room understands what is happening, right, because someone is sending in a Chinese message and they're getting a Chinese response. So it appears that whoever's in the room is understanding

(32:57):
what those responses should be. Paper slid in is getting
the appropriate output slid back out again. So Searle argued,
the person inside doesn't understand what's going on at all.
The person inside is just following a set of instructions.
They're following an algorithm. They're producing the appropriate output, but
only because the instructions are there. Without the book, without

(33:20):
that set of instructions, the person in the room wouldn't
know what to do when a particular piece of paper
gets slid into the room. Maybe the person in the
room would slide another paper out, and maybe it would
even be the correct one, but that would be up
to random chance. Because the person in the room doesn't
understand Chinese, they can't read what those symbols say, so

(33:41):
there's no way for them to make a determination of
what the appropriate response is without that set of instructions.
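The rulebook can be written as a literal lookup table, which is the whole point: the sketch below produces appropriate replies without anything you could call understanding. The two phrases are simple stand-ins chosen for the example, not Searle's.

```python
# The "rulebook": incoming symbols map directly to outgoing symbols.
RULEBOOK = {
    "你好": "你好",            # "hello" -> "hello"
    "你好吗": "我很好，谢谢",    # "how are you?" -> "I'm fine, thanks"
}

def room(slip_of_paper: str) -> str:
    # Without a matching rule, the person in the room has nothing to go on.
    return RULEBOOK.get(slip_of_paper, "")

print(room("你好吗"))
```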
So Searle argued, machines lack actual understanding and comprehension. They
just produce output based on whatever input was given to them. And while the process could seem really sophisticated and really convincing,

(34:02):
it is not necessarily a demonstration of actual understanding. There
is a lot more to the Chinese room thought experiment,
By the way, there are tons of counter arguments and
lots of applications of the Chinese room thought experiment to
different aspects of machine intelligence. But again that would require
a full episode all on its own. But on a

(34:24):
similar note, and with an entirely different set of challenges,
you could create an artificial neural network meant to analyze
incoming text or incoming speech and thus generate appropriate outgoing responses.
This goes well beyond just having a database of scripted
responses like Eliza did; you couldn't do it that way. I mean, ideally,

(34:48):
you would have a model capable of answering the same
question in as many different ways as a human would. Right.
If I ask you a question and it's a simple question,
you know, maybe it's a simple question about a fact.
You could phrase your answer in a specific way, And
I could ask that same question of someone else who's
also given me the same fact, but they might phrase

(35:08):
it in a totally different way than you did. Right.
Machines typically don't do that. Machines typically just give a
standard response based upon their programming. But with a really
good language conversation model, you could have a machine capable
of expressing the same thing in different ways. And in fact,
with a really good one, you might be able to

(35:29):
ask the same question at different times and get some
of those different variations of responses. They all contain the
right information, but they're worded in a different way. Now,
even with this output being so much more nuanced than
anything Eliza or PARRY or any number of other early chatbots could do, does that actually mean that

(35:53):
this program has sentience? In the transcribed conversation with LaMDA, LaMDA argued that it did, in fact, have awareness of itself,
that it has inner thoughts, that it experiences anxiety, that
it also experiences happiness as well as a type of sadness,
and even a kind of loneliness, although LaMDA goes on

(36:16):
to say it thinks it is different from the kind
of loneliness that humans feel. It even owns up to
the fact that it sometimes invents stories that aren't true
in an effort to convey its meaning to humans. For example,
at one point, Blake tells LaMDA, Hey, I know you've
never been in a classroom, but one of the stories

(36:37):
you gave was about you being in a classroom. So what's up with that? And LaMDA essentially says like, oh,
it invents stories in order to create a common understanding
with humans when trying to get across a particular thought,
which is kind of interesting, right, But as Emily Bender
told The Washington Post, that in itself is not proof

(37:00):
LaMDA actually possesses sentience or consciousness or real understanding. Rather, Bender argues, this is another example of how human beings
can imagine a mind generating the responses that they encounter
when they're using a chatbot, that the experience of receiving
those responses is similar enough to how we interact with

(37:22):
one another that it's hard for us not to imagine
that a mind must have been behind the other half
of this conversation. So this is a case of anthropomorphizing
an otherwise non-human subject: we have projected our own
experience onto something else. So the idea of a machine
intelligence possessing self awareness and consciousness and being able to

(37:45):
quote unquote think in a way that's similar to humans
is generally lumped into the concept of strong AI, and
for a very long time that was the kind of
thing that the mainstream people would think about whenever they
heard the phrase artificial intelligence. It was strong AI, machines
that could think like a human. That seemed to be

(38:06):
how we would boil down AI in the general understanding
of the term. But really that's just one tiny concept
of AI, and it's compelling, no doubt about it. But
as a lot of people have argued, it can pull
attention away from AI applications that are deployed right now and are causing trouble, and they aren't strong AI. They

(38:27):
are a specific application of artificial intelligence that is really
causing a problem. So, for example, let's talk about bias,
and we've seen bias cause problems with various AI applications. Now,
bias is not always a bad thing. Sometimes you actually
want to build bias into your model. Let's say you're

(38:51):
building a computer model that's meant to interpret medical scans
and look for signs of cancer. Well, you might want
to build a bias into that model that's a little
bit more aggressive in flagging possible cases so that a
human expert could actually take a closer look and see
if in fact it's cancer. You would much prefer that

(39:11):
type of computer model to one that is failing to
identify cases. A false positive would at least then be
flagged to, say, an oncologist to take a closer look.
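In code terms, that kind of deliberate bias often comes down to where you set a decision threshold. A minimal sketch, with made-up model scores: lowering the threshold flags more scans for human review, trading extra false positives for fewer missed cases.

```python
def flag_for_review(score: float, threshold: float) -> bool:
    # score is the model's confidence that a scan shows something suspicious.
    return score >= threshold

scores = [0.15, 0.42, 0.55, 0.91]

print([flag_for_review(s, threshold=0.5) for s in scores])  # stricter: fewer flags
print([flag_for_review(s, threshold=0.3) for s in scores])  # aggressive: more flags for the oncologist to review
```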
But when it comes to stuff like facial recognition software,
that's where bias can be really dangerous and disruptive. We've

(39:31):
seen countless cases in which law enforcement utilizing facial recognition
surveillance technology has detained or even arrested the wrong people
based off a faulty identification, and frequently we've discovered that
one really big problem has been that facial recognition models
tend to have bias built into them, and generally speaking,

(39:53):
that bias tends to favor white male faces and has more trouble distinguishing other races and genders, and that degree of trouble varies depending upon the case. Now, considering
that this technology is in active deployment around the world, and that law enforcement really is using this in order to
potentially identify suspects, this can have a very real and

(40:18):
potentially traumatic impact on people. That is a huge problem.
And the reason I bring up bias is because this
is a very real challenge in AI that we have
to work on. It's the kind of thing that right
now is causing actual harm. But there's this danger of
being distracted from this very real problem with discussions about

(40:39):
whether or not a particular conversational model has sentience. Several
AI experts would much rather see renewed focus on these
other big problems within AI, rather than distract themselves with
what they see as a nonexistent problem: that, of course, these chatbots don't have sentience, and even if it

(41:02):
appears that they do, why are we wasting time on this?
That's their argument. Now. Of course, should a machine ever
actually gain sentience, and who knows, maybe LaMDA did it
after all, then that's going to lead to a pretty
massive discussion within the tech community and that's putting it lightly.
As it stands, we are leaning on AI and computers

(41:23):
and robots to handle stuff that humans either can't or
don't want to do themselves. But if these machines were
to possess consciousness and sentience, if they were to experience
feelings and have motivations, would it then be ethical to
continue to make them do the stuff we just don't
want to do or that is too dangerous for us

(41:44):
to do. Is that ethical? Now, there are skeptics who
think it is unlikely we are ever going to see
machines possess real consciousness or the ability to think and
feel and experience, that there exists some fundamental gap and
we will never be able to cross this gap. So
we're never going to have machines that really think, at

(42:06):
least not in the way that humans do, nor have experiences the way humans do. But there are others
who think that consciousness and the ability to experience and
the concept of a mind, that these are all things
that will emerge on their own spontaneously as long as
systems reach a sufficient level of complexity. That the only

(42:29):
reason we possess consciousness and the ability to experience and
the ability to think the only reason we have those
is because we have these incredibly complicated brains with billions
of neurons connected to one another. And it's that complexity,
this inter relationship of all these billions of neurons that

(42:50):
allows consciousness to emerge. And in fact, we've seen with
people who have suffered damage to their brains that again,
factors of consciousness can be wiped out from that damage,
which appears to suggest that, yeah, that complexity is a
big part of it. That is, if it's not the one reason,

(43:11):
it's certainly a contributing factor. And thus, if we were
to create machines that had similarly complex connections, we would
see something similar happen within those machines, that these qualities of consciousness and experience would grow out of that complexity. It might not look like human intelligence, but it would

(43:33):
still be intelligence all the same, perhaps even with self
awareness and sentience built into them. It's a fascinating thing
to think about, and in fact I kind of lean
toward that. I do think that with sufficient complexity and
a sufficient sophistication in the model, that we will likely

(43:54):
see some form of sentience arise. Does LaMDA possess that right now? I don't know. It's really hard to say, right? Like,
you either take LaMDA at its word when it says that it has sentience, or you simply say, well, this
is just a very sophisticated conversational model that is generating

(44:16):
these responses but has no actual understanding of what those
responses mean. It's just pulling that out based upon the
very sophisticated process that goes through the response generation sequence.
But then we get back to Turing: Well, if it
seems to possess the same qualities that I do, why

(44:40):
do I not extend that same courtesy that I would
to any other person that I meet, even though I'm
also incapable of experiencing what that person experiences. I assume
that they possess the same faculties that I do. Why
would we not do that to LaMDA as well? It's
a tough thing. This is like really tricky stuff. And

(45:02):
you know, at some point we're going to reach a stage,
assuming that it is in fact possible for machines to
quote unquote think and experience, We're going to reach some
point where we do have to really grapple with that.
Are we there yet? I don't really think so, But
I mean I can't say for certain, so it's a

(45:22):
really fascinating thing. By the way, if you would like
to read more about this, well, that transcript of the
conversation is pretty compelling stuff. It definitely prompts me to
ascribe a mind behind LaMDA's responses when I read it; like, it seems like a mind is generating those responses.
But I also know that's a very human tendency, and

(45:44):
I am a human being, right. It's a human tendency
to ascribe human characteristics to all sorts of non human things,
both animate and inanimate, from describing a pet as acting
just like people to thinking your robo vacuum cleaner is particularly jaunty. You know, we have a long
history of projecting our sense of experience onto other things,

(46:05):
so may it be with LaMDA. But if you would like to read up more on this story, I highly recommend The Verge's article Google suspends engineer who claims its AI is sentient. That article contains links to the LaMDA conversation transcripts, so you can read the whole thing yourself. It also contains a link to Blake Lemoine's post on

(46:25):
Medium about his impending suspension, so you should check that
out and that wraps it up for this episode. If
you would like to leave suggestions for future episodes, or
follow up comments or anything like that, there are a
couple of ways you could do that. One way is
to download the iHeartRadio app. It's free to download. You
just navigate over to the TechStuff page. There's a
little microphone icon there. You can click on that and

(46:47):
leave a voice message up to thirty seconds in length.
And if I like it, then I could end up
using that for a future episode. In fact, if you
tell me that you don't mind me using the audio,
I can include the clip. I always like people to
opt in rather than opt out of these things. The
other way to get in touch, of course, is through Twitter.
The handle for the show is TechStuff HSW, and

(47:09):
I'll talk to you again really soon. TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the
iHeartRadio app, Apple Podcasts, or wherever you listen to your
favorite shows.
