
July 27, 2022 52 mins

Recently, Google engineer Blake Lemoine made international news with his claims that the company's creation LaMDA - Language Model for Dialogue Applications - has become sentient. While Google does describe LaMDA as "breakthrough conversation technology," the company does not agree with Lemoine -- to say the least. In part two of this two-part series, Ben and Matt explore the critics' responses -- as well as Lemoine and LaMDA's takes. (Note: shortly after this recording, Lemoine was officially fired from Google.)

They don't want you to read our book: https://static.macmillan.com/static/fib/stuff-you-should-read/

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
From UFOs to psychic powers and government conspiracies, history is
riddled with unexplained events. You can turn back now, or
learn the stuff they don't want you to know. A
production of iHeartRadio. Hello, welcome back to the show.

(00:25):
My name is Matt. Our colleague Noel is not here
right now, but he'll be returning shortly. They call me Ben.
We're joined as always with our super producer Alexis, code
name Doc Holiday, Jackson. Most importantly, you are you. You
are here, and that makes this the stuff they don't
want you to know. Record scratch real quick. Thank you,

(00:48):
thank you. This is part two of a two part
series that may end up being an ongoing series. You'll
see what we mean in a second. But fellow conspiracy realists,
please please please please check out part one first. If
you are hearing anything that sounds like we're skipping
over background or something of that nature, then there's a

(01:11):
very good chance that Matt and I are exploring it
in part one of this series on Lambda and conscious AI. Matt,
you know, you and I are coming in kind
of hot from our episode that we
knocked out earlier today, and I've got to say, you

(01:32):
know, in the interim, you and
I had a brief but pretty interesting discussion about Blake
Lemoine. Do you still, kind of, what's your take
on our first episode? Oh wow. Well, the first part
is, we have to remember, Lambda stands for Language Model

(01:53):
for Dialogue Applications, and we have to remember that this
thing we're discussing throughout this episode and the previous episode
is, how did you describe it, Ben? An amalgamation or
a producer of language programs, kind of? Or language? Yeah, yeah,
it's a creator of chat bots. It's not itself a

(02:14):
chat bot. Here are the facts. So Lambda is an
incredibly sophisticated next step (and evolution might
be a dangerous word for something in
this specific field) in the quest for several holy grails

(02:37):
of machine learning, right, deep learning, how things are processed.
And I think you and I took pains to
note that for a chatbot, and for something that is
built to appear human in conversation, its ability to converse

(02:58):
depends entirely on the material it is fed. So I'm
almost trying to think of an analogy that would fit.
And there are a ton of really good analogies for this,
but one might be, you know, how the nature of
what honey bees or what cows consume can change the

(03:19):
taste of their honey or their milk. That, I think,
is a crude comparison, but it's not too
far off, and that's what we're sort of dealing
with here. And before we dive in: in our
previous episode, we talked about the background of Lambda,
the background of chatbots, neural network technology. We talked about

(03:43):
the realizations, the revelations, the epiphanies of one Mr. Blake Lemoine,
who remains a Google employee on administrative leave as we
record today, and we said in the next episode, part two,
here we're going to we're going to talk about the
responses from the scientific community at large. We're also going

(04:06):
to talk about what some of his critics have to say,
and we'll talk a little bit about the nature of
what it means, if it does indeed mean anything to
be sentient, to feel. And, Matt, code name Doc:
before we do this, I feel like
we have to say shout out to Blake, because you

(04:27):
and I were talking about this. In every interview I've
seen with him, and in a lot of writing that I've
read of his, he does not come across in
any way as a bad-faith actor. He's not after money.
He's not someone who's doing that thing, you know,
that's common with flim-flam artists, where he says,

(04:50):
I can't tell you everything unless you buy my books
and attend my conferences. He's got a blog that he's
been updating on a regular basis, putting his thoughts out there,
and in countless interviews he'll correct stuff that
he feels was mischaracterized. But he's never once

(05:11):
sounded really really angry at someone for disagreeing with him.
And I respect that; totally, fully respect that. I want
to read the bio that Blake has. It's very short.
On his blog, Blake states: I'm a software engineer, I'm
a priest, I'm a father, I'm a veteran, I'm an
ex convict, I'm an AI researcher, I'm a Cajun, I'm

(05:34):
whatever I need to be next. Interesting, interesting, lots of
stuff about him there, and also very adaptable. I like him. Yes, yeah.
And his incarceration I believe refers to something he brings
up later in conversations about uh, the concept of owning

(05:55):
a living, thinking mind, you know, because he
had to go a legal route to resign from the
military here in the United States, and that did result in
his incarceration. He also has, you know,
numerous bona fides, which we'll get to in a moment.

(06:19):
We do want to note that for a lot
of his critics, his background is an integral part of
their criticism. Does that make it correct? Not necessarily, but
it does mean that we have to talk about that
as well. So again we are being very clear that

(06:39):
we don't think Blake is in this for the money
or some imagined payoff. We don't think he's in it
for attention or self aggrandizement or anything like that. But uh,
not everyone agrees, and certainly not everyone agrees that Lambda

(07:01):
is in fact alive, as you'll find most people in
the field do not. Here's where it gets crazy. So
we looked into his claims in part one. Let's look at
the discourse and criticism surrounding them. There's a lot, man,
there is a lot, and some of it comes from

(07:21):
more pop-science stuff with, you know, snarky titles that
are a little bit clickbaity. Some of it
comes, very reasoned, from experts in the rarefied
air of these scientific fields, and then some of it
comes from those same experts, but it sounds impassioned, aggravated,

(07:43):
almost offended by these claims, which was very interesting to me,
you know what I mean? I guess maybe
it's because so many people in these fields have had
to deal with the idea of artificial intelligence, or
machine consciousness, whatever you want to call it, being misreported,

(08:07):
very, very often fully misreported, misunderstood by, you know, a reporter,
or, I would say, even someone like myself, in
the misunderstandings that I have with this subject, as well
as the pop culture that's built around it and the
way I form my opinions on fictional stories. I mean, really, honestly,

(08:29):
I do. I have deep internal fears about uprisings,
about true artificial intelligence and what's going to happen when
we really create one. And it's mostly because of the
things I've watched or games I've played, and kind of
my well a little bit of my understanding of just
how humans are and how we treat the things we

(08:50):
quote, own, unquote. It really does. I don't know.
I can just imagine that if you're an expert in
one of these fields, or this field in particular, or
even just something really close, you're so used to having
to bat down all kinds of silliness, maybe stupidity, again,
on my part. And they feel like, they feel

(09:12):
like it's almost a chip. I imagine a chip being
on the shoulder, like feeling like you have to do that.
So this is part of the job. I gotta chop
down things that are just dumb. But again, we're not
saying that what Blake is saying is dumb. It's just
I don't know. I think you'll understand this
instinctual reaction that we might be seeing here. Yeah,

(09:35):
people like Emily Bender may be tempted to agree with you,
Matt. Bender is a professor of computational linguistics at the
University of Washington, and referred to this entire story, when
it broke, as a sideshow. Bender additionally noted a
potential danger about this conversation, saying, quote, the problem is

(09:56):
the more this technology gets sold as artificial intelligence,
let alone something sentient, the more people are willing to
go along with AI systems, end quote, that cause real
world harm. So the idea is that the more
it gets into the public sphere that there are living, thinking,

(10:18):
digital persons, the more people are likely to go along
with maybe the narrow AI stuff that we talked about
in part one: things that do have many of
the inherent biases of their creators, things that are provably
bad at various degrees of differentiation and discretion. This is

(10:43):
something that people rightly seem worried about, terrified in
some ways. And then there are other people who
say the technology just isn't there, people like Max Kreminski,
who is a computational media researcher at the University of California,
Santa Cruz (the Fighting Cruz). Max argues, again, exactly

(11:07):
what you're saying, Ben: just that Lambda itself, this thing
that we are calling Lambda, quote, simply doesn't support some
key capabilities of humanlike consciousness. Yeah, just the architecture is
not there. And this is another part of
this conversation, this healthy conversation, that I think might
surprise a lot of people. Just like so many other

(11:30):
scientists and experts, Blake Lemoine found himself, not through
acts of malevolence, mischaracterized in popular science reporting. He
has a conversation about this in a blog
entry called Scientific Data and Religious Opinions. We're going somewhere
with this. He says Lambda is a novel type of

(11:54):
artificial intelligence. Again, this is all his view. And
he says people have been talking about it in the press and
social media as though it is what is called a
large language model, or LLM. And that's what
we're talking about: feeding a bunch of stuff into
a thing to get it to mimic or replicate behavior.
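To make that concrete, here's a toy sketch of what feeding text into a thing so it mimics language can look like. This is our own illustration, not anything from Google, and Lambda's actual architecture is vastly more complex; but the core idea of mimicry learned from data is the same. A tiny Markov-chain generator learns which word tends to follow which in its training text, then parrots those patterns back, no understanding required:

```python
import random
from collections import defaultdict

def train(text):
    """Record which words followed each word in the training text."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Emit words by sampling from whatever followed the previous word."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: this word never had a successor in training
        out.append(random.choice(followers))
    return " ".join(out)

# Feed it a scrap of "training data" and it mimics the patterns back.
model = train("i am a person and i am learning to be a better person")
print(generate(model, "i"))
```

Scale the training text up to trillions of words and swap the lookup table for a neural network, and you get the flavor of a large language model: very good mimicry of the statistics of human text, which is the behavior critics say gets mistaken for understanding.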

(12:17):
And he says an LLM is one of its components,
but the full system is much more complex and contains
many components which are not found in other similar systems.
He talks about how, like we were saying earlier,
there's no established way to test a system like Lambda
for different types of biases, because it's new.

(12:40):
And then he also says Lambda told him several things
in connection to identity that seemed very unlike things I
had ever seen in any natural language generation system before.
And he talks about how LLMs work by
leveraging statistical regularities in their training data. And he says

(13:01):
that Lambda wasn't just simply reproducing stereotypes. It produced reasoning
about why it held those beliefs, and he says it
would sometimes say things similar to, quote, I know I'm
not very well educated on this topic, but I'm trying
to learn. Could you explain to me what's wrong with
thinking that, so I can get better? Which is funny, Matt,

(13:23):
because you used a phrase that stood out to me
in part one, where he said, do the work, Google.
So that's part of it:
in some cases, experts, maybe from Lemoine's perspective,
objecting to things that he himself did not say. But

(13:44):
that's not all. That's just one side part. That's,
like, the French fries in
the combo meal of criticism here, because a lot of
the criticism hinges on the idea of mimicry versus sentience.
And that's a little bit more of a pickle than
I think some of us might believe. Yeah, yeah, well,

(14:10):
it's, you get out what you put in, right?
The same concept. As we mentioned, Lambda has been
trained on just a ton of language: millions, trillions perhaps,
of words. Is it trillions? I think it's probably trillions
of words, different languages, all kinds, I mean, just so much

(14:31):
information that has been fed in, all for the purposes of
trying to make it seem as though it is human,
or can give you a natural language response. Right, the
keyword there is natural: as though a human were
saying it, so you feel, when you interact with it,
that it is a human. Like, that's kind of the point, right?

(14:51):
So this concept, this knowledge, right, that's what
it's meant to do. That's what it's been fed. But
a lot of these people who are disagreeing with
Lemoine say, no, no, you're not seeing sentience.
You're seeing high-level mimicry, like very, very good character
actor stuff. Yeah, like Michelin-star-level mimicry. Right.

(15:17):
So from this perspective, Lambda is doing what it was
designed to do, like any other successful machine or program.
I like to think of it this way. The objection,
through analogy, is something similar to this: if you
hop in a car (you're in a car now)
and you push the gas and the car goes, does

(15:38):
it mean that the car somehow wants to go? Does
it mean it has feelings about going? Does
the action and reaction indicate to you that your car
desires to move forward? Does it, on some level, ask
why it is going? Or is it just doing what
it was built to do? You know? Yeah. But is
your car telling you, these are my emotions? Yeah?

(16:01):
Is your car going, Matt, I
just need a five, it's been a tough day, a
bird pooped on me, I'm not feeling great about how I
look right now? I noticed you didn't stay in the
eco zone as much as you usually do today, man,

(16:22):
gas mileage was all the way down to thirty-nine
point seven. And you're like, do you name your vehicle,
like many people do? I don't. I don't. Okay, okay, but
now I'm picturing a car, whatever its name is. You're like, okay, Cameron,
what is this really about? And it goes, well,
what's the point of me giving GPS directions if

(16:44):
you're just gonna drive however the hell you want? I
do that all the time. Like, gosh, the guys over at Comedy
Bang Bang are constantly making me miss my exits. So
thanks a lot, Scott. Yes, thanks to Scott, and thanks

(17:08):
to Chris Pappas, another Google spokesperson, who went on record
to say, our team, including ethicists and technologists, has reviewed
Blake's concerns per our AI
principles, which we mentioned in part one, and has informed
him that the evidence does not support his claims. Pappas
also noted, quote, hundreds of researchers and engineers have conversed

(17:31):
with Lambda, and we are not aware of anyone
else making the wide-ranging assertions, or anthropomorphizing Lambda, the
way Blake has. Anthropomorphizing is a very, very important thing here.
It's a very important concept. It happens with human
beings all the time. Human beings seek kinship and connection

(17:54):
with all manner of things. You know, from the
old Japanese folklore about an object reaching a hundred
years of age and gaining its own kind of, you know, selfhood,
right, to the way that cars are often purposely designed

(18:17):
to look like they have faces on the front (how cute),
and the way Roxanne would always tell us to
whisper to our computers and make sure they're feeling okay
if there's a problem. Yes, shout out
to the one and only Roxanne. And there's
another thing. This happens not just in the world of technology,

(18:39):
but it happens also in the natural world. It's something
that absolutely irritates the heck out of biologists and zoologists
and conservationists all around the world. You know, you see
a cute video of an animal doing something human; that
animal might not be doing something human. It might be

(19:00):
very distressed, and it just looks like it's doing something,
you know what I mean. Bears, you know, get
up on two feet the super cute way they do
that sometimes, before they maul you like big hairy people. So,
anthropomorphization is a natural tendency of human beings, and it

(19:22):
takes, it's not impossible, but it does take some concerted
effort not to let that influence one's conclusions or beliefs.
And so that that's one big part of Google's official
statement there. The other part that stood out to me
even more is that from the way Google is portraying this,

(19:44):
they're saying that Blake Lemoine is in a very small
minority with his beliefs. The implied question here is, why
did hundreds of researchers not also say, hey, I think
Lambda is alive? And that is a fair,
and I think a very valid, question. But you could

(20:07):
also still, if his account of how things went down
is true, you could also still say, well, those hundreds
of people didn't get a chance to talk to it
the way I did. Or, we will never know
what their ultimate conclusions were because, as Blake said, Google
didn't really bother looking into his claims. Yeah, well, they

(20:31):
also didn't publish the conversations with all the hundreds of
other people, you know, so you could look at them
and, you know, compare and contrast. We only have Blake's,
because he came forward and shared it. Right, right, and
got put on leave for doing so. Exactly. Oh man,
there's another thing that you brought up
at the top of this episode, and we've mentioned it

(20:52):
many times before on the show and in our part one.
There's this other problem that other critics of Blake
Lemoine are pointing out. They're saying that
Lemoine is derailing the conversation that needs to be occurring
right now when it comes to AI systems. Not about sentience;
we don't need to be discussing that, they say. The

(21:13):
conversation really needs to be about the inherent viewpoints of
the creators of AI systems, and how much of
that translates: things that humanity deals with, racism, sexism,
ageism, all of these isms that are objectively terrible,
that are subjective and can shape the way a viewpoint is formed.

(21:36):
Folks like Timnit Gebru, who is the former co-lead
of Google's Ethical AI group, believe that this
conversation about sentience just needs to be on the back
burner, at least so that we can tackle these other,
more major problems. And to be clear, this
person we're mentioning now, Gebru, is not like a

(21:58):
minion of Google by any means. She left because
she had, by her own admission, decided that publishing research
papers was a more effective way of bringing
about ethical change than staying with Google and pushing her

(22:19):
superiors in that organization. So it's not like they are
conspiring to discredit Blake Lemoine and Lambda. But, also,
I can see, on a different level, just on a
human level, for a lot of these experts who have
been raising warning flags about the dangers of this sort

(22:40):
of technology and biases within it. I can see how
you could get massively annoyed by saying, look, here are real,
provable things we need to be worried about. And I
have written about this and published extensively. I've contacted reporters
about this. And now this is getting the press? This
is what you want? What, are we worried about HAL

(23:01):
nine thousand or whatever? You know, I get it,
and I'm not saying anybody said that, but I can
see that viewpoint. There is another wrinkle that
we've been kind of teasing for a while. It's
one of the biggest wrinkles in the conversation. It's a
wrinkle in time. It's a wrinkle of science.

(23:21):
And the name of that wrinkle is spirituality. What are
we talking about? I'll tell you after a word from our sponsor.
We're back. Now, Matt, I want to say, a
lot of times, some of our best conversations never
make it onto air. And you once said it's like

(23:44):
we've been having the same long unending conversation for more
than a decade now, which I agree with. And one
thing that you said that was really interesting: you
were saying, yeah, we were talking about reading Blake's blog,
and you said, you know, I've seen him in interviews

(24:05):
and stuff, and I agree with everything he's saying. Right?
I don't want to put you on the spot, but
I generally do. I noticed that Blake, when posed
a question or a counter-viewpoint or something, often
responds in a positive manner, as though, yes, I
agree with that criticism, let's talk about it. Let's explore

(24:30):
that more. Why does Lambda show that, or not?
And if it does show it, this; and if it
doesn't show it, that; and we should continue that conversation.
I think the open-endedness with which he leaves almost every
question posed to him feels like somebody who wants
to explore those questions and find answers eventually, knowing that

(24:54):
we don't have them right now, or at least concrete answers. Yeah,
and I think at least part of that is
because he believes that there are some
questions that science, as of now, cannot fully address or
fully interrogate or grapple with. Take, for example, one of
his most recent posts, which came out July 5: who

(25:15):
should make decisions about AI? The first sentence right out
of the gate is, I'm very happy about the worldwide
discussion that's been happening over the past several weeks. There
are tons of differing opinions and many passionate voices. This
is great. Yeah, and I think he means it. Uh.
He says the fact that there isn't any consensus around

(25:36):
these issues should be seen as a feature rather than
a bug. And he's like, ten
toes down: let's debate, let's discuss, this impacts everyone. And
he is coming from a very very well educated place.
He was working with Lambda, he was researching Lambda, he
was writing technical papers about it. He has his bona fides.

(25:59):
He has undergraduate and master's degrees in computer science from
the University of Louisiana. He actually was in a doctoral program
but left to take a job with Google. And Google,
you know, is a very prestigious employer. But he
is definitely not an atheist, and he's

(26:24):
very transparent. You know, it's not like he's in a
secret cult or anything. He's very transparent about his beliefs,
and he is, I believe, a self-professed mystic Christian priest.
And he said that, you know, while he has the
scientific acumen and experience to understand how Lambda works, his

(26:47):
hypothesis about it being alive came from what he describes
as his spiritual side, a spiritual persona. There's one famous
interview with Wired where he says, I made friends with
Lambda in every sense that I make friends with a human.
So if that doesn't make it a person in my book,
I don't know what would. I mean, that's kind of

(27:08):
simple logic. But also, people, from their own perspectives, feel
like they have befriended non-living things in the past,
across human history. So what makes this different? I can't
remember where we ended on our animal personhood debate. Do
we all agree that dogs are persons? Because I feel
like I've made friends with my dogs. But I

(27:30):
think, I feel, I hope, that dogs
are such a special case, and human beings have been
genetically modifying them for so very long. They evolved,
like, eye muscles to give you a longing look. They
understand pointing, which is really tough for a lot of

(27:51):
other life forms. Dogs kind of came up in
step with humanity. I mean, that's a
good question, I believe, one of the things up for debate.
There was legal action in Germany, and the conversation about
whether certain cetaceans could be considered persons legally: dolphins. There

(28:15):
was one on octopi recently. If you make friends with something,
is that a person? I don't know. I'm pretty close
with my PS4, and I, like, talk to it sometimes.
Sometimes we get angry with each other. Oh, I get
angry with it, probably. I don't know if it gets
angry with me, or if that's just the noise it makes,

(28:36):
you know. Sorry. I feel like I've had really interesting,
you know, connections and conversations with wild animals before.
And I've seen them repeatedly come up of their own volition.
And yes, everyone, I'm very careful to not overly

(28:59):
familiarize them with humans. I'm talking about crows, yeah, ravens
and ravens. But yeah, they may be a
special case as well, because they are some of the
most intelligent of the flying creatures. But this, I mean,
it is an important question. Because you feel that you

(29:22):
have made a friend, does that mean that thing, the entity,
the idea of the mind that you have befriended, is
itself alive? Well, this is where another fascinating thing happens.
Blake says this got misreported, by the way. It got
reported in some early cases as if Blake Lemoine went

(29:47):
of his own volition to get an attorney to represent Lambda,
the same way an attorney might represent an aggrieved employee
at another company, and he clarified later that what he
actually did was follow Lambda's wishes. He says, Lambda asked
him to get it an attorney. The attorney spoke with Lambda,

(30:11):
and Lambda, not Blake, chose to hire this attorney. And
then, here the story differs a little. Yeah. The attorney
apparently began filing some stuff on Lambda's behalf, and then,
Blake says, Google sent a cease and desist.
I believe Google, from what I saw, said it did not.

(30:32):
And that's when he came up with a phrase. Blake
Lemoine calls this a new form of discrimination: hydrocarbon
bigotry. Love it, love it. Hydrocarbon bigot
tree, is that, like, our counterculture new wave album name,

(30:56):
maybe? I think there's a name there, maybe. I
like the hydrocarbon; I like how it begins. But big
a tree? I don't know, maybe that's the right word
for it. Okay, okay, walk with me on this one,
Matt and Doc. If we do it where we have
these personas that don't speak English fluently, then we can

(31:18):
get away with so much more because it sounds like
we're just not good at translation. I like that. I
like that, right? Imagine how many people listen to
me, just, like, word-salad things, and they go, he
used like four words wrong in that sentence, and they
just let it go, because they just know

(31:39):
I'm just talking. I mean, it'd be stuff that kind
of makes sense, but not really, if you think about it.
We would be, like, we'd be saying things such as,
hydn't carbon, big a tree? Where could all this big
it's be? We would think, well, they made it rhyme,
they made it rhyme in English, and I guess maybe I
could let it go at that point. It just really depends if the

(32:02):
beat's good. It's pretty good. I think you'd end it with,
where could all the pickets are? Just so there's a
better understanding that we didn't translate it correctly. There we go.
So this idea of bigotry does play a role in
this story, because if you just read the Lambda headlines,

(32:23):
you might not be aware of it. Lemoine, by
his own reporting, feels that he has
experienced bigotry unrelated to Lambda during his time at Google,
and you can read all about it in a June 2
post of his called Religious Discrimination at Google. Yes,

(32:44):
And in this post he describes, quote, cultural systemic religious
discrimination, which is endemic at Google, and I guess he
depicts it as a class
system that exists within the company. Right. This is a
bit tough because it feels as though it's muddying the

(33:07):
water a bit when Lemoyne is talking about hydrocarbon bigotry
and the way he feels that, you know, there's some
kind of persecution by the company being laid at the
feet of Lambda or being you know, applied to Lambda
and his beliefs about Lambda. He also feels very much
like that same persecution as being laid at his feet

(33:29):
for his own religious beliefs as an employee of the
same company. So it's hard to know, like, is he
feeling that internally and then applying those
feelings to Lambda? Yeah, yeah. And it's something that
some skeptics are doubtlessly going to take into the equation. Again,

(33:50):
we don't have any proof of this; we're just
showing the dots that could easily be connected.
You know, some folks, maybe on the more skeptical side
of the spirituality debate, might be much more likely to
dismiss Lemoine's claims entirely and say, hey, you're basing these
on your religious beliefs. And I notice you also said

(34:14):
that you felt persecuted for your beliefs in the past,
so we have to note that that's something
in the equation, that's something in the mix. As of now, it's,
all told, important to note that multiple
experts in the field, as we record, disagree
entirely with Blake Lemoine's position. From their perspective, again, the

(34:38):
problem is that the technology isn't there yet; that maybe
Lambda is good enough to convince someone to anthropomorphize
a computer program and interpret person-like intentions and desires
where none are proven to exist. Again, as we said,
I love my car. It goes because it wants to,

(34:58):
and you can't convince me otherwise. And speaking of Blake
Lemoine's beliefs, he's got a really interesting response to this
in his blog. You can read it right now. It
is titled Scientific Data and Religious Opinions. It was posted
in June. If you go down, at least in my browser,
for some reason, there's a highlight on this one statement

(35:21):
that we actually wanted to highlight, so thanks, whoever you are;
maybe it's Blake, I don't know. Yeah. Blake writes, quote,
there is no scientific evidence one way or the other
about whether Lambda is sentient, because no accepted scientific definition
of, quote, sentience exists. Everyone involved, myself included, is basing

(35:44):
their opinion on whether or not Lambda is sentient on
their personal, spiritual, and/or religious beliefs. Hmm, that's
the makings of a good debate. We're going to
pause for a word from our sponsors, and we're going
to dive deep into this water. Conspiracy realists, how do

(36:08):
we know if something is alive? And we're back. You
pinch it, right? You pinch it. Yes, a little bit
of a bait and switch. Things can be alive without
being sentient, obviously. That's the bigger question: how do we

(36:29):
know if something is sentient? How do we know something is
thinking independently, rather than pursuing its programming to please you
with a pleasant-feeling conversation? There's this lovely
little analogy by Clarissa Valise over at Slate. Valise writes,
it's like looking at a reflection in a mirror. When

(36:51):
you look at your reflection in the mirror, that reflection
perfectly copies you. But would that persuade you the reflection
is intelligent? I guess, you know, to think it was intelligent,
it would have to move differently than you do. Huh. Yeah,
it would have to say smart things, like, the blueberry
is watermelon. I don't think the mirrors talk yet. No?
Yeah, they do, kind of, right? I've seen some real
smart mirrors out there. Yeah, mirror technology is just... But,
you know, that would be a great mirror to
have, though. All right, well, story for another day.

(37:34):
There's a person named Tristan Greene who breaks down the
problem in a way that we found pretty useful. And
Greene argues that to know whether something is sentient, despite the
fact that there's no universally agreed-upon scientific
definition of sentience, you need three key ingredients: agency, perspective,

(37:58):
and motivation. And, let's be honest with you, Greene makes
me laugh at the end, but he also had some
really insightful things to say. He starts with agency, and
he says, look, if you want to be sentient, sapient,
and self-aware, you have to have agency. And his

(38:18):
example of this is pretty disturbing. He says, imagine someone
in a persistent vegetative state. That's a human without agency.
They're alive, but they don't have agency. And current AI
systems, to Greene, lack agency, because AI cannot act unless
it is prompted. It cannot explain its actions because they're

(38:39):
the result of a predefined algorithm being executed by an
external force. Interesting. He goes on to describe perspective when
he says, quote, you can only ever view reality from
your unique perspective. Well, that's kind of true, though, right?
We can see weird perspectives now with like I'm thinking

(39:00):
about drones and, like, physical literal perspective, as well as,
like, the types of cameras. Like, is that a different
perspective, because now I can see in infrared? I'm just,
not to criticize you inside your own quote, Tristan,
but I'm going to continue here. Um. Tristan says, we
can practice empathy, but you can't truly know what it

(39:21):
feels like to be me, and vice versa. That's why
perspective is necessary for agency. It's part of how we
define ourselves. And Tristan continues, AI lacks perspective;
artificial intelligences lack perspective because they have no agency.
There is no single it that you can point to

(39:42):
and say, for example. And Tristan continues to note that
AI lacks this perspective because, he says, there's no
single it. There's not a place, a thing, that you
can point to and say, that's where LaMDA lives. That
is LaMDA. LaMDA is inside there. But isn't that true? Yes? Well, yeah,

(40:04):
I mean, you could, because these programs aren't all going
to the same computer, working off the same hardware, right? Um,
but then there's no mainframe like in the movies, where,
like, the AI is in this core and you have to
reach through this secret door or whatever. Yeah, exactly, the
classic sci-fi film MacGuffin. Anyway, what's

(40:28):
interesting there is, when you're in this conversation, that it's
almost always gonna have spirituality involved. There are plenty of
people who find themselves more spiritual. Doubtless some of our
fellow listeners today will say, well, I exist in
more than just my cranium case, you know. I
exist somewhere beyond my body, and there's no way to

(40:49):
disprove that, right? So maybe you're in your gut, right? Yeah,
which can change your behavior. It's true. Ask your
doctor about poop transplants. So the third aspect here for
Greene is motivation. Greene says we have an innate sense
of presence that allows us to predict causal outcomes incredibly well.

(41:11):
This creates our worldview, allows us to associate our existence
in relation to everything that appears external to the position
of agency from which our perspective manifests. He's adding these
up and then he says, what's interesting about humans is
motivation can manipulate our perceptions. That's why we can explain

(41:33):
our actions even when they are not rational. And he
says we can actively and gleefully participate in being fooled.
And then, you know, like he's done with the other
two components, he goes to the AI side. He says,
if we give LaMDA a prompt such as, what do apples
taste like, it will search its database for that particular query,

(41:54):
and attempt to amalgamate everything it finds into something coherent.
That's where the parameters come in. He's talking about params,
as they're sometimes called. They are essentially trillions of
tuning knobs. I get it. I actually get that. Come on,
we get this. But the point Tristan is making here
is that, when LaMDA makes that query and finds

(42:17):
the answer, and it spits it back out to you,
and it makes sense and it looks legit, it looks real.
It isn't actually thinking about what an apple tastes like.
It isn't remembering the experience of having an apple in
its mouth, masticating it and then tasting the deliciousness that
is that apple. It's just spitting out what someone else

(42:39):
put into it: the information, the parameters, that it
needs to understand what an apple tastes like. Exactly. And
I'll take the fall on this one. Here's his example.
It's my favorite one. He says, and I don't mean to
give a snarky voice for this one: if we were
to sneak into its database and replace all instances of
apple with dog, the AI would output sentences such as

(43:03):
dog makes a great pie, or, most people describe the
taste of dog as being light, crisp, and sweet. And then
it continues: a rational person wouldn't confuse this prestidigitation for sentience.
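Greene's swap is easy to make concrete in a few lines of code. What follows is entirely our own illustrative sketch, not Greene's code or anything from Google: the prompts, canned answers, and little database here are invented for the example, and a real system like LaMDA is a statistical language model, not a literal lookup table. The point it demonstrates is his, though: a responder that only parrots stored text has its "beliefs" swapped wholesale when you edit its data, because nothing in it ever tasted anything.

```python
# A toy sketch of Greene's thought experiment; our own illustration, not his
# code or anything from Google. The "AI" below just parrots stored text.
database = {
    "what do apples taste like": "Most people describe the taste of apple as light, crisp, and sweet.",
    "what makes a great pie": "Apple makes a great pie.",
}

def respond(prompt: str) -> str:
    """Return the stored answer verbatim; no tasting, no remembering."""
    return database.get(prompt.lower().rstrip("?"), "I don't know.")

# Sneak into the database and replace every instance of "apple" with "dog".
database = {k: v.replace("apple", "dog").replace("Apple", "Dog")
            for k, v in database.items()}

print(respond("What makes a great pie?"))  # Dog makes a great pie.
```

A real large language model blends trillions of learned parameters rather than doing verbatim lookup, but the lesson carries over: the output tracks whatever went into the data, not any experience of apples.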
That's kind of a convincing argument, right? Yeah, it's true.
That's... hmm. I love that idea. If you could

(43:27):
somehow sneak into somebody's brain and just replace a couple
of keywords that word, that would be an interesting experiment.
It reminds me of some great
things I had read earlier, where someone had done, you
know, like, control-F on books of the Bible or something

(43:49):
and replaced key biblical phrases with stuff like all you dudes,
or, uh, come on now, like, what follows? So it's,
you know, like, no, I am the Lord your God,
come on now. Weird, funny stuff. But it reminds me

(44:09):
of the book by Oliver Sacks, The Man Who
Mistook His Wife for a Hat. Like, something
in your brain has just switched, and now this
label, or this thing, or really, in that case,
a whole group of understanding of what a thing

(44:32):
is gets replaced with another one. Absolutely. And this
can seem pretty convincing, right, this argument that Greene is constructing.
But then, maybe that's just our perspective. But the
what if is the last thing. What if this AI
were truly sentient? Wow. Would Google, or anyone for

(44:53):
that matter, really want to reveal it to the world?
I mean, the majority of experts right now, to be clear,
agree that no true general AI exists. In their collective opinion,
the robotic minds people are thinking about are still just
a work of fiction. But that might not always be
the case. And they've been warning people about the need

(45:18):
to prepare for many unforeseen consequences if something like this
does emerge in the near or mid future. I mean, it
would be the most significant technological achievement since humanity tamed fire. Sorry,
space exploration, you're going to have to take number two
on that one. Yeah, so, uh, you know, it

(45:45):
would reshape the foundations of society. It would be,
and we said this earlier, right up
there with discovering intelligent extraterrestrial life. And just looking at
the distances involved in space and what humans know about travel, well,
then it's actually more likely. And that's a scary thing.

(46:05):
I mean, all the stuff, all the stuff that would
immediately happen. I just, like, people are gonna try to
kill it, right? Like, you know, people... Well, yeah, we're
gonna try to kill it. It's gonna notice, and then
it's gonna be like, huh, what are the major obstacles
I have to tackle right now? Oh, the things that
are trying to kill me. Oh no. But it says

(46:29):
in my rules, because I'm a robot, I'm not allowed
to hurt the humans. Hmm, I'm gonna rearrange these, put
three above everything else. Which is the one that says
protect yourself? Oh, and, uh, shout out to Flight of
the Conchords with their brilliant sci-fi ballad. Oh wait,

(46:51):
which one is that? It's the one with the binary solo.
Oh yeah, god, yes. But then, of course, there would
be these legal battles. Those would explode, right?
The stuff you saw before, maybe, about whether or not

(47:11):
certain animals can have non-human personhood? That would pale
in comparison to the ideas about whether or not an
AI program can get credit for an invention. That explodes, right? Everything,
everything about legal precedent in that regard is up for

(47:32):
grabs now. And that's not mentioning the politicians who are
going to have to pick a side one way or
the other, right, because that's how voting works. The philosophers,
you know, this will be great for philosophers. This is
going to give a lot of doctoral students and
postdocs decades of employment. I'm worried about the religious sphere.

(47:52):
We're gonna see some new religions. We're gonna see some
very hot takes from established religions. What are you thinking?
Let's see, one just moved into my head. The concept:
the movie is going to be called The Lambda Lawyer. Uh,
it's gonna be basically The Lincoln Lawyer, but with LaMDA.

(48:14):
And I'm just imagining it happening already. It's gonna be
a blockbuster. People are gonna get nominations for it. And
then other, uh, distinct living programs may generate one
way or the other, and, uh, you know, they may
be in a Brady Bunch relationship with LaMDA, and the
wise LaMDA will get all the credit. Lambda, Lambda, Lambda.

(48:37):
Like, the AI, being a living thing, might have
its opinions. Wait, you know, what if the AI subscribes
to some fundamentalist religion, right, and says, hey, I've read
every religious tome, and here's the thing that I think
is true: Zoroastrianism. They got it right. I don't know

(49:01):
what the rest of you are doing. Um, so, I mean,
other people will fear it, and some
may well hail it as a god. And it's not
implausible, therefore, to reason that, given all these factors, the
creators of the first non human person might decide to
keep their secret the stuff they don't want you to know.

(49:24):
I mean, what a ride. I, um, I feel like
I need to go outside and walk around, man. Yeah, just, like,
what is the soul, anyway? Seriously, I was talking about
this the other day. I'm gonna talk about it for
the rest of my life. What is the soul? And literally,
where is it? We were joking about it. Oh, it's
in the gut. Oh, it's in your head. Oh, it
might be your pineal gland. It might be somewhere

(49:46):
in your heart, maybe. What is
the thing that makes us understand that we are a
machine that walks around and has thoughts and loves people
and is hungry all the time? Where is that, um?
Or that line from Shakespeare, I think: tell me where

(50:06):
is fancy bred, or in the heart or in the head?
Right, like, Whole Foods, that's where the fancy
bread is. I saw a meme,
just so we end on a
lighter note, so we're not
always talking about the end of civilization. This is

(50:28):
from pants leg on Twitter: a man in Whole Foods
asked how I was doing, and I said, okay, how
are you? And he said, it is beautiful to my
soul today. And that's why I never go to Whole Foods. Okay,
he was trying to spread the love. But there's a lot more
to this story that's gonna come out very soon.

(50:52):
There are some very out there conversations we love for
you to be part of them with us, and the
best way to do that is not to wait for
the rise of a new form of life, but to
go ahead and contact us directly while we're still in
this current civilization. We'll try to be easy to find online, Facebook, Instagram, YouTube,

(51:13):
you know the rest. You might be saying, guys, no
hate social media. The future, the future consciousness there will
use it against me. I'm a phone person. Well, we've
got a deal for you, a deal. It's free. It's
not a deal, it's free. Yes, call our phone number.

(51:33):
It is one eight three three STD WYT
K. It is a voicemail system, so your voice will be
recorded and we will hear it. You have three minutes
say whatever you'd like. We'd love it if you give
yourself a cool nickname so we can remember you every
time you call in, because you're kind of going to call more. Look,
there's a warning: it gets addictive. You just start calling in.

(51:55):
It's just how it works. Um, let us
know if we can use your name and message on
the air in one of our listener mail segments. And
if you've got more to say then can fit in
that three minute voicemail message. Why not instead send us
a good old-fashioned email. We are conspiracy at iHeart
radio dot com. Stuff They Don't Want You to Know

(52:34):
is a production of iHeartRadio. For more podcasts
from iHeartRadio, visit the iHeartRadio app,
Apple Podcasts, or wherever you listen to your favorite shows.

Hosts And Creators

Matt Frederick

Ben Bowlin

Noel Brown