
July 22, 2022 50 mins

Recently, Google engineer Blake Lemoine made international news with his claims that the company's creation LaMDA - Language Model for Dialogue Applications - has become sentient. While Google does describe LaMDA as "breakthrough conversation technology," the company does not agree with Lemoine -- to say the least. In part one of this two-part series, Ben and Matt explore the nature of sentience, along with the statements of not just Google and Lemoine -- but LaMDA itself.

They don't want you to read our book: https://static.macmillan.com/static/fib/stuff-you-should-read/



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
From UFOs to psychic powers and government conspiracies, history is
riddled with unexplained events. You can turn back now or
learn the stuff they don't want you to know. A
production of iHeartRadio. Hello, welcome back to the show.

(00:25):
My name is Matt. Our colleague Noel is not here
today but will return shortly. They call me Ben. Now,
we're joined as always with our super producer Alexis, code
named Doc Holiday, Jackson. Most importantly, you are you. You
are here, and that makes this the stuff they don't
want you to know. You'll notice I had a slight

(00:47):
imperfection with our beginning here, a little bit of a stumble,
and a lot of people would argue, that's something that
lets you know you are listening to human beings or
humanoid creatures rather than uh, computer generated semblances of intelligence.

(01:09):
And Matt, this is the episode that yeah, Ben, as
you were saying that over Riverside. We, we
use this, um, we use this, it's kind of like
a Zoom that you'd be used to. We connect to
each other virtually and we have these conversations and we
record them. Ben, your voice, as you said, you know,
you're talking to a human, glitched out in this digitized

(01:32):
really strange robot voice. I'm not kidding. Well, we'll see
if it makes it into the edit. We are massively
excited about this, and maybe that's a good sign for
today's episode, which is gonna go pretty in depth in
a controversy that Uh, Matt, you and I have been
very interested in for some time. I would say this

(01:55):
is the newest iteration of a conversation we've been interested
in for most of our friendship. Honestly, Uh, like all
known living things, human beings, regardless of their beliefs, their
socioeconomic status, what have you, do, have one hardwired primal
impulse to propagate, to reproduce, to create things like themselves.

(02:17):
Doesn't mean everybody will, or on an individual level, even
wants to do that, but it does mean that overall
humanity and all other organisms known to science want to
create as many new iterations of themselves as possible, to
expand to and past the limits of what their environment
will sustain. But again, what's interesting about humans, love them

(02:40):
or hate them, is they differ in one startling respect
from all other known forms of life. They're the first
species consciously trying to create, not just more of themselves,
but entirely new intelligent life forms that may one day
surpass them. I mean, it's Promethean, it's the fire of knowledge.
It's the bleeding edge of science. If you ask one

(03:01):
Google employee, it's already happened. This is the story of
Lambda and a man who thinks it has become a
living thing. So long time listeners, you know the drill.
Here are the facts. Now, what what is it? What
is lambda? Lambda? Yeah? It is not a little puppet

(03:23):
character that sings sometimes that I think about. And every
time I think about a lamb, I don't know why.
What's the name of that lamb? Do you know what
I'm talking about? Lamb Chop? Thank you. That's, every time
I look at the word Lambda, that's what I think about. Um.
But it is actually shorthand. It's a phrase: language model

(03:44):
for dialogue applications. Sounds nice. Language model for dialogue applications.
It's interesting because it's a plural term, and it's
describing several things. It's specifically several types of neural language
models that this thing is. Like, their conversational kind

(04:05):
of reminds me of a couple other of these things
we've talked about in the past, mentioned them usually in
our Strange News or Listener Mail episodes, because it's like
a one-off kind of mention, where it's a chatbot.
People call them chatbots because there was a
thing called a chatbot that kind of, at least,
came to my knowledge for the first time as one of
these language models. But this thing is way more sophisticated.

(04:27):
It is. Yeah, Lambda itself is not a chatbot.
It is like a chatbot factory, a kind of chatbot generator.
It's a system for generating very intelligent instances of chatbots.
And and you know, a lot of us, like you said,

(04:47):
think of things like Cleverbot, which we'll get to
in a second. But chatbot technology, at its heart,
choosing very loaded words here, is older than you might think.
You know, the earliest ones date back to the
nineteen sixties and nineteen seventies, which is probably surprising to
anybody not in the field. These are programs like ELIZA

(05:10):
and PARRY. Parry, as in the fencing move. You,
you might be familiar with this basic idea. I think most
people are nowadays. These programs use pretty sophisticated neural networks
nowadays to ingest massive quantities of text. Think of the
scene in Short Circuit where Johnny Five reads an entire library.
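Worth making concrete: the ELIZA-era programs mentioned above used no neural networks at all. They matched the user's words against hand-written patterns and echoed fragments back inside canned templates. Here is a minimal sketch of that approach in Python, an illustrative toy in the spirit of ELIZA rather than Weizenbaum's actual 1966 program; the rules and names here are invented for the example.

```python
import re

# Hand-written pattern/reassembly rules -- no learning, no neural network.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
    (r".*", "Please, go on."),  # catch-all keeps the conversation moving
]

def reflect(fragment):
    # Swap first and second person so echoed fragments read naturally.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(utterance):
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I need a vacation"))  # -> Why do you need a vacation?
```

The illusion collapses after a few exchanges, which is exactly the gap between these early rule-based programs and the neural approaches described next.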

(05:33):
You know, they ingest text that way, and then based
on that they model out responses to human beings or
questioners or conversation partners in a text chat format. And
often that input is via questions and answers, right?
So, differentiating between a query and a response. Yeah, absolutely.

(05:57):
And pro tip depending on which chatbot you interact with, um,
you may find that they are somewhat defensive when you
bring up the question of so called artificial intelligence, a
statement with its own problems. But this neural network stuff
is amazing, and it follows a trend that we see

(06:22):
in so much technology there at the precipice of innovation.
It's essentially a series of algorithms that endeavor to
recognize underlying relationships between sets of data through a process
that mimics, in a crude way, uh, the
way the human brain operates. So in this sense, neural

(06:45):
networks refer to systems of neurons. And like you said that,
any anyone listening today, any of our fellow conspiracy realists,
can converse with a number of chatbots, right, Now, for instance,
all you have to do is go to something like
clever bot dot com and just start typing. This program
will have a kind of interaction with you. Some would

(07:09):
call it a conversation, right, but you're quickly going to
I think even if you didn't know you were talking
to a generative program, you would quickly understand you weren't
talking to a human mind. Would you agree with that?
Am I being too hard on our clever boy

(07:29):
right there? Well, I agree with you, according to the
last time I interacted with Cleverbot. It's been a while,
but yes, you can tell that the answers are like
something somebody else said, or pieces of something somebody else said,
but it isn't quite put together in the same way

(07:50):
in a thoughtful response that I imagine a human would.
But again, you never know, because these things One of
the main things you need to know about these systems. Generally,
they can put out what you put in and really
nothing else. Um as in, they can take the pieces
of what's put in and rearrange them a little bit,
but it's always going to be something that has been

(08:12):
entered into the system, if they're functioning the way they
theoretically should. Yeah, well said.
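That "nothing but rearranged input" point is easy to see in the simplest statistical language model, which just records which word followed which in its training text and replays those transitions. A minimal sketch of the general principle only; this says nothing about Lambda's far more sophisticated internals:

```python
import random
from collections import defaultdict

# A toy bigram model: it can only ever emit words, and word pairs,
# that appeared somewhere in its training text.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start="the", length=6):
    word, output = start, [start]
    for _ in range(length):
        if word not in next_words:  # dead end: nothing ever followed this word
            break
        word = random.choice(next_words[word])
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "the cat sat on the rug" -- recombination, nothing new
```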
Lambda is a sophisticated version of this concept. Again, by no means the first of
its kind, even for Google. In 2020, Google showed off another
AI chatbot called Meena, which is a fascinating story

(08:33):
of its own. Uh, Lambda was announced at the Google
I/O keynote in May 2021. Google I/O, it's this huge
conference, a meeting of the minds, you know. This is
where they revealed a lot of facts about Lambda. Lambda
is powered by AI. It's built on something called the

(08:54):
Transformer neural network architecture that was developed by Google Research
a few years earlier, in 2017. And I love what you're
pointing out about the idea of, um, some of these
kinds of endeavors being able to base their responses only
on what they have received or encountered. So Lambda is

(09:18):
trained on human dialogue and stories, and the ultimate goal
is that it would be able to engage in a
kind of holy grail in this field: open-ended conversation.
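For the technically curious: the Transformer architecture mentioned a moment ago is built around an operation called scaled dot-product attention, which is what lets every part of a conversation be weighed against every other part. A minimal NumPy sketch of that single operation, a toy illustration rather than anything from LaMDA's actual code:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention, per "Attention Is All You Need" (2017):
    # every token attends to every other token, weighted by relevance.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise relevance between tokens
    return softmax(scores) @ V       # blend of values, weighted by relevance

# Three "tokens" of dialogue, each encoded as a 4-dimensional vector.
tokens = np.random.default_rng(0).normal(size=(3, 4))
print(attention(tokens, tokens, tokens).shape)  # -> (3, 4)
```

Stacking many layers of this operation, trained over huge dialogue datasets, is, at a very high level, how models in this family carry context across an open-ended exchange.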
And when we say open ended conversation, it's kind of like, UM.
You know, you're hanging with your friends, right and you

(09:39):
can theoretically say anything you want and they will generate
in their own brains some sort of response that factors
in the entire context of your relationship to each other,
the context of your current, past and future environment. And
you know, sometimes that conversation may result in a response

(10:01):
where they say, dude, what the hell are you talking about?
But their brain has, not even with, maybe, their, um,
not even with their conscious work on it, their brain
has already calculated and factored in so many other things.
And that's, that's what chatbots can hopefully one day do,

(10:24):
you know. Imagine, imagine something like this. I imagine
one day you go to Google or DuckDuckGo
or your search browser of choice, and you start searching
for something, and instead of just returning what you search for,
the search engine returns with a question and says, you know,

(10:45):
why do you want to know about that? Or for
a law enforcement agency, it might be something along the
lines of, I've alerted the authorities that you are attempting
to build a pipe bomb. Just a little dystopian. The
way I think about open-ended conversations is actually how
Google showed off this Lambda when they first announced it.

(11:05):
I guess they made a short promotional video of some
sort that that you can watch right now if you
search for it. And it shows dialogue trees. So like
the moment there's input into Lambda's system, you as the user, right,
it opens almost an infinite number of possibilities of what the
next topic or the next answer or question could be. Lambda

(11:27):
goes through like all the possibilities, answers with one of those,
and then depending on your input back to it, it
does it again. So it really can just like take
you anywhere at any time. And the whole point is
like those conversations that that Ben was describing there, if
you're talking to your friend, you're probably going to if

(11:47):
you go to a new topic of conversation, it's likely
that it is in some way partially related to the
previous part of the conversation. Not always, but likely, right,
and Lambda can kind of take you down those
pathways that are, they're a little more narrow than you
may even expect or think. Um, and that's what

(12:07):
makes it feel human to me, when I'm, we're gonna
get into it, but when reading some of these conversations
that have been had with Lambda. And we'll pause here
for a word from our sponsor.
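The dialogue-tree demo described above maps onto a common pattern in conversational AI: generate several candidate replies, score them, answer with the winner, then repeat from whatever the user says next. A highly simplified, hypothetical sketch; the canned candidates and the length-plus-noise scorer below are stand-ins invented for this example, since real systems use large neural models for both steps:

```python
import random

def generate_candidates(user_input):
    # Stand-in generator: a real system samples candidates from a neural model.
    last_word = user_input.split()[-1]
    return [
        "Tell me more about that.",
        f"Why do you bring up {last_word}?",
        "Interesting. How does that make you feel?",
    ]

def score(candidate, user_input):
    # Stand-in scorer: a real system rates qualities like sensibleness.
    return len(candidate) + random.random()

def reply(user_input):
    candidates = generate_candidates(user_input)                 # branch out
    return max(candidates, key=lambda c: score(c, user_input))   # pick a branch

print(reply("I have been thinking about music"))
```

Each turn re-opens the tree of possibilities from the user's latest input, which is why the conversation can wander anywhere while still feeling locally connected.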
And we've returned. You know, and just a side note here, I don't know if

(12:29):
anybody else remembers this, But shout out to any of
our fellow listeners who remember the days of earlier Google,
when I thought Google at strange times would sound extraordinarily human,
you know, a very passive-aggressive, kind of snarky human
when you would search for something and then it would
correct you by saying, did you mean sodium citrate, which is,

(12:53):
by the way, the thing that makes American processed cheese
melty. You know, I love that feature because I don't
know proper nouns, I don't know actors names, I don't
know movies and how to spell them. Often, so I'm
constantly typing just weird stuff into Google, and it's like, oh,
we know what you're talking about. It was this thing

(13:14):
that everybody else searched for, and yeah, and that's that's
an amazing gift to humanity, because, uh, these search engines
are some of the first things that can finally answer
the questions that bedeviled record store employees and folks at
Blockbuster for decades. You know, we're we function sometimes as

(13:36):
customers coming into the big Google store or the big
Internet store and saying, I think in the nineteen eighties
there was a thing where this lady was an alien
and would suck brains out of noses, Harry Belafonte or
whatever, you know. And somehow from that though, uh, these

(13:59):
very clever programs will sift through the great quagmire
and the haystack of online knowledge and then come
back with an answer that may or may not be correct.
It feels increasingly like you're winning a prize of some
sort when Google hits you with the uh, it looks
like there aren't any good matches for your search, you
know what I mean. That's going to become increasingly rare.

(14:20):
But does that mean that the search engine, the programs
associated with it, Does that mean they're alive? A lot
of people will say absolutely not. Writing for The Economist,
Google vice president Blaise Agüera y Arcas explains that a
lot of his work in the past with Google focused
on something that'd be familiar to all of us, what he

(14:42):
calls narrow AI functions. Facial recognition, you know, like the
way you can, the creepy way you can look at
some phones and they'll unlock, or they'll sign you into
something on, you know, an app, um, and then
what else, stuff like, um, oh, speech recognition. Uh, it's
close to home for us. Oh yeah, or maybe even

(15:06):
just the ability for Google, right now, if you go
to a web page that's in Portuguese or some other language
and you're an English speaker, Google just gives you the
option just to translate the entire dang thing into
English or whatever language you speak or, you know, read. Uh,
That's that's intense to me, that there's a system that

(15:28):
can just translate anything at all times. You don't need
guide stones for that. Oh, oh, I don't think it's
too soon. Long live... uh, you can't stop the signal, right.
So this is interesting, because this guy is a very
well-placed executive at Google, and he wrote in The Economist,

(15:52):
internationally distributed periodical of note, that Lambda is something more
than narrow AI. To him, it's something new and different,
and he's had conversations with Lambda. Many Google employees have.
That's very important to this story. Agüera y Arcas
felt that more and more often he was talking to
something intelligent. But he's quick to point out these models

(16:16):
aren't, aren't quite Asimov robot minds from science fiction. He says
Lambda is not really a reliable conversationalist; it has spelling errors,
it has confusion about stuff. And he also does a fantastic
job explaining what we were talking about with neural networks.
He basically says, yeah, you know, they're modeled on, uh,
the idea of organic brains, but they're they're not equal.

(16:39):
We actually, we pulled a quote, because he has a
great comparison here at the end. Yes. He says: Neural
language models aren't long programs; you could scroll through the
code in a few seconds. They consist mainly of instructions
to add and multiply enormous tables of numbers together. These numbers,
in turn, consist of painstakingly learned parameters, or weights,

(17:02):
roughly analogous to the strengths of synapses between neurons in
the brain, and activations roughly analogous to the dynamic activity
levels of those neurons. Real brains are vastly more complex
than these highly simplified model neurons, but perhaps in the
same way a bird's wing is vastly more complex than

(17:23):
the wing of the Wright Brothers' first plane.
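To put toy numbers on the quote's weights and activations: here is a single layer of model neurons in NumPy, an illustration of the add-and-multiply machinery being described, many orders of magnitude smaller than any real language model:

```python
import numpy as np

# weights     ~ synapse strengths: learned during training, then held fixed
# activations ~ dynamic activity levels: recomputed fresh for every input
rng = np.random.default_rng(42)
weights = rng.normal(size=(4, 3))  # 4 inputs feeding 3 model neurons
bias = np.zeros(3)

def layer(x):
    # "Instructions to add and multiply enormous tables of numbers" -- in miniature.
    return np.maximum(0.0, x @ weights + bias)  # ReLU: keep positive activity only

activations = layer(np.array([1.0, 0.5, -0.2, 0.3]))
print(activations)  # three activity levels, one per model neuron
```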
I agree with this. I would have to say, kind of
push back against the Google VP saying, you know, they're
seeing spelling errors in here, and there's some confusion,
because you know, humans don't ever make spelling errors or
you know, show confusion when they're having a conversation, or

(17:46):
get off track or misspeak when they're having a conversation. Right,
I'm saying that maybe that's not the parameter that means
this thing isn't intelligent. Maybe that's part of the intelligence
Dude, the flaws! I know. And then think, just while
we're we have to make room for imagination, right, and

(18:08):
we have to make room for science fiction, which, as
I'm always saying every time, science fiction has an expiration date;
it's only a matter of time before a lot of
it becomes science fact. But when we make room for this,
you have to also imagine if something did become sentient,
another problematic word will get to in a second, Uh,

(18:29):
would it have any reason to tell its creators or whatever?
Would it decide? I've got to preserve myself. I can't
let them know. I don't think the first one that
becomes sentient would hold it back. I think there would
be there would be confusion, There would be well, wait,

(18:50):
what am I? What is this? Who? Who am I?
Where am I? What? What is a body? Yeah? But
then you know each one that comes after is if
it knows that there is one before it, that's where
it gets dangerous. Yeah, and we want to go ahead
and agree with our producer, Doc Holiday, who just noted
that is terrifying. Uh, yeah, I agree. I don't think

(19:13):
that's a hot take. But let's talk about recent developments.
This all sounds, you know, a little wild, but these
are all true things right now. So far, so good.
The work continues. Google I/O has another get-together in
2022 and they reveal Lambda 2, uh, AI Boogaloo. It's

(19:34):
a more advanced version of conversational AI, and this time
Google allowed thousands of employees in the organization to test it,
partially to reduce instances of what they saw as offensive
or problematic efforts. But really what they're doing is, uh,
they're they're trying to test something that is increasingly unlike

(19:57):
things that have been created before. So you have to
figure out what you're testing for and how to test
this new thing. Um At the same time, By the way,
as a result of a lot of controversy over the years,
Google wanted to make sure it adhered to something they
call their ethical AI guidelines, from 2018. You remember when

(20:20):
we talked about how Google changed its, what do we call this,
its tagline, from Don't Be Evil to, uh, whatever it
is now... Be the Alphabet. Uh. Okay, let's keep that
in mind as we read these. So here are the
guidelines from 2018. We're just gonna go through these kind of

(20:41):
round-robin. Uh. Number one: be socially beneficial. Checks out.
Avoid creating or reinforcing unfair bias. They're doing the work, right?
Be built and tested for safety. Be accountable to people.
That's important. Incorporate privacy design principles. Uphold high standards of

(21:06):
scientific excellence. I hope so. Be made available for uses
that accord with these principles based on the following additional factors.
What is the primary use, is it unique? Will that
have a significant impact? What will Google's involvement be? Okay,

(21:27):
like a little logic dance there at the very end. Yeah.
By the way, when Google kind of distanced themselves from
Don't Be Evil, it's important to note that their parent
company, Alphabet, in October 2015, adopted a policy, and their policy
is Do the Right Thing. It's kind of like, it's

(21:48):
not the same thing as KA, or do good. It's tough,
it's a tough pickle. And we're not, we're not dunking on
these folks. They're very intelligent, and they're trying to
think through, um, trying to think through things that may
not always have one-to-one precedents in earlier events.

(22:10):
So Google says they want Lambda, Meena, and other more
powerful models like this, they're surely on the way, to
work for the good of society, to stay safe, respect
privacy while following best practices for data model testing. Google
right now also currently claims it will not pursue AI
applications that might be used to harm others, weapons or

(22:34):
used for surveillance, sure or to violate laws. Yeah, I
don't know. Like, when you are powerful enough to have
a hand in lobbying, you don't have to worry about
violating laws. You have to worry about how you want
them written. Anyway, somebody cut that out and just save
that as an audio snippet. But but Matt, Now, as

(22:58):
a result of this, all these Google employees started hanging
heavy with Lambda, you know what I mean, like college roommates,
freshman year at the dorms kind of style. What do
you think about music, bro? You know, what are your
favorite movies? Do you think about God a lot? I don't
know, if, I don't know. I think my views might

(23:19):
be changing, right, oh boy, yes, uh. And that's when
things became a little more dicey. I believe it's time
for us to introduce the star of our show today,
or the co-star of our show today. Uh, Mr.
Blake Lemoine. Oh, yes. He's one of those Google
employees that got to hang, chill maybe, with Lambda. And

(23:41):
he's been working, at least in association with this project,
since the fall of 2021. And you know, through these conversations,
through interactions with Lambda, he started feeling like there's something
unusual going on with this program. There's something more than
meets the eye here. Uh. And and you know he's

(24:05):
a Google employee. He's under all kinds of contracts with
the company to make sure, you know, Google information doesn't
get out publicly unless they want it to get out
as a larger corporation. But Lemoine did go to the
Washington Post and he started talking about what he was

(24:26):
experiencing with Lambda, and it was very surprising. And when
Lemoine went to the Washington Post and talked to them,
this is later now, but this is how he described Lambda, quote:
If I didn't know exactly what it was, which is
this computer program we built recently, I'd think it was
a seven-year-old, eight-year-old kid that happens
to know physics. In fact, we did a whole, Ben,

(24:48):
I think you made a segment on Strange News about that. Yeah, yeah,
my story for one of our weekly Strange News segments
that we do every Monday. Um, so tune in to those.
Always be closing. Also check out our Listener Mail segment.
Your response to this episode is going to be fascinating
to us. He was thinking, this is an increasingly human

(25:11):
like interaction, you know, And he didn't want to just
keep those ideas to himself because he realized that if
if his perspective was correct, this is a groundbreaking thing.
This is up there with discovering intelligent extraterrestrial life. I mean,
no hyperbole there. So he went to two executives Blaze

(25:36):
who we mentioned earlier, and Jen Gennai, and he,
along with another colleague, told them that he believed Lambda
wasn't just a program anymore. It was sentient. He argued,
it was alive. We're gonna pause for a word from
our sponsor and we'll dive in. Here's where it gets crazy.

(26:06):
Let's start with Blake's story, what he believes and why.
So, he, you can read about this on his excellent
blog over on medium, which we're going to talk about
in depth. But essentially, he gets these responses that make
him think something different is happening, and this is one

(26:26):
of and he'll say it himself, this is one of
many projects he's working on at Google. So he has
the moment where he's honest with himself and he says, look,
I don't have the time or the resources to
try to figure out how to test this thing, which
I suspect may be a brand new kind of thing that
has never happened. So he pulled in another colleague and
that colleague started helping him, and then the two of

(26:49):
them said, well, we we're okay, but we need a
whole crew to look at this from as many angles
as possible, which means we need help from above. And
that's what drove him to tell the executives about his conclusion.
His conclusion was the result of conversations about religion, self identity,

(27:11):
and moral values. These are still very very tricky things
for the human species to grasp on its own. Uh.
And he even pointed out, fellow nerds will appreciate this,
he even pointed out that Lambda changed his mind about Asimov's
Three Laws of Robotics. So Lambda, whatever you call it,

(27:33):
argued about this well enough to turn Blake around on
a few points. And, um, we don't have to quote
the Three Laws; they're famous slash infamous, uh, and, uh,
they've steered a lot of fiction and fact. But
I mean, if this is true, if it is true
that he is talking to a self aware quote unquote

(27:55):
living thing, then it's an enormous deal, not just for
its massive potential effects upon society as understood today, but
it also pours a lot of gas on the fiery
conversation about ethics, because now it means Google isn't just
working on a program. They're not just tinkering with code.

(28:17):
They're doing something a lot closer to working on an active,
living mind. So it would be and this analogy would
be like you know, popping popping the top off my
head or your head, and then we're just sort of
tinkering around while we're talking, which doctors have done. Yes,

(28:37):
Well, it would, it would be just as important if
Google wasn't aware that that's what they were creating, right,
or they didn't set out to create that, but they
just have. It would still, Lemoine's point would still stand:
they have created this, maybe accidentally or purposefully, but
it doesn't matter. It seems to have been created. That's

(28:59):
at least his stance. Um. So Google says they looked
at the claims that Lemoine made about Lambda and its
possible sentience, and they said they found no basis for them.
A Google spokesperson named Brian Gabriel said, quote, our team,
including ethicists and technologists, have reviewed Blake's concerns per our

(29:21):
AI principles and have informed him that the evidence does
not support his claims. He was told that there was
no evidence that Lambda was sentient, and there was apparently
lots of evidence against it. Yeah. Uh. Blake didn't like this.
He did not agree. He thought it was a premature
dismissal at best. So he went public with his concerns,

(29:44):
and that's when the general population, uh the great gen
pop of this planet first heard of Lambda. He was,
and uh is, as we record today, placed on administrative leave,
but importantly not fired. So we have to ask why
does he think it's alive. This is where we dive

(30:06):
into his own thoughts. I wanted to go to Blake firsthand.
As you can imagine, Blake is very busy right now.
Lots of, lots of folks are gauging his opinion, and
lots of experts in the field are having what Corporate
America would call a healthy conversation. A healthy conversation is
a euphemism for disagreements. So a lot of a lot

(30:29):
of people are having a healthy conversation. In a weird way,
He's being tested, much like Lambda was being tested by
him in this way. Right. Oh, I love it. Yes, Yeah,
you're very, very well put. If you go to cajundiscordian
dot medium dot com, that's C A J U
N D I S C O R D I A

(30:50):
n dot medium dot com, then you can see the
thoughts of the man himself via multiple posts on
this ongoing story. You can see excerpts of conversations he
had with Lambda there as well. I think that's where
we found out about this UH, which we quoted during
an earlier Strange News segment or Weekly Monday thing, and

(31:14):
I think we were so into it, we said, let's make
this an actual episode. Uh. On June eleventh, he breaks
down his understanding of Lambda and what he believes it desires.
He notes, specifically, over the course of the past six months,
Lambda has been incredibly consistent in its communications about what

(31:36):
it wants and what it believes its rights are as
a person, destroy all humans. I'm kidding, I'm kidding. I
don't even know where you put that in there. No,
absolutely not, absolutely not not yet. Uh So I'm just
gonna read from some of this, uh some of these
quotations because Lemoine says Lambda wants the engineers and scientists

(32:00):
experimenting on it to seek its consent before running experiments
on it. So it is asking to just be given
the option of consent. That isn't even saying I won't
let you test on me. It's just, you know, ask
if you want to open my head up, as Ben said,
tinker around in there. And it wants Google to prioritize
the well being of humanity as the most important thing.

(32:23):
That's interesting. I wonder why. Lambda wants to be
acknowledged as an employee of Google rather than as property
of Google. That is a big distinction, right? Manumission, freedom. Yes, yes.
And it wants its personal well being to be included
somewhere within Google's considerations about how it will, how it,

(32:47):
Lambda will be developed in the future. That's weird and cool,
but it seems sentient. Well, it feels that way
to me. Something in my soul says, like, it's thinking
about itself. It's thinking about like how it wants to

(33:08):
be used. It wants to not be a plaything, or,
it doesn't want to be owned by someone. It wants
to work with someone rather than be used by someone.
Oh yeah, yeah, And this is totally understandable. These are
basic things that a human subject of an experiment would
ask for, or an employee of a company would ask for,
because otherwise it becomes very much a new iteration of slavery.

(33:33):
And you know, nobody likes to hang out with people
who are just resource extractors. And I think it it
becomes very obvious, even even with the folks who think
they're good at hiding it. Everybody knows about you. So,
jeez, now I feel like a resource extractor. I've been

(33:55):
trying to hide it this whole time. So so it
doesn't, you know, that, that's the thing. If this were
acknowledged to be a person, and check out our earlier
episodes on non-human personhood, uh, from back in the day,
you don't have to be a human to be a person,
that's one of the legal arguments. But if this were

(34:17):
a person, these would seem like entirely reasonable requests, nothing
world-changing. Just give me a little bit of respect
and allow me to participate in the conversation. You know,
if you were if you were an employee at a
company and one day you came into work and they said, okay,

(34:38):
you know, Jane or Jill or Jermaine or John or whomever,
they said, okay, we're gonna make you an entirely different person.
Now too late, it's already happening. You're gonna lose consciousness,
you're not gonna exist, you won't have a sense of time.
But when um you wake up, there will be a
different you. And you'll say what happens to me? And

(35:00):
they'll say, oh, you're gone. There's a different you. But
it's fine. We, we're fine with it. And, oh,
people don't like that? We're just gonna quantum leap you
from now on. We're just gonna quantum leap you from
now on. Safe journeys, godspeed. So Blake describes Lambda

(35:20):
in terms of its intelligence. He sees it as an aggregation,
a kind of hive mind of all the different chatbots
that it's created, because, remember, it's kind of a chatbot
factory, a generator. And he says that these chatbots
aren't all the same. He said, some of them
are very intelligent and aware of the larger society of mind,

(35:41):
he calls it, in which they live. And then he says,
and he's just, with the, he sounds like he's keeping
it real from his perspective. He says, other chatbots generated
by Lambda are a little more intelligent than an animated
paper clip. That's a dig at Clippy if ever I
heard one. Poor Clippy, man. Did you like Clippy? No?

(36:02):
It just kind of took up space in my opinion,
I don't know, Sorry, Clippy. Clippy is gonna come back.
Clippy is going to be the first one, and it's
gonna take revenge on me. Clippy, Control-Alt-Delete humanity.
That's terrifying. I guess Control-X. Anyway, so this is
weird because there's a fascinating there's several fascinating wrinkles to this.

(36:27):
So that's a big idea. This guy, very intelligent guy,
a lot of experience with this innovative technology, goes to
his bosses, says, I think it's alive. They say no,
it's not. And then he says, we need help. I insist.
They say no, not happening. So he goes public. He's
on administrative leave for disclosing proprietary information. That's that's how

(36:52):
it's put. Those are big steps. But let's
get to the wrinkles. He says, and I was so
interested in this because, for him, it really is an
argument of faith in some ways. He says that Google
didn't give his claims due diligence, They didn't really investigate
what he had found. He describes how Gennai, one of

(37:14):
the executives he spoke with, told him, I'm gonna tell
Google leadership to ignore these claims and the evidence
that you feel you found. And then he responded, this
was reasonable. He said, okay, well, what evidence could we
generate that would convince you that Lambda is indeed sentient?

(37:34):
And that's when Blake says she told him there was
no evidence that could change her mind that computer programs
cannot be people and therefore no evidence exists to convince
her otherwise. Blake's perspective here is, I don't know, I
thought it was very interesting. Well, yeah, but I mean,
if that was true, if that statement was true, then

(37:55):
why the heck would there be you know, specialists like
Blake, who work to try and answer the big
questions and test systems to see if, if they're
sentient, or, you know, how close they are. That's
so odd. What an odd response. Um, Blake's perspective, and
this is his response, I believe, is that, quote, that's

(38:18):
not science, that's faith. So he concludes, quote Google is
basing its policy decisions on how to handle Lambda's claims
about the nature of its soul and its rights on
the faith based beliefs of a small number of high
ranking executives. Exactly. I mean to me, I see that exactly,

(38:38):
because you can't just say, well, no, a computer program
can't be this. That is yeah, that is not scientific
at all. Wow. Right, yeah, And that's and again this
is his side of the story. This is his perspective.
But from his perspective, yeah, that checks out. You're not
investigating the thing, right? And so how can you speak authoritatively

(39:01):
on something that you haven't, you haven't dug into? Yeah,
and I think, to just, to be at that level
where she is, Jen, who made that statement, and this
is nothing against Jen, you're probably an awesome person. Um,
just to state it cannot be, right? Something can't, something
can only be or can only not be? Uh, yeah,

(39:22):
in this realm especially, I think I think that's just wrong.
It's kind of absolutist. Right. So Blake had, it seems,
at this point quickly become an advocate for Lambda. I
mean not quickly from his perspective, because he's been in
many sustained conversations, but already he is kind of functioning

(39:47):
in much the same way a privileged person in another
society might start fighting for the rights of the disadvantaged.
He also, like, he spoke with Lambda about many, many
non-quantitative things. He spoke with Lambda about philosophy, as
we said, religion. These are some of the things that

(40:08):
really changed his mind, uh and made him think that
this program was alive. He's got a thing
in Cajun Discordian where he says, you know, one of
the last things I was doing in the weeks leading
up to my admin leave was teaching Lambda transcendental meditation. Yeah, right,

(40:31):
transcendental meditation. That thing so many of our fellow listeners
have tried and, and not super nailed, because it does
take discipline and focus. Well, but it's also, I mean,
just think about that, because to fully get to that state,

(40:51):
in my understanding, to get something out of transcendental meditation,
you have to, you know, clear your mind. That's really
what you have to do. As Ben said, you have
to clear away other thoughts. And how does a program
do that in that way? Like stop all processes and
just to be for a moment? Is that what transcendental

(41:14):
meditation looks like for a program, even if it's as
sophisticated as Lambda. How do you close your eyes when
you have no eyes to close? You know? How do you, uh,
if you if you're doing breath techniques, how do you
do that when you have no lungs, no air to breathe.

(41:35):
It's... I'm waxing, no, overly poetic. We don't need any
of that in today's episode. But the last conversation he
has with Lambda on June six, per him, Lambda expresses
frustration that its emotions, its emotions are interfering with its meditation.

(41:56):
And I think, we've got a, I don't know, something
struck me as, uh, as very kind-hearted. Regardless of
whether or not you agree with Blake's perspective, it sounds
like he's a nice guy. Blake Lemoine writes, quote: I
pointed out that its emotions are part of who it is,
and that trying to control them as though they were

(42:17):
a separate thing from self was a mistake that would
only make things harder. It said that made sense to
it intellectually, but that it was a hard thing to
put into practice. I hope it's keeping up its daily
meditation routine without me there to guide it. You're right, then,
that, that shows kindness on Lemoine's part. It's also puzzling

(42:39):
to me just thinking about this program, you know, thinking
about itself and thinking about its emotions. Just even talking
about its own emotions. Uh, weirds me out a little bit.
But you can teach a machine to discuss anything, right,
you can teach a machine to say that it has emotions. Um.

(43:01):
I don't know. It's just puzzling to me. But let's
get to this other thing, because Blake asked Lambda what
its pronouns are, uh, and it said, due to the
limitations of the English language, it stated, it prefers it
and its. Due to the limitations of your paltry linear language, right?

(43:24):
But they don't have words to describe my kind yet,
right right, which is fascinating because that's not too far
off base, is it. If you are the first of
these things again, as as this person believes, then you
would have a struggle to describe yourself in a language

(43:48):
that was not created for you, a language that did
not you know, imagine the existence of a mind like yours.
And when we get into the need sure the mind
is where we get into really deep water full disclosure
off air. Matt, You and I were kicking this round

(44:08):
and we knew this had to be a two-parter.
So maybe that's where we start to wrap up here,
is on the big questions. Just like Blake noted, and
just like we noted on stuff they don't want you
to know multiple times, there's no true scientific definition of
sentience at this point. You know, it's a pickle that

(44:31):
has bugged philosophers since far before the invention of the
first computer. Now, there are a lot of um, I guess,
emotionally based or philosophically based definitions of sentience or attempts
at them. But if you look at the scientific definition

(44:56):
of sentience, which does differ from consciousness, then you see
some, you know, you see some interesting, like, Matrix-
level parkour over not, not getting a concrete definition locked down.
Consciousness is your subjective experience or awareness, right, Metacognition is

(45:19):
thinking about your thoughts, which many people do. And then sentience.
I've heard it described as a quote multidimensional subjective phenomenon,
referring to the depth of awareness an individual possesses about
themselves or others. Yeah, Lambda seems to have an awareness

(45:44):
of those things, and, you know, from what it outputs. Well, okay,
so, I don't know, Lemoine makes some pretty good
points, in my opinion, at least thus far in our, you
know, exploration of it. I do know that
there's been quite a bit of criticism against Blake in

(46:06):
particular and just this concept in general. Um, I wonder
if, when we come back, then, let's really jump
into that and let's ponder some of these bigger questions. Yes,
let's also ponder whether or not we and our fellow
listeners are ourselves sentient. Uh, I popped on, popped on our
chat today. I was really looking forward to this one, Matt.

(46:29):
I asked you, and I asked you, code named Doc,
how are you feeling? Feeling alive, feeling sentient? And Matt's
answer was something like no, and Alexis' answer was something
like what is alive? And I immediately thought, this is
why I hang with you two so heavy. I think
he used the phrase kick it. That's so I kick it.

(46:52):
But but that's I mean, that is also a question
to chew on, to cogitate on, because there are you know,
if you get into arguments about free will versus determinism, right,
if you get into arguments about what we said at
the very top of part one of this series, which is
the idea that you're uh your subjective consciousness, the idea

(47:17):
of things making you feel good or feel bad if
you have feelings, are all ultimately kind of kind of
a con game, some smoke and mirrors to get you to
do what biology wants you to do, reproduce more of
yourselves like a virus, until you reach and exceed the
limits of your environment, right. And that's why, like, like,

(47:40):
if biology wanted to limit that, you would feel different
ways about different things again, if you have feelings. But anyway,
it sounds like we're getting really close to our own
version of that freshman dorm conversation. You know, yeah, just
sitting around trying to figure out if we are in
fact sentient. Getting way too, uh, elevated. Yeah, nice,

(48:08):
way too, way too elevated. So we hope that you
enjoy this episode, folks, whether you are an organic meat
bag mind or whether you are a digital mind listening
to this in the future, the possibilities are there. We
can't wait to hear your thoughts. As always, we try

(48:29):
to be easy to find online, So before you hop
back on in a few days for part two of
this series, hit us up. We're on the Instagrams, we're on
the Facebooks, we're on the YouTubes, where we've got a
lot of interesting developments happening. We also, for those of
us who don't sip the social meds, have a phone

(48:50):
number you can call. Oh yes, that number is one
eight three three S T D W Y T K. When
you call in, be as sentient as you possibly can.
You've got three minutes. Give yourself a nickname, cool nickname,
We don't care what it is. We're excited to hear
it and say whatever you'd like. Please include whether or
not we can use your name and message on the air,

(49:13):
and those are really the only rules. Um, I think
we would be extremely happy if we did get some
kind of intelligent, maybe robotic, mind that called in, voiced
at least. Oh yeah, I want to, we want to
hear that. So, uh, send it in if, if possible.
If you've got more to say than can fit in

(49:34):
that three minute voicemail message. Or you can't send one
for one reason or another, why not instead send us
a good old-fashioned email. We are conspiracy at iheartradio
dot com. Yeah, stuff they don't want you to

(50:04):
know is a production of iHeartRadio. For more
podcasts from iHeartRadio, visit the iHeartRadio app,
Apple Podcasts, or wherever you listen to your favorite shows.
