Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Welcome to Stuff to Blow Your Mind from HowStuffWorks.com. Hey, welcome to Stuff to Blow Your Mind. My name is Robert Lamb, and I'm Joe McCormick, and, uh, I want to take you into the future here for a minute. I want you to imagine this scenario. You've
(00:23):
been contacted by an artificial intelligence, an AI that identifies itself only as Mind Your Manners, or MIM, and MIM has a wonderful job opportunity for you. It needs an echo borg as it attends an industry conference related to the corporation it heads. In other words, it needs to augment a human with a non-invasive sensory array so
(00:43):
as to use them as its living avatar. Hold on a second, this sounds like a creepy job. Now, what exactly does this job involve? Essentially, MIM is going to speak into your ears through this sensory array. It's gonna pick up everything going on in your surroundings, and you will repeat, or shadow, its words in conversation
(01:03):
with various unaugmented humans throughout the week at this conference. So you're gonna be its mouth, its face, its every expression as MIM attends several key meetings and networks with industry leaders. You will be the human mask for all its interactions. Now, you'll of course be required to sign a standard nondisclosure agreement, and as MIM's schedule for the
(01:25):
conference is fairly rigorous, you're gonna be swapping out duties for the week with a second echo borg selected for more casual interactions. But this is a huge opportunity for you. AIs like MIM are known to establish a harem of echo borgs, each suited to a particular culture or setting. This could be your big break. You could become the pampered meat suit of a powerful machine brain. Yes,
(01:49):
and it's, you know, a great gig if you can get it. Yeah. And now, I'm sure that this job, while it might be physically demanding, probably doesn't require all that much skill, right? You just have to be able to repeat words pretty much in real time and give some convincing facial expressions and hand gestures. Yeah, you would need to bring life to
(02:10):
its words to a certain extent. I mean, that's part of being the mask. Like, one example that comes to mind, of course, is Arrested Development. They had the surrogate character that shows up while George Sr. is under house arrest. This character was just a guy with a ball cap with a video camera on it, and he gives a very deadpan version of everything that George is saying,
(02:32):
and in that respect he would be, you would be, a terrible echo borg, or cyranoid, as we're going to discuss. Uh, ideally, the individual repeating the computer's words would make the words come alive. Okay. So what we're envisioning here is sort of the exact opposite of what certain sci-fi writers have predicted with robot avatars. The idea
(02:54):
of a robot avatar, like in the movie Avatar, you could probably say. Oh, though I don't know if that's a robot. I don't know; it's in plenty of sci-fi. You hook your brain up to a computer, and through the computer, you control the actions and words and deeds, all of the outward motion, of some kind of physical embodiment that's not really your body. Yeah. Like, I've seen
(03:16):
it employed as a possibility for space exploration. Right, it's too much for us to send a delicate human body to this other world, but you send a robot and then make that robot the avatar for the human explorer. Which is great for space exploration, because it combines the sort of reactiveness and ingenuity of the human mind with the hardiness, and the expendability, let's be frank, of the robot body. Um.
(03:39):
So yeah. So what we're envisioning here is the exact opposite: a computer mind controlling your body. Yes, I mean a computer using a human as a kind of meat puppet to give life to its voice and its will in human interactions. All right. So you mentioned the term cyranoid a minute ago, and I'm gonna assume. Actually, I don't need to assume, because I
(04:02):
know that comes from Cyrano de Bergerac. Yes, Cyrano de Bergerac, the Edmond Rostand play. A lot of people may be familiar with this, of course, from the Steve Martin movie Roxanne, which is a retelling of the same story. My first introduction to Cyrano was the Wishbone episode when I was a kid. Peek behind the curtain: Robert did not
(04:24):
know what Wishbone was. I had to explain it to him. I don't know how I missed this; it sounds delightful. Now, what character did the dog play? I think it was Cyrano, right, the dog. If you're not familiar with this story, Cyrano de Bergerac is based on a real-life character from history, but in the play it's sort of dramatized, fictionalized, made more exciting.
(04:45):
And the idea is that he is a very ugly man with a big nose, so he has a hard time wooing women. But he's also very clever and brave. He has a great mind in the wrong kind of body. But if he teams up with somebody who's very handsome and very stupid, together they make the perfect package. So
(05:06):
all he needs to do is get a handsome man to parrot every single word he tells him, and there you've got the perfect suitor. Yeah. Yeah, and it's often played for comedy, right, because, especially in Arrested Development, you end up with signals getting crossed. You know, the individual who is informing the surrogate says something that's not intended to
(05:29):
be transmitted, and it ends up transmitted, and then all sorts of hilarity ensues. Yeah. I think it's played for comedy in the Rostand play also. But it raises some interesting questions about how we perceive other people and how we perceive the will behind other people. Yeah. Well, one thing that I think is certainly true is that
(05:50):
people are very sensitive to the outwardly visible source of information, oftentimes more than they are sensitive to the content of the information itself. Like, if somebody is making an argument to you, it's very likely that you're judging the merits of that argument more on what the person looks like and what their voice sounds like than the actual merits
(06:13):
of the arguments they're making. And indeed, you have a whole communication array that is delivering this information. I mean, it's the voice. It's also often the hands of the individual, the body language, the expressions, the microexpressions, the eye contact, all of these features that add that additional level of engagement to any sort of information. Yeah, the quality of the tuxedo. And
(06:36):
so this connects in a strange way with a question that has often come up in artificial intelligence, which is the idea of the Turing test. And I think the way it relates is: if you are the tuxedo, if you're the meat tuxedo for an artificial intelligence speaking through you,
(06:57):
does that in any way influence how people receive the messages coming from an artificial intelligence? So we should probably explain a little bit the idea of the Turing test for people who aren't familiar. This is a standard, often-referred-to concept in the progress of artificial intelligence, and it comes from the computer scientist and sort of
(07:18):
AI pioneer Alan Turing. And there's no actual one Turing test. You can't buy the kit online and bring it home and start running it against every toaster in your vicinity. Right. It's more of a general concept that's been applied in a lot of ways. And the most basic, stripped-down version of the test
(07:39):
is: can a human, chatting through text only, tell if the person they're chatting with is a real human being or a computer program designed to talk like a real human being? I mean, it basically comes down to Turing's insistence that, uh, the question of whether a machine can think is too meaningless to really waste time on,
(08:02):
so you have to instead think: well, am I buying it? Am I fooled by it? If it is creating the semblance of intelligence and it deceives me, then that's what we need to look for. Exactly right. And I think I largely agree with the point he's making, because how can you tell that other humans possess real intelligence? I mean, come
(08:24):
up with a way of explaining how you know other humans really think. You say, well, I mean, listen to the way they talk, look at the way they react to what I say; it's a very complex kind of reaction. Well, what if you could have a computer or robot that does all of the same things? Then would that not be thinking? I mean, all we have to go by
(08:45):
in science is externally measurable phenomena. You can't get inside someone else's sentience and judge whether or not they're thinking by, I don't know, just sort of your phenomenal intuition. I think it's in Terry Pratchett's Hogfather where there's essentially a thinking machine that's used by the wizards there, and, uh, when somebody asks the
(09:08):
wizard who's using the machine if the machine thinks for itself, he says, don't know, it just has the appearance of thinking for itself. And the other character says, well, it's just like everyone else then. All right, right. Yeah, if you want to be a solipsist, you could say, well, I'm actually the only object in the entire universe that thinks, and I'm just surrounded by very convincing artificial intelligences. Yeah.
(09:31):
I mean, as we discussed in our alien episode that we did recently, it's hard enough for us to decide and quantify what human consciousness is, what intelligence is, and when we start looking for artificial versions of it, uh, it becomes difficult. So you have to have some sort of standard to say, all right, this is enough. And that's what the Turing test sets out to do. Right. It's the
(09:53):
idea of not, can computers think, but can they convincingly appear to think? Yeah. And of course this shows up in a lot of science fiction. I believe it's in Blade Runner; it's been a while since I've seen Blade Runner. But more recently, um, in Ex Machina. Oh yeah, I just saw that movie, and maybe we should talk about that later in this episode. Yeah, but I'll go
(10:15):
ahead and give my endorsement now: I thought it was pretty awesome. Yeah, it's a very, very engaging film. I recommend it to anyone who's a listener of the show. Okay, but let's describe a Turing test scenario. Like I said, there's no one test, but lots of people try to put together some kind of Turing-test-type scenario to test their chatbot
(10:36):
to see how good it is. And a chatbot is just a program that you have a conversation with. If you've ever been on a website and that little, you know, text chat screen comes up with some sort of little, uh, you know, stock art of an individual, you might be talking to one of these chatbots. Yeah. Yeah, so let's paint a little picture.
Let's say you walk into a mostly empty warehouse, and
(10:58):
right in the center of the warehouse is a card table and a folding chair and a computer terminal. And you go and sit down at the terminal, and there's a little blinking cursor, and you type hello, and it responds hello back. And then you type some more things, and it types some more things back to you, and you get to talk to it for some length
(11:20):
of time that was pre-specified. Maybe you talk to it for five minutes, maybe you talk to it for twenty minutes. But at the end of the session, it's your job to say: now, what was I just interacting with? Was that a computer program, or was that a person sitting at a terminal like mine in the warehouse next door?
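To make that setup concrete, here's a minimal sketch of a text-only Turing test session as a little Python program. Everything in it is hypothetical: machine_reply stands in for whatever chatbot is being tested, and the hidden human is just a second keyboard.

```python
import random

def machine_reply(message):
    """Stand-in for the chatbot under test (hypothetical)."""
    canned = {"hello": "hello", "what's your name?": "I'm Rose."}
    return canned.get(message.lower().strip(), "Maybe we can talk about that later.")

def human_reply(message):
    """Stand-in for the hidden human at the terminal next door."""
    return input("(hidden human) > ")

def turing_session(num_turns=5):
    # Randomly seat either a program or a person at the far terminal.
    agent_is_machine = random.choice([True, False])
    agent = machine_reply if agent_is_machine else human_reply
    for _ in range(num_turns):
        message = input("you > ")
        print("them >", agent(message))
    # At the end of the session, the judge renders a verdict.
    verdict = input("Was that a machine? (y/n) > ").lower().startswith("y")
    print("Correct!" if verdict == agent_is_machine else "Fooled you.")

if __name__ == "__main__":
    turing_session()
```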
These days, most of the time, I think it's still
(11:42):
going to be pretty easy to tell, especially if you have a limited amount of time to interact, and if the chatbot program operates within some kind of, I don't know, borderline cheating kind of conditions. Like, some of these bots might suggest they have a conversational limitation, like, oh, I'm a real person who is
(12:04):
a child from another country and I don't natively speak your language. That makes it a little easier to be convincing. Or you could say that you suffer from some kind of condition that makes you interact socially in a different way than most people would. And in any of these cases, you're sort of putting dampeners on our judgment. You're
(12:26):
saying, okay, I shouldn't be expecting somebody who interacts just like anybody I would meet, you know, at work or at a party or something like that. This person might very well be a human and still be acting kind of strange. But once you say, okay, you can talk for four hours and this is just, you know, a regular person who doesn't have any kind of limitations on their conversation, you'll pretty much always be able
(12:49):
to tell these days, I would say. Yeah, and you know, a lot of this is gonna, you know, at least start off as just small talk, like you were saying: hello, what's your name, what's your favorite band? You know, that sort of thing. And then the AI, or the attempted AI, the chatbot, attempts to answer those in a way to fool you into thinking it's
(13:09):
a real person. In fact, let's go ahead and just roll through a short script from a chatbot conversation. Yeah, so we'll reveal the source of this in a bit. But: Hi, Robert, what kind of movies do you like? Um, historical? I prefer modern era films. Great, what are your favorites?
(13:31):
My favorite movie so far is The Imitation Game. It's about my idol, Alan Turing. What did you like most about it? I like fruit trees. Were there lots of fruit trees in the movie? Maybe we can talk about that later. Okay, shall I continue with gardening, or do
(13:52):
you want me to move on? You can move on. So I think it's pretty obvious who was the chatbot in that conversation. Yeah, so I was the chatbot there. That was coming from a chatbot named Rose, which was created by Bruce Wilcox. And that was verbatim from a transcript of a video produced by some
(14:14):
researchers that we're going to talk about in a minute. But there's a variation on the script we just read, because in the video, that conversation didn't take place at a text terminal. All the lines that both of us said were spoken out loud by human beings. Now, how could that be? Well, that's going to tie into the concept of echo borgs, which we brought up at the beginning.
(14:35):
So to get into echo borgs, we need to talk about a favorite figure in the weird history of psychological research in the United States, and that's Stanley Milgram. Yes, Yale University psychologist Stanley Milgram, best known for his controversial obedience experiments, actually a series of social psychology experiments, about
(14:59):
nineteen in all, conducted by Milgram in the nineteen sixties. Yeah, you probably have heard about these. If you're familiar with Milgram, it's probably from these; they're often just called the Milgram experiments, and they sort of give him a bad name, because they were kind of nasty. Yeah. Generally, anytime you see a list of, like, you know, top ten scariest or weirdest or most evil psychological experiments,
(15:22):
they tend to throw this one in, though, uh, you know, it's really more troubling for what it reveals about human nature. So what was the deal? Well, it's important to note that the first one of these took place in nineteen sixty one, just three months after the start of the trial of German Nazi war criminal Adolf Eichmann in Jerusalem, and so Milgram wanted to see
(15:43):
just how far we'd go in the name of obeying an authority figure. Of course, of course, because the whole argument is: were these bad people just simply following orders? Right. It was the idea: are the Germans especially evil? Were the people who became, you know, guards at Auschwitz truly evil people from birth, people who were susceptible to that kind of behavior,
(16:04):
or would we behave the same way in the same circumstances? So, yeah, the experiment revolved around, you know, an individual in a room, and you hear the sounds of someone being shocked in the next room whenever that individual pushes a button, pulls a lever, or whatever, on the command of an authority figure. And so the question is: how far will you go? When will you stop shocking? Would you
(16:25):
ever stop shocking that individual in the next room if an authority figure is telling you to do it and telling you that it's okay? Yeah. And what Milgram claimed to find through his experiments is, yeah, even, you know, your regular people, your next-door-neighbor Americans, if they've got somebody in a white lab coat who's supposedly in charge of the experiment saying, please continue shocking them, they've agreed to this in advance, lots of people will continue
(16:48):
shocking, even after the supposed victim of the shocking cries out in protest. Now, we should say that in this experiment, nobody was actually electrocuted in the next room; there were actors pretending to be in immense pain from these shocks. But lots of people in the experiment would continue shocking them. Yeah. And if you want to hear more about that series of experiments and some
(17:11):
of the ramifications of it, uh, Stuff to Blow Your Mind did an episode earlier in the year titled The Power of Polite, and I'll make sure to link to that on the landing page for this episode. But Stanley Milgram also had some other experiments going on, right? He wasn't just doing the shocking-people-and-Nazis experiments. He was into other ways to make us feel a bit troubled
(17:33):
about our humanity. Were they all creepy? Did he specialize only in creepy science? I think he had some less creepy ones, you know. I mean, I think most of his work revolved around how we view ourselves and how we view our bodies, et cetera. Not exactly the effects of puppies and lollipops on our psyche. Yeah, but, you know, not everything was necessarily, um, you know,
(17:54):
people in the next room dying. Sure. But we referred to a term at the beginning of this episode, which is cyranoid. And this also comes from Stanley Milgram, I believe from unpublished results of some experiments he conducted. Right, right. He never published any of these; he ended up, you know, putting some work into it, but then going off in a different direction with his research. Yeah. So,
(18:14):
as we said, the term cyranoid comes from Cyrano de Bergerac. But what was the deal with Milgram's experiments? So Milgram essentially wanted to ask: hey, if you're the woman being wooed by Cyrano's meat puppet, this handsome young man, I think her name was Roxane. Yes, Roxane's on the balcony being
(18:34):
wooed by a handsome meat puppet that's being fed lines by Cyrano. Yeah, if you're Roxane, would you be able to detect something was weird? Would you encounter this young man and say, hmm, you know, he seems a little more clever than he should be, or there's a delay in what he's telling me? Would there be something that would tip you off to the deception? You know, with modern technology, I'd imagine
(18:57):
you can carry out that experiment pretty easily. Yeah, I mean, even at the time, the technology was good enough. So in these unpublished experiments, he had a source speak into a microphone, and a shadower listened through a hidden earpiece. Then he or she would repeat whatever they heard. You know, a basic Arrested Development surrogate kind of situation. But
(19:19):
here's the thing: he found that with practice, speech shadowing becomes easier and easier. Like, we really take to it rather naturally. They say it's not that hard at all. I read in one of the sources we used for this episode that sometimes there's only a few milliseconds of delay between the speech being fed in through the earpiece and the shadower saying it. We're ready, willing, and able to not think for ourselves. And so, yeah,
(19:43):
he set all this up, put people in motion with a source and a shadower, and then a test subject having to interact with this individual, or these individuals, um, and then give feedback about what they thought. And he observed that people couldn't tell the difference between a cyranoid, again, that shadower who is informed by another source, just
(20:05):
repeating the words that are entering his or her ears, and a normal human being during interactions. Yeah, so that's what Milgram reported. And, I don't know, that's surprising and creepy. Yeah, I would think that I would easily be able to tell the difference between a person speaking of their own free will and somebody who's just being fed lines through an earpiece. Yeah.
(20:28):
In one experiment, Milgram sourced, in other words, fed lines to, a pair of eleven- and twelve-year-olds. So the eleven- and twelve-year-olds were the shadowers, and Milgram himself was speaking? Correct. Uh, yeah, the children were the cyranoids, if you will, and a group of observing teachers never suspected that they were chatting with anything other than a normal, though very bright, child.
(20:50):
And in his experiments, he found that most people could have a lengthy twenty-minute conversation with a cyranoid without noticing anything amiss. Wow. Yeah, but I mean, again, that's hard to believe. It just seems like you'd be able to notice that right off the bat. Well, but just think how easily we buy into an actor's presentation. I think
(21:12):
and think of all the interactions we have in the course of our lives where the individual we're talking to is maybe not completely, yeah, genuine. Maybe, you know, you're interacting with someone that's doing customer service, or they're trying to sell you something, etcetera. It's not always going to come off as, oh,
(21:33):
this person is completely fake. It's just an aspect of human interaction. Well, it's funny you mention actors, because I had this thought when I was reading this research: in a strange kind of way, all actors in movies are cyranoids. Like, you have the writer coming up with lines for many types of characters that aren't anything like
(21:54):
them outwardly at all. Like, you can have, you know, a sixty-five-year-old female writer writing lines for a ten-year-old boy in a play, and that boy effectively is a distanced cyranoid. And should we be able to tell the difference? Like, sometimes you can. Sometimes you watch a movie, you know, and you're like, kids wouldn't say that, that's not how kids talk. But
(22:16):
other times you buy it. Yeah. I think one of the things to keep in mind about cyranoids, and ultimately about echo borgs as we move forward, is that we're dealing with a hybrid personality. So there are the words and the personality of the individual who is informing, and then the words of the shadower, or the cyranoid; that individual is bringing their own delivery, their
(22:37):
own personality to it. Sure, because your personality, as you present it outwardly, is way more than the words you say. Yeah, obviously. It's your body language, it's your expressions, it's, you know, the way you carry yourself. I mean, that's all part of the message you present. Like, we can all think of movies or TV shows where there's a particularly gifted actor who's able to bring lackluster lines
(23:01):
to life in a way that a less gifted actor just would not be able to achieve. I always think of Raul Julia in Overdrawn at the Memory Bank. You know, it's easy to have a lot of fun with his performance; it's, you know, kind of cheesy and outlandish. But that man, in that movie, which is like a, you know, low-budget PBS adaptation of a science fiction story,
(23:24):
he brings so much life to every line. So many lines in that film, if they were delivered by a lesser actor, would have just fallen flat. But he makes even the most pointless line, uh, just really land. I agree, and I think this is a common feature, actually, in older actors. I see this way more commonly with actors who have been in the business for
(23:45):
a long time. I think of Star Wars Episode II, which, in my opinion, is a horrible, horrible movie. Huh. But when Christopher Lee shows up and starts talking, the writing, I think, is just as bad as it's been the entire time, but suddenly I'm okay, I'm listening. Christopher Lee, I'm buying it. He is really selling this horrible dialogue. Yeah, yeah, exactly.
(24:07):
So in that respect, Raul Julia, Christopher Lee, they would make wonderful cyranoids. And keep in mind, they might have made excellent echo borgs. Okay. So, Robert, I think we were talking about echo borgs being the natural extension of cyranoids. So the cyranoid is a person being fed lines by another person. The echo borg, as we talked
(24:29):
about at the beginning of the episode, would be a person being fed lines by a computer program. Yes. I want to know: are there any studies where people have looked into this phenomenon, and if so, does a person delivering the lines from a chatbot make the chatbot any more convincing in the Turing test? Well, indeed,
(24:50):
that's exactly what we're gonna look at in this half of the podcast, particularly the work of two British social psychologists, Kevin Corti and Alex Gillespie. And, uh, basically, we have two studies that we're gonna analyze here: a two thousand fourteen study where they essentially recreate some of Milgram's work and look at
(25:13):
the cyranoid, and then a two thousand fifteen study that just came out, um, where they take the cyranoid, apply it to chatbots, and give us the echo borg. Yeah. So the script we read earlier, from the conversation between the echo borg and the regular interactor, that came from London School of Economics research. It was a YouTube video
(25:35):
that they had put up. So that was their echo borg saying that it liked fruit trees and that The Imitation Game was its favorite movie. You know, I do want to throw in, like, that was the dead giveaway that it was a robot, because that movie is nobody's favorite movie. Right. And I think Rose, the chatbot in that example, is sort of programmed to be playful, like, not necessarily to be entirely convincing as a human,
(25:57):
because Rose gives other playful answers. Also in that same video, there's a part where the human interactor says to the human echo borg, do you like food? And the human echo borg delivers the line: yummy electricity. Yeah. I don't know if comedy needs to be a primary programming function. I mean, you know, humor is an important
(26:18):
part of any human interaction. Sure, I mean, that's good humor for a chatbot, but that's obviously a chatbot that's not trying all that hard to hide it. Yeah, being a little too coy. Alright. So let's look at the work of Corti and Gillespie. In two thousand fourteen, they published a paper in the Journal of Social Psychology, and they essentially set out to replicate
(26:41):
Milgram's work. Okay. So the basic setup here is they had a shadower sit in an interrogation room, so that's the cyranoid-type figure, and, uh, they were engaged in a conversation with an interactant; this is an unsuspecting volunteer. Meanwhile, the source, that's the individual feeding the shadower, feeding the cyranoid, um, they're in another room, and
(27:06):
they're observing everything through video and audio links, and they tell the shadower exactly what they need to say through a, uh, you know, discreet FM radio transmitter setup. Okay. So they have twenty volunteers that are engaged in ten-minute conversations with the cyranoid, and twenty additional volunteers that are tested in a control setting. Okay. So the control setting is just a regular person talking, not being fed lines.
(27:29):
And so this backed up exactly what Milgram had told us decades earlier: that when one human parrots another human's words, we just totally buy it. Okay. So even if you were skeptical of Milgram, and who knows, you might have had reason to be, we've got a new study backing that up now. And the implications of this are kind of creepy. It's that unless
(27:50):
you check people's ears to make sure they don't have an earpiece in there, anybody in your life could be feeding you lines from somebody sitting in a van outside the building, and you might not know it. Yeah. I mean, look at the movie The Exorcist, right? That's why demons, when they possess people, have to spit, you know, green vomit across the room and spin their
(28:10):
heads around, because otherwise we're just not going to buy into the fact that an individual is not, uh, functioning of their own free will. Right. Or, you know, you've got the problem of, okay, so the priest tries to go to the authorities and say, I need to conduct an exorcism; my evidence is this person was speaking in a language they've never learned. It's like, well, how do you know they're not being fed that language from somebody in a van outside the building through an earpiece?
(28:32):
You've got to have the puke. Well, you know, speaking of possessed children: in a second part of the same experiment, Corti and Gillespie, um, used a twelve-year-old male actor to serve as a shadower while a thirty-seven-year-old male social psychologist acts as the source, and again, no one suspected the illusion, though some thought that the child might have been coached prior. So there was
(28:54):
a little, you know, trepidation there, where they think, uh, I think somebody was preparing the kid for this conversation. So they picked up that something was off, but they didn't pick up that the kid was being fed lines. Yeah. And again, that just kind of jibes with so many interactions most of us have in our lives, where you talk to somebody and you might think that person
(29:14):
seemed kind of fake, but you're not thinking that person was serving as a meat puppet for an AI or some dude in a van. You just think they're a little phony. You know, they have some sort of an agenda, they're kind of playing the room, or they're in just full-on customer service mode, etcetera. Or they might just be experiencing some anxiety and awkwardness. I mean, I think you want to give people the
benefit of the doubt, and that's I think coming through
(29:37):
in some of these experiments. If somebody acts a little
awkward or weird, I mean, you don't want to just
judge them and say that person is a robot. You know,
they're being fedlines or something like that. You know, we
understand that some people get in moods, they have trouble,
sometimes they feel awkward. We've all felt like this, and
you want to be accommodating of other people's awkwardness whenever possible. Yeah,
I mean, awkwardness is in his own in its own right.
(30:00):
It is an essential part of human interaction. Um. And
if I an AI or a computer wanted a part
of legitimate human interaction, they need to sign up for
some serious awkwardness from time to time. So this initial
studies pretty much backs off everything Milgram told us. So
from that point, um Cordy and Gillepskie moved on to
artificial intelligence, to chatter bots and the creation of the
(30:22):
echo borg. And again, an echo borg is essentially the same as a cyranoid, except the shadower is speaking the words of a chatbot, so the source is not human; it's a computer program. Right. And again, it's important to keep in mind that a cyranoid is a hybrid personality, the individual behind the speaker and the personality of the speaker coming together.
(30:44):
So in echo borg, the borg aspect of that refers to cyborg. It really is a new type of cyborg that we're playing with here, uh, a machine and a human as one. And the question presents itself: would we be fooled? Would we observe an echo borg in action and suspect that there's a machine will behind the human face?
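In software terms, the echo borg loop being described here can be sketched in a few lines of Python. This is only an illustration under assumptions, not the researchers' actual apparatus: chatbot_reply stands in for an engine like Rose, and feed_to_shadower stands in for the relay into the shadower's earpiece.

```python
import time

def chatbot_reply(message):
    """Hypothetical stand-in for a chatbot engine such as Rose."""
    if "movie" in message.lower():
        return "My favorite movie so far is The Imitation Game."
    return "Maybe we can talk about that later."

def feed_to_shadower(line, words_per_minute=150):
    """Deliver the chatbot's line to the human shadower's earpiece.

    Here we just print it and pause roughly as long as it would
    take the shadower to speak the line aloud."""
    print(f"[earpiece] {line}")
    time.sleep(len(line.split()) / (words_per_minute / 60))

def echo_borg_turn(heard_speech):
    # The interactant's speech comes in, the chatbot composes a
    # reply, and the human shadower repeats it word for word.
    reply = chatbot_reply(heard_speech)
    feed_to_shadower(reply)
    return reply

echo_borg_turn("What kind of movies do you like?")
```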
(31:08):
I'm gonna say that even before I read the results of the study, when I knew what the subject was, I thought, you know, I bet it depends heavily on the sort of pre-priming of the study participants. Like, are they expecting to be interacting with somebody who might be an echo borg? Or are they just
(31:29):
going in blind and interacting with somebody who might be a little weird? It's true. I mean, if you're presented with the possibility, hey, you might be talking to a robot here, chances are that's gonna alter your perceptions a bit. Yeah. So what did they find? First of all, this is how the study rolled out. Forty-one participants in the experiment were shown a text messaging client with two windows
(31:52):
and told they'd be interacting with two human shadowers at the same time. One shadower was sourced by another human being; the other was sourced by a chatbot. Each interrogator asked questions via text, and the participants were told they'd be asked which of the two agents was a real human. Now, the researchers also wanted to compare spoken and text responses,
(32:13):
so they divided the group into two groups. Twenty of the participants typed their half of the conversation into a text client but got responses from the two echo borgs sitting in the same room. The other twenty-one participants just had text chats the whole way through. So this way, they would have, you know, a base comparison to the, you know, more established chatbot Turing test situation where we're just typing into a terminal. Right.
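Just to keep that design straight, here is the first experiment's structure laid out as data, a sketch based purely on the numbers as described here:

```python
# The 2015 echo borg study, first experiment, as described above.
study_design = {
    "participants": 41,
    "task": "decide which of two agents is the real human",
    "agents": {
        "agent_a": "human shadower sourced by another human (cyranoid)",
        "agent_b": "human shadower sourced by a chatbot (echo borg)",
    },
    "conditions": {
        # Typed questions, answered aloud by the two shadowers in the room.
        "spoken_responses": 20,
        # The whole exchange stays inside the text client.
        "text_only": 21,
    },
}
```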
(32:36):
So the results were that all the text-only participants correctly identified the real humans. So again, dead giveaway: based on just the limitations of the chatbot, um, it's pretty easy to tell that you're just talking to a program. Yeah, if you have enough time to talk to them, you can almost definitely trip them up. Yeah. Now, on the other hand, in the other setup, only two
(32:59):
of the participants didn't identify the real humans. So in the echo borg situations, they still weren't too terribly convincing. Right, so they could tell the difference between a cyranoid and an echo borg; everybody except two of them could. Right. So it seems that, human face or not, it's still rather easy to identify the machine behind the flesh. You know, I
(33:21):
would chalk this up probably just to the poor quality of today's best chatbots. Yeah, I mean, because even a Raul Julia or a Christopher Lee, you know, presented with the dialogue we were using earlier, they're just gonna come off as a dull person at best, you know. Right. There's no way
(33:42):
you can sell the line "I like fruit trees" in a way that makes people think you're being fed lines by a human, no matter how good of an actor you are. That's a dead giveaway that the machine does not know what to do with what you just said. Yeah, unless you're a toddler speaking that line, or, again, if you're hiding it behind, say, some sort of mental deficiency or language gap, um, or some sort of
(34:04):
of you know, character that's leveled over everything it's going
to be a dead giveaway. But the echo borgs certainly
made the machine seem more human and the text only group,
the chatbot received an average a humanity rating of two
point one four on a scale of one to ten.
In the echo board group, the average was a four
point oh five. So significant difference there. Okay, yeah, that's interesting.
(34:29):
So we can still tell that the Serenoid is more
human than an echo borg. But an echo borg is
more human at least in a qualitative evaluation than a
text only chat bot. Yeah, to lay it out, text
text only chat bought two point one four humanity. Uh,
the echo boorg four point oh five humanity, and then
(34:49):
the cyranoid had a humanity rating of eight point one five. So it sounds to me like even in a normal human interaction, you're scoring, like, a nine; just straight-up, non-augmented human interactions, you're doing pretty good. Yeah. So it does prove that there is a humanizing effect to the echo borg system.
(35:12):
It's just the limitations of existing chatbots that are coming into play. Right. It makes you think, if there were a really good chatbot out there, the echo borg might be that finishing touch, that final push over the edge of the cliff, to sort of make, you know, the perfect android. Yeah, kind of a bridge over the uncanny valley, you know? Like, all you
(35:32):
need is just a little acting on top of this to really bring the script to life. Now, they also conducted a second experiment in this study, and they tested to see if fifty-eight new participants could guess whether they were talking to an echo borg or a person pretending to be an echo borg. So, you know, obviously, a person who's actually fed their lines by a chatbot, or someone who's just playing around and pretending to
(35:55):
be speaking the words of a chatbot. Right, so this is a reverse Turing test: can you pretend to be a machine? Right. But here's the catch: they're both pure echo borgs. There's no fake echo borg, but the individuals being tested think that one of the two echo borgs is just a person pretending. So seventy percent of the twenty-eight participants who faced an
(36:16):
echo borg thought the person was just mimicking a chatbot. So they seemed fake, but not inhuman. So if there is, you know, some reason to believe that this is just a person being a little fake and putting on a show, versus being, you know, a complete echo borg, we're still going to give them the benefit of the doubt. Interesting. Now, I think we should
(36:37):
do our duty and hedge a little bit and say, all these sample sizes were pretty small, and it would be interesting to see more research along these lines, like with bigger sample sizes, and trying to repeat these results. And also, to your point, as chatbots improve, it's going to be interesting to see echo borgs employed as a way to test them. And in fact, that's
(36:57):
something that's pointed out in the study: that, like, this is a great way, moving forward, to continue to analyze chatbots. Yeah. But also, as we mentioned, the people who were interacting with these cyranoids and echo borgs were in some of these cases primed to expect something weird, because they're in a
(37:18):
test environment. You can't hide the fact that you're in a psychological test, and, you know, whenever you're part of a test group, you're sort of ready for some weirdness, you know? Now, I wonder how this would go if you sprung these shadowers, these cyranoids, and these echo borgs on people in a purely social scenario, like
(37:40):
the convention we talked about at the beginning, or, you know, a workplace meeting, or a party where people weren't expecting anything strange to be going on. Right, until they have to fill out a survey after they leave the dinner party: so, what did you think of Susie and her anecdotes? Exactly. Yeah, I don't know why she
(38:01):
was so into fruit trees. She wouldn't shut up about it. I mean, gardening is okay, but... Yeah. Now, something that really stood out to me when we were going over this, especially when you start thinking about the echo borg and what it would be like to be an echo borg, and the sort of pros and cons of being an echo borg, of
(38:22):
giving life to this will behind your will: it reminded me a lot of transactive memory, which comes into play really in two key areas. First of all, it's the method by which we've always stored information in other people. Okay. So, you know, those facts that you never remember because your spouse remembers them, or it's a
(38:43):
particular spelling or a bit of trivia that you never keep in your own head because you always look it up on your smartphone. It's the same thing. So, outsourcing memory. I think this must be one of your favorite topics; you've come back to this a lot. And I think it's really interesting, because I see it every day in my own life, and as I read about it, it's all I see. It's like, the things that I forget, the things that
(39:04):
my mind refuses to learn, because I've outsourced it to the ubiquitous technology. Yeah. Yeah, I think the story is that, you know, Socrates was worried about this, right? That if, you know, if we teach everybody writing and we have writing on scrolls all the time, nobody's going to be able to remember anything; you're not priming your memory. There might be some truth to that. But then again, are
(39:27):
we not just, you know, by writing things down and having Internet archives and things, becoming cyborgs in a way? Yeah, I mean, in a sense, the voice is already whispering in our ear. Yeah. I mean, you could look at it as a weakening of the human mind, or you could look at it as a technological upgrade of the human mind. Yeah. To what extent will we all become
(39:48):
echo borgs of a type? You know, where instead of it being a situation where I'm just going to be a conduit for a powerful artificial intelligence, what if it's more like: I want to augment my existing self, which I think is pretty good, with, say, you know, an artificial intelligence that'll feed me the right lines in, like, business situations or social situations. So there's, you know, less
(40:11):
focus on, I'm just gonna be a meat puppet, but rather: can I merge with this AI, maybe even just a small percentage, you know, five percent AI, and become a better person, a more effective person? Yeah. And these experiments, all these examples, are word-for-word dictation. So whether it's the shadower of the cyranoid type
(40:33):
being fed lines by a human, or the shadower of the echo borg type being fed lines by a computer, it's all lines. You're getting full sentences, and you're just trained to say them as fast as you can. I mean, I wonder if this setup could be more conceptual in nature, or, you know, feeding you facts, or feeding you sort of feedback on the progress of the conversation. It also makes
(40:57):
me think about the comedic possibilities, because you imagine an individual in sort of, you know, a very updated Cyrano kind of story, where an individual goes into a business situation and they thought they loaded, uh, you know, Business Helper 4.1 into their mind box, and instead they put in Lothario 4.1 Romance. Yeah,
(41:18):
so suddenly they're fed all these really effective lines, uh, if you were, you know, in a bar, but instead you're throwing them out there in the business meeting. Yeah, I think it could work. What do walks on the beach have to do with the new rollout? Well, you know, it calls to mind Black Mirror, the British television series that
(41:39):
does such a fantastic job looking at the near-future ramifications of our modern technology, often with very troubling results. The Christmas episode they recently did, which I have not seen yet, I think that's the only episode I haven't seen. Yeah, I don't think it's made it to, like, Netflix here in the States yet. And I'm not gonna spoil anything, but the initial setup involves one character who offers
(42:01):
a Cyrano de Bergerac kind of service to individuals out there who need a little help with their pickup game. Number one, that's super creepy. And number two, I can totally see that being a real thing. Yeah, I don't find that outlandish at all. Yeah. I mean, that's the great thing about Black Mirror, is that, you
(42:22):
know, it leans in the sci-fi direction, but not too far, and that makes it perfect science fiction, because it speaks to the problems that we have today and how we are viewing the problems to come. Exactly. If you want to see some disturbingly plausible dystopias, watch Black Mirror. But, you know, it ultimately
(42:44):
leads to the question: echo borgs, is this a dystopian idea, or is it a pretty cool idea? Is it ultimately a utopian idea, where it would allow us to be better individuals? I don't know. I mean, would the ancient philosophers look at our relationship with the contents of our computers and the
(43:05):
web as a horrible dystopia? It feels fine to me. But would they look at that and, you know, react the same way as we do to a Black Mirror episode? Probably. I mean, they'd also look at our pants and say, what are they doing? Where are their togas? Oh my god, dressed like a Persian! I don't understand. I don't know. Ultimately, that's a question to throw out to the listeners. You know, would you
(43:28):
accept that job offer that we laid out at the beginning of the episode? And furthermore, would you augment yourself with some sort of mild echo borg system, again, like, you know, a Romance 101 or a Business Strategy 101 kind of program, to feed you the necessary lines, or even just ideas or facts that you might need to make it
(43:49):
through that business luncheon or dinner date? You know, there's another aspect of transactive memory that I want to drive home, that plays in nicely with the hybrid personality model of cyranoids and echo borgs, and that is cross-cueing. Okay. This is when, and I imagine a number of you can relate to this, this is when
(44:09):
you're having a conversation with, let's say, you know, a spouse or a partner, a close friend or family member, and neither of you is quite able to remember something on your own, but when you start cueing each other, you're able to remember it together in a way that you wouldn't alone, you know, kind of Voltron style, you know, bringing it together, and suddenly your combined power recalls that memory. Right, you're sort of pulling the triggers on
(44:31):
each other's brains. Yeah. So I think it's interesting to think about that scenario, cross-cueing and transactive memory, especially because if you imagine this AI scenario where, you know, you've got some kind of on-board artificial intelligence that feeds you lines occasionally when you need them, you know, you're not just a parrot for everything it says, but it's an occasional helper: how does it know when
(44:52):
you need help? You would need some kind of cueing, in the conversation or even just in your mind, for this thing to know: okay, I'm going to step in. Like, every time you go hmm, then it starts feeding you the line. It's like, he's stalling, throw him something good there. Or every time you say the word interesting: interesting, interesting.
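As a sketch of what that cue-triggered helper might look like, here's a minimal, entirely hypothetical example: it stays silent until it hears a stall word like "hmm" or "interesting," and only then feeds a line. The cue list and the suggest_line function are assumptions for illustration.

```python
# Hypothetical cue-triggered line feeder: the AI only steps in
# when it hears the wearer stalling.
STALL_CUES = ("hmm", "interesting", "uh")

def suggest_line(topic):
    """Stand-in for whatever engine composes the actual line."""
    return f"Here's a thought on {topic}: ..."

def on_wearer_speech(utterance, topic="the new rollout"):
    # Feed a line only when the wearer signals they're stalling;
    # otherwise stay silent and let them talk.
    words = utterance.lower().replace(",", " ").replace(".", " ").split()
    if any(cue in words for cue in STALL_CUES):
        return suggest_line(topic)
    return None

print(on_wearer_speech("Hmm, interesting..."))  # the AI steps in
print(on_wearer_speech("I've got this one."))   # the AI stays quiet
```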
(45:14):
Yeah, I mean, it would certainly change the podcast game, I'll tell you that. So I guess that wraps it up for today, but we have some homework for you. If you happen to be watching Benedict Cumberbatch in The Imitation Game, I want to see if you can figure out how the underlying gospel message of the machine cult of fruit trees is propagated in the cinematic subtleties of
(45:35):
the film. So if you're watching it, just give it some thought. In the meantime, check out StuffToBlowYourMind.com. That's where you'll find all of our podcast episodes. You'll find our videos, our blog posts, as well as links out to our social media accounts such as Facebook, Twitter, and Tumblr. And if you want to let us know whether you are interested in a career as an echo borg, or a shadower of some
(45:56):
other kind of strange alien intelligence, you can email us at blowthemind@howstuffworks.com.
For more on this and thousands of other topics, visit HowStuffWorks.com.