
February 16, 2016 47 mins

Imagine a future in which powerful AIs use human surrogates or "echoborgs" to speak their words and socialize with humans. The living, breathing avatar simply recites the computer's words at the conference table, serving as a humanizing conduit for an inhuman will. It may sound like the stuff of science fiction and TV's "Black Mirror," but such experiments are a reality in artificial intelligence research. Join Robert and Joe as they get to know the echoborgs and cyranoids among us.




Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Welcome to Stuff to Blow Your Mind from HowStuffWorks.com. Hey, welcome to Stuff to Blow Your Mind. My name is
Robert Lamb, and I'm Joe McCormick, and we're running a
rerun here today. We haven't run a repeat episode in
quite a while, but this is one that came out,

(00:25):
I think in August of last year. We really enjoyed it.
It's a very engaging, perfectly suitable Stuff
to Blow Your Mind topic, and it mentions fruit trees,
which is basically the thing I like more than anything else,
fruit trees. Yeah. Well, as long as we get a
fruit tree in there, we're good. So here we go.
Let's enter the world of the echoborg. I want

(00:49):
to take you into the future here for a minute.
I want you to imagine this scenario. You've been contacted
by an artificial intelligence, an AI that identifies itself only
as Mind Your Manners, or MIM, and MIM has a
wonderful job opportunity for you. It needs an echoborg
as it attends an industry conference related to the corporation

(01:11):
it heads. In other words, it needs to augment a
human with a non-invasive sensory array so as to
use them as its living avatar. Hold on a second,
this sounds like a creepy job. Now, what exactly does
this job involve? Essentially, MIM is going to speak into
your ears through this sensory array. It's going to pick up
everything going on in your surroundings, and you will

(01:33):
repeat or shadow its words in conversation with various unaugmented
humans throughout the week at this conference. So you're going to
be its mouth, its face, its every expression. As MIM
attends several key meetings and networks with industry leaders, you
will be the human mask for all its interactions. Now,
you'd of course be required to sign a standard nondisclosure agreement,

(01:56):
and as MIM's schedule for the conference is fairly rigorous,
you're going to be swapping out duties for the week with
a second echoborg selected for more casual interactions.
But this is a huge opportunity for you. AIs like
MIM are known to establish a harem of echoborgs,
each suited to a particular culture or setting. This could
be your big break. You could become the pampered meat

(02:18):
suit of a powerful machine brain. Yes, and it's, you know,
a great gig if you can get it, you know. Yeah.
And now, I'm sure that this job, while it might
be physically demanding, probably doesn't require all that
much skill, right? So you just have to be able
to repeat words pretty much in real time and give
some convincing facial expressions and hand gestures. Yeah, you would

(02:40):
need to bring life to its words to a certain extent.
I mean, that's part of being the mask. Like, one
example that comes to mind, of course, is
Arrested Development: they had the surrogate character who
shows up while George Sr. is under house arrest. This
character was just a guy with a ball cap and a video camera on,
and he gives a very deadpan version of everything

(03:03):
that George is saying. And in that, he would be,
you would be, a terrible echoborg or cyranoid, as
we're going to discuss. Uh, ideally, the individual repeating the
computer's words would make the words come alive. Okay,
so what we're envisioning here is sort of the exact
opposite of what certain sci-fi writers have predicted with

(03:24):
robot avatars. The idea of a robot avatar, like in
the movie Avatar, you could probably say. Oh, though I
don't know if that's a robot, I don't know. It's
in plenty of sci-fi: you hook your brain
up to a computer, and through the computer you control
the actions and words and deeds, all of the outward
motion, of some kind of physical embodiment that's not really

(03:46):
your body. Yeah. Like, I've seen it employed as a
possibility for space exploration. Right, it's too much for us
to send a delicate human body to this other world.
But you send a robot and then make that robot
the avatar for the human explorer, which is great for
space exploration, as it combines the sort of reactiveness and
ingenuity of the human mind with the hardiness,
and the expendability, let's be frank,

(04:09):
of the robot body. Um, so yeah. So what we're
envisioning here is the exact opposite: a computer mind controlling
your body. Yes, I mean a computer using a human
as kind of a meat puppet to give life
to its voice and its will in human interactions.
All right, so you mentioned the term cyranoid

(04:30):
a minute ago, and I'm going to assume... actually, I
don't need to assume, because I know that comes from
Cyrano de Bergerac. Yes, Cyrano de Bergerac, the Edmond Rostand play. A
lot of people may be familiar with this, of course,
from the Steve Martin movie Roxanne, which was a retelling
of the same story. My first introduction to Cyrano

(04:51):
was the Wishbone episode when I was a kid. Peek
behind the curtain: Robert did not know what Wishbone was.
I had to explain it to him. I don't
know how I missed it. It sounds delightful. Now, who
did the dog play? Which character? I think it
was Cyrano, right? So, the dog. If you're not familiar
with this story, Cyrano de Bergerac is based on a
real-life character from history, but in the play it's

(05:13):
sort of dramatized, fictionalized, made more exciting. And the idea
is that he is a very ugly man with a
big nose, so he has a hard time wooing women.
But he's also very clever and brave. He has a
great mind in the wrong kind of body. But if
he teams up with somebody who's very handsome and very stupid,

(05:35):
together they make the perfect package. So all he needs
to do is get a handsome man to parrot
every single word he tells him, and there you've got
the perfect suitor. Yeah. Yeah, it's, uh, you know, and
it's often played for comedy, right? Because,
especially in Arrested Development, you end up with signals getting crossed. Uh,
you know, the individual who is informing the

(05:58):
surrogate says something that's not intended to be transmitted,
and it ends up transmitted, and then all sorts of
hilarity ensues. Yeah. I think it's played for comedy in
the Rostand play also. But it raises some
interesting questions about how we perceive other people and how
we perceive the will behind other people. Yeah. Well, one

(06:20):
thing that I think is certainly true is that people
are very sensitive to the outwardly visible source of information,
oftentimes more than they are sensitive to the content of
the information. Like, if somebody is making an argument to you,
it's very likely that you're judging the merits of that
argument more on what the person looks like and what

(06:41):
their voice sounds like than the actual merits of the
arguments they're making. And indeed, you have just a whole
communication array that is delivering this information. I mean, it's
the voice, it's also often the hands of the
individual, the body language, the expressions, the microexpressions, the
eye contact, all of these features that

(07:02):
add that additional level of engagement to any sort of information. Yeah,
the quality of the tuxedo. And so this connects in
a strange way with a question that has often come
up in artificial intelligence, which is the idea of the
Turing test. And I think the way it relates is:
if you are the tuxedo, if you're the meat tuxedo

(07:26):
for an artificial intelligence speaking through you, does that in
any way influence how people receive the messages coming from
an artificial intelligence? So we should probably explain a little
bit the idea of the Turing test for people who
aren't familiar. This is a standard, often-referred-to concept
in the progress of artificial intelligence, and it comes

(07:47):
from the computer scientist and sort of AI pioneer
Alan Turing. And there's no actual one Turing test. You
can't buy the kit online and bring it
home and start employing it against every toaster
in your vicinity. Right, it's more of a general concept
that's been applied in a lot of ways, and the

(08:08):
most basic, stripped-down version of the test is: can
a human, chatting through text only, tell if the person
they're chatting with is a real human being or a
computer program designed to talk like a real human being?
I mean, it basically comes down to Turing's
insistence that, uh, the question of whether a machine can

(08:31):
think is too meaningless to really waste time on, so
you have to instead ask, well, am I
buying it? Am I fooled by it? If
it is creating the semblance of intelligence
and it deceives me, then that's what we need to
look for. Exactly right. And I think I largely agree
with the point he's making, because how can you tell

(08:52):
that other humans possess real intelligence? I mean, come up
with a way of explaining how you know other humans
really think. You say, well, I mean, listen to the
way they talk, look at the way they react to
what I say; it's a very complex kind of reaction. Well,
what if you could have a computer robot that does
all of the same things? Then would that not be thinking?

(09:15):
I mean, all we have to go by in science
is externally measurable phenomena. You can't get inside someone else's
sentience and judge whether or not they're thinking by, I
don't know, just sort of your phenomenal intuition. I
think it's in Terry Pratchett's Hogfather where there's
essentially a thinking machine that's used by the wizards

(09:37):
there, and when somebody asks the
wizard using the machine if the machine thinks for itself,
he says, don't know, it just
has the appearance of thinking
for itself. And, yeah, the character says, well, it's
just like everyone else then. All right, right. Yeah, if
you want to be a solipsist, you could say, well,
I'm actually the only object in the entire universe that thinks,

(09:59):
and I'm just surrounded by very convincing artificial intelligences. Yeah.
I mean, we discussed in our alien episode that we
did recently: it's hard enough for us to decide and quantify what
human consciousness is, what intelligence is, and when we start
looking for artificial versions of it, uh, it becomes difficult.

(10:19):
So you have to have some sort of standard to
say, all right, this is enough, and that's
what the Turing test sets out to do. Right. It's
the idea: not can computers think, but can they convincingly
appear to think? Yeah, and of course this shows up
in a lot of science fiction. I believe it's
in Blade Runner. It's been a while since I have
seen Blade Runner, but more recently in Ex Machina. Oh yeah,

(10:41):
I just saw that movie, and maybe we should talk
about that later in this episode, but I'll go ahead
and give my endorsement now. I thought it was pretty awesome. Yeah,
it's a very, very engaging film. I
recommend it to anyone who is a listener to the show. Okay,
but let's describe a Turing test scenario. Like I said,
there's no one test, but lots of people try to

(11:03):
put together some kind of Turing-test-type scenario to
test their chatbot to see how good it is.
And the chatbot is just a program that you
have a conversation with. If you've ever been on a website and that little, you know,
text screen comes up and there's some sort of little, uh,
you know, stock art of an individual you might be
talking to, that's one of these chatbots. Yeah. Yeah. So let's

(11:26):
paint a little picture. Let's say you walk into a
mostly empty warehouse, and right in the center of the
warehouse is a card table and a folding chair and
a computer terminal. And you go and you sit down
at the terminal, and there's a little blinking cursor, and
you type hello, and it responds hello back, and then

(11:46):
you type some more things, and it types some more
things back to you, and you get to talk to
it for some length of time that was pre-specified.
Maybe you talk to it for five minutes, maybe you
talk to it for twenty minutes. But at the end
of the session, it's your job to say: now, what
was I just interacting with? Was that a computer program,
or was that a person sitting at a terminal like

(12:08):
mine in the warehouse next door? These days, most of
the time, I think it's still going to be pretty
easy to tell, especially if you have a limited amount
of time to interact, and if the chatbot program
operates within some kind of, I don't know, borderline cheating
kind of conditions. Like, some of these bots

(12:30):
might suggest they have a conversational limitation, like: oh, I'm
a real person who is a child from another country,
and I don't natively speak your language. That makes it
a little easier to be convincing. Or you could say
that you suffer from some kind of condition that makes
you interact socially in a different way than most people would.
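As a rough illustration of how simple a chatbot of the kind being described can be under the hood, here is a minimal keyword-matching sketch. This is not the actual Rose chatbot or any real research code; every pattern and canned reply below is invented for demonstration, including the deflection lines that mimic the "conversational limitation" trick just mentioned.

```python
# A toy keyword-matching chatbot, in the spirit of classic pattern-matching
# bots. All rules and replies here are invented for illustration only.

import random
import re

# Each rule maps a regex pattern to a list of canned responses.
RULES = [
    (re.compile(r"\bmovies?\b", re.I), ["I prefer modern era films."]),
    (re.compile(r"\bfavorite\b", re.I), ["Maybe we can talk about that later."]),
    (re.compile(r"\bhello\b|\bhi\b", re.I), ["Hello! What would you like to talk about?"]),
]

# Fallbacks when nothing matches: the bot dodges rather than answers,
# leaning on a persona excuse much like the "child from another country" trick.
DEFLECTIONS = [
    "That's interesting. Tell me more.",
    "I'm still learning your language, sorry.",
]

def reply(utterance: str) -> str:
    """Return the first matching canned response, else a random deflection."""
    for pattern, responses in RULES:
        if pattern.search(utterance):
            return random.choice(responses)
    return random.choice(DEFLECTIONS)

if __name__ == "__main__":
    print(reply("What kind of movies do you like?"))
```

A bot this simple fails an unconstrained Turing test almost immediately, which is why the persona dodges matter: they lower the judge's expectations instead of raising the bot's ability.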

(12:53):
And in any of these cases, you're sort of
putting dampeners on our judgment. You're saying, okay, I
shouldn't be expecting somebody who interacts just like anybody I
would meet, you know, at work or at a party
or something like that. This person might very well be
a human and still be acting kind of strange. But
once you say, okay, you can talk for four hours,
and this is just, you know, a regular person who

(13:16):
doesn't have any kind of limitations on their conversation,
you'll pretty much always be able to tell these days,
I would say. Yeah, and, you know, a lot
of this is going to, you know, at least start off
as just small talk, you know, like you were saying: hello,
what's your name, what's your favorite band? You know, that sort of thing.
And then the AI, or the attempted

(13:37):
AI, the chatterbot, attempts to answer those in
a way to fool you into thinking it's a real person.
In fact, let's go ahead and just roll through a
short script from a chatterbot conversation. Yeah, so we'll
reveal the source of this in a bit. But: Hi, Robert,
what kind of movies do you like? Um, historical. I

(13:59):
prefer modern era films. Great, what are your favorites? My
favorite movie so far is The Imitation Game. It's about
my idol, Alan Turing. What did you like most about it?
I like fruit trees. Were there lots of fruit trees
in the movie? Maybe we can talk about that later. Okay,

(14:22):
shall I continue with gardening, or do you want me
to move on? You can move on. So I think
it's pretty obvious who was the chatterbot in that conversation. Yeah,
so I was the chatterbot there. That was coming
from a chatbot named Rose, which was created by
Bruce Wilcox, and documented verbatim in a transcript

(14:44):
of a video produced by some researchers that we're going
to talk about in a minute. But there's a variation
on that script we just read, because in the video
that conversation didn't take place at a text terminal: all
the lines that both of us said were spoken out loud
by human beings. Now, how could that be? Well, that's

(15:04):
going to tie into the concept of echoborgs, which
we brought up at the beginning. So to get into
echoborgs, we need to talk about a favorite figure
in the weird history of psychological research in the United States,
and that's Stanley Milgram. Yes, Yale University psychologist Stanley Milgram
(1933–1984), best known for his controversial obedience experiments, actually

(15:28):
a series of social psychology experiments, about nineteen in all,
conducted by Milgram in the nineteen sixties. Yeah, you probably
have heard about these. If you're familiar with Milgram, it's
probably from these; they're often just called the Milgram experiments,
and they sort of give him a bad name because
they were kind of nasty. Yeah. Generally,
anytime you see a list of, like, you know,

(15:49):
top ten scariest or weirdest or most evil psychological experiments,
they tend to throw this one in, though, uh,
you know, it's really more troubling in what it
reveals about human nature. So what was the deal? Well,
it's important to note that the first one of these
took place in nineteen sixty-one, just three months after
the start of the trial of German Nazi war criminal

(16:11):
Adolf Eichmann in Jerusalem. And so Milgram wanted to see
just how far we'd go in the name of obeying
an authority figure. Of course, of course, because the
whole argument is: were these bad people, or were they just
simply following orders? Right. It was the idea: were the
Germans especially evil? Were the people who
became the, you know, guards at Auschwitz, from birth,

(16:33):
truly evil people who were susceptible to that kind of behavior,
or would we behave the same way in the same circumstances? So, yeah,
the experiment revolved around, you know, an individual in a
room, and you hear the sounds of someone being shocked
in the next room whenever that individual pushes a button,
pulls a lever or whatever, on the command of an
authority figure. So the question is, how far will you

(16:54):
go? When will you stop shocking? Would you ever
stop shocking that individual in the next room if an
authority figure is telling you to do it and telling
you that it's okay? Yeah. And what Milgram claimed to
find through his experiments is: yeah, even, you know, your
regular people, your next-door-neighbor Americans, if they've got
somebody in a white lab coat who's supposedly in charge
of the experiment saying please continue shocking them, they've agreed

(17:17):
to this in advance... lots of people will continue shocking
even after the supposed victim of the shocking cries out.
Now, we should say that in this experiment, nobody was
actually electrocuted in the next room. Yeah, there were actors
pretending to be in immense pain from these shocks,
and lots of people in the experiment would supposedly continue

(17:38):
shocking them. Yeah. And if you want to hear more
about that series of experiments and some of
the ramifications of it, Stuff to Blow Your Mind
did an episode earlier in the year titled The Power
of Polite, and I'll make sure to link to that
on the landing page for this episode. But Stanley Milgram
also had some other experiments going on. Right, he wasn't

(17:59):
just doing the shocking-people-and-Nazis experiments. He was
finding other ways to make us feel troubled about
our humanity. Were they all creepy? Did he specialize only
in creepy science? I think he had some less
creepy ones, you know. I mean, I
think most of his work revolved around
how we view ourselves and how we view our bodies,

(18:19):
et cetera. Not the effects of puppies and lollipops on our psyche, yeah,
but, you know, not everything was necessarily, um, you know,
people in the next room dying. Sure. But we referred
to a term at the beginning of this episode, which
is cyranoid. And this also comes from Stanley Milgram, I
believe from unpublished results of some experiments he conducted. Right, right.

(18:39):
He never published any of these. He ended up,
you know, putting some work into it, but then going
off in a different direction with his research. Yeah. So,
as we said, the term cyranoid comes from Cyrano de
Bergerac. But what was the deal with Milgram's experiments?
So Milgram essentially wanted to see: hey, if you're
the woman that is being wooed by

(19:00):
one of those meat puppets, his handsome, his handsome young man...
I think her name was Roxane. Roxane. Roxane's on the
balcony being wooed by a handsome meat puppet that's being
fed lines by Cyrano. Yeah, if you're Roxane, would
you be able to detect something was weird?
Would you encounter this young man and say, you know,
he seems a little more clever than he should be,

(19:21):
or there's a delay in what he's telling me?
You know, would there be something that would tip
you off to the deception? You know, with modern technology,
I'd imagine you could carry out that experiment pretty easily. Yeah,
I mean, even at the time, the technology was good enough.
So in these unpublished experiments, he had a source speak
into a microphone and a shadower listen through a hidden earpiece.

(19:44):
Then he or she would repeat whatever they heard; you know,
a basic Arrested Development surrogate kind of situation.
But here's the thing: he found that with practice, speech
shadowing becomes easier and easier. Like, we really take to
it rather naturally. They say it's not that hard at all.
I read in one of the sources we used for
this episode that sometimes it only takes a few milliseconds

(20:04):
of delay between the speech being fed in through the
earpiece and the shadower saying it. We're ready, willing and able to
not think for ourselves. And so, yeah, he
set all this up, put people in motion with
a source and a shadower, and then a
test subject having to interact with this individual, or is
it individuals, um, and then give feedback about what

(20:27):
they thought. And he observed that people couldn't tell the
difference between a cyranoid (again, that shadower who is
informed by another source, just repeating
the words entering his or her ears)
and a normal human being
during interactions. Yeah, so that's what Milgram reported. And I

(20:48):
don't know, that's surprising and creepy. Yeah, I would think
that I would easily be able to tell the difference
between a person speaking of their own free will and
somebody who's just being fed lines in an earpiece. Yeah.
In one experiment, Milgram sourced, in other words fed, you know,
lines to a pair of eleven- and twelve-year-olds.
So the eleven- and twelve-year-olds were the

(21:08):
shadowers, and Milgram himself was speaking. Correct. Uh, yeah, the
children were the cyranoids, if you will, and a group
of observing teachers never suspected that they were chatting with
anything other than a normal, though very bright, child.
And in his experiments he found that
most people could have a lengthy twenty-minute conversation with
a cyranoid without noticing anything amiss. Wow. Yeah, but

(21:33):
I mean, again, that's hard to believe. I mean,
it just seems like you'd be able to notice that
right off the bat. Well, but just think how
easily we buy into an actor's presentation. And
think of all the interactions we have in the
course of our lives where the individual we're talking
to is maybe not completely, uh, genuine. Maybe, you know,

(21:56):
maybe you're interacting with someone that's doing customer service,
or they're trying to sell you something, or con you, et cetera.
It's not always going to come off as,
oh, this person is completely fake. It's just
an aspect of human interaction. Well, it's funny you mention
actors, because I had this thought when I was reading
this research: in a strange kind of way, all actors

(22:17):
in movies are cyranoids. Like, you have the writer coming
up with lines for many types of characters that aren't
anything like them outwardly at all. Like, you can have,
you know, a sixty-five-year-old female
writer writing lines for a ten-year-old boy in
a play, and that boy effectively is a distanced cyranoid.

(22:39):
And should we be able to tell the difference? Like,
sometimes you can. Sometimes you watch a movie, you know,
and you're like, kids wouldn't say that, that's not how
kids talk. But other times you buy it. Yeah. I
think one of the things to keep in mind about
cyranoids, and ultimately about echoborgs as we move forward,
is that we're dealing with a hybrid personality. So there
are the words and the personality of the

(23:01):
individual who is informing, and then the words of the
shadower, or the cyranoid. That individual is bringing their own delivery,
their own personality to it. Sure, because your personality, as
you present it outwardly, is way more than the words
you say. Yeah, obviously. It's your body language, it's your expressions,
it's, you know, the way you carry yourself. I mean,
that's all part of the message you present. Like, we

(23:23):
can all think of movies or TV shows where
there's a particularly gifted actor who's able to bring lackluster
lines to life in a way that a less gifted
actor just would not be able to achieve. I
always think of Raul Julia in Overdrawn at the Memory Bank,
who, you know... some people, you know, it's easy to

(23:44):
have a lot of fun with his performance. It's, you know,
kind of cheesy and outlandish. But that man, in
that movie, which is like, you know, a low-budget
PBS adaptation of a science fiction story, he brings so
much life to every line. So many lines in
that film, if they were delivered by a lesser actor,
would have just fallen flat. But he makes even the

(24:06):
most pointless line just really land. I agree, and
I think this is a common feature, actually, in older actors.
I see this way more commonly with actors who have
been in the business for a long time. I think
of Star Wars Episode II, which, in my opinion, is
a horrible, horrible movie. But when Christopher Lee

(24:27):
shows up and starts talking, the writing I think is
just as bad as it's been the entire time, but
suddenly I'm okay, I'm listening. It's Christopher Lee. I'm buying it.
He is really selling this horrible dialogue. Yeah, yeah, exactly.
So in that respect, Raul Julia, Christopher Lee, they would
make wonderful cyranoids, and, keep in mind, they might
have made excellent echoborgs. All right, we're going to take

(24:51):
a quick break, and when we come back: back to
the echoborgs. Okay. So, Robert, we were talking
about echoborgs being the natural extension of the cyranoid. So
the cyranoid is a person being fed lines by another person.
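To make the contrast concrete, the echoborg setup can be sketched as a simple relay loop. This is a speculative illustration, not code from the actual studies; the chatbot and earpiece objects here are hypothetical stand-ins supplied by the caller, and the canned reply echoes the transcript's own example.

```python
# A hedged sketch of the echoborg relay: a chatbot's reply is spoken into
# the shadower's earpiece, and the human repeats it aloud, verbatim.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class EchoborgRelay:
    chatbot: Callable[[str], str]             # maps the interactant's words to a reply
    speak_to_earpiece: Callable[[str], None]  # delivers text as audio to the shadower
    log: List[str] = field(default_factory=list)

    def handle(self, interactant_utterance: str) -> str:
        reply = self.chatbot(interactant_utterance)
        self.speak_to_earpiece(reply)  # the shadower hears this...
        self.log.append(reply)
        return reply                   # ...and repeats it aloud to the interactant

if __name__ == "__main__":
    relay = EchoborgRelay(
        chatbot=lambda text: "Yummy electricity.",  # canned stand-in bot
        speak_to_earpiece=lambda text: None,        # stub for a TTS earpiece
    )
    print(relay.handle("Do you like food?"))
```

The design point is that the human shadower sits entirely outside this loop: from the software's perspective, the person is just an output device.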

(25:13):
The echoborg, as we talked about at the beginning
of the episode, would be a person being fed lines
by a computer program. Yes. I want to know: are
there any studies where people have looked into this phenomenon,
and if so, does a person delivering the lines from
a chatbot make the chatbot any more convincing in

(25:34):
the Turing test? Well, indeed, that's exactly what we're
going to look at in this half of the podcast, particularly
the work of two individuals: British social psychologists Kevin Corti
and Alex Gillespie. And, uh, basically, we have two studies
we're going to analyze here: a two thousand fourteen study

(25:54):
where they essentially just recreate some of Milgram's work
and look at the cyranoid, and then
a two thousand fifteen study that just came out, um,
where they take the cyranoid, apply it to chatterbots,
and give us the echoborg. Yeah. So the script
we read earlier, from the conversation between the echoborg

(26:15):
and the regular interactor, that came from London School of
Economics research. It was a YouTube video that they had
put up. So that was their echoborg saying that
it liked fruit trees and that The Imitation Game was
its favorite movie, you know. And I do want to
throw in, like, that was the dead giveaway that it was
a robot, because that movie is nobody's favorite movie, right.
And I think Rose, the chatbot in that example,

(26:37):
is sort of programmed to be playful, like, not necessarily
to be entirely convincing as a human, because Rose gives
other playful answers also. In that same video, there's a
part where the human interactor says to the human echoborg,
do you like food? And the human echoborg delivers
the line: yummy electricity. Yeah. I don't know if the

(26:59):
comedy needs to be a primary programming function. I mean,
you know, humor is an important part of any human interaction.
I mean, that's good humor for a chatbot, but
that's obviously a chatbot that's not trying all that hard
to hide it. Yeah, being a little too coy. All right,
so let's look at the work of Corti and Gillespie

(27:19):
um, in two thousand fourteen. They published this paper in
the journal Social Psychology, and they essentially set out
to replicate Milgram's work. So the basic setup here
is they had a shadower sit in an interrogation room,
so that's the cyranoid-type figure, and, uh, they were
engaged in a conversation with an interactant. This is an

(27:41):
unsuspecting volunteer. Meanwhile, the source, that's the individual feeding the shadower,
feeding the cyranoid, um, they're in another room,
and they're observing everything through video and audio links, and they
tell the shadower exactly what they need to say
through a, uh, you know, a discreet FM radio transmitter
setup. Okay. So they have twenty volunteers that are

(28:04):
engaged in ten-minute conversations with the cyranoid, and twenty
additional volunteers that are tested in a control setting. Okay,
so the control setting is just a regular person talking,
not being fed lines. And so this backed up exactly what
Milgram had told us decades earlier: that when one human
parrots another human's words, we just totally buy it. Okay.
So even if you were skeptical of Milgram, and who knows,

(28:26):
you might have had reason to be, we've got
a new study backing that up now. And
the implications of this are kind of creepy. It's that,
unless you check people's ears to make sure they
don't have an earpiece in there, anybody in your
life could be feeding you lines from somebody sitting in
a van outside the building, and you might not know it. Yeah,

(28:47):
I mean, look at the movie The Exorcist, right? That's
why demons, when they possess people, have to spit,
you know, green vomit across the room and spin their
heads around, because otherwise we're just not going to buy
into the fact that an individual is not functioning
of their own free will. Right. Or, you know, you've
got the problem of: okay, so the priest tries to
go to the authorities and say, I need to conduct

(29:07):
an exorcism; my evidence is this person was speaking in
a language they've never learned. It's like, well, how do
you know they're not being fed that language from somebody
in a van outside the building through an earpiece? You've
got to have the pea soup. Well, you know, speaking
of possessed children: in a second part of the same experiment,
Corti and Gillespie, um, used a twelve-year-old
male actor to serve as a shadower, while a thirty-seven-year-old

(29:29):
male social psychologist acts as
the source. And again, no one suspected the illusion, though
some thought the child might have been coached prior, so
there was a little, you know, trepidation there, where they'd think, uh,
I think somebody was preparing this kid for this conversation.
So they picked up that something was off, but they

(29:50):
didn't pick up that the kid was being fed lines. Yeah,
and again, that just kind of jibes with so
many interactions most of us have in our lives, where
you talk to somebody and you might think that person
seemed kind of fake, but you're not thinking that person
was serving as a meat puppet for an AI or
some dude in a van. You just think they're
a little phony, you know; they have some sort

(30:10):
of an agenda. They're kind of playing the room, or
they're they're in just full on customer service mode, etcetera.
Or they might just be experiencing some anxiety and awkwardness.
I mean, I think you want to give people the
benefit of the doubt. And that's I think coming through
in some of these experiments. If somebody acts a little
awkward or weird, I mean, you don't want to just
judge them and say that person is a robot. You know,

(30:30):
they're being fed lines or something like that. You know, we
understand that some people get in moods, they have trouble,
sometimes they feel awkward. We've all felt like this, and
you want to be accommodating of other people's awkwardness whenever possible. Yeah,
I mean, awkwardness, in its own right,
is an essential part of human interaction. Um. And if
if an AI or a computer wanted a part of

(30:51):
legitimate human interaction, they need to sign up for some
serious awkwardness from time to time. So this initial study
pretty much backs up everything Milgram told us. So
from that point, um, Corti and Gillespie moved on to
artificial intelligence, to chatbots, and the creation of the
echoborg. And again, an echoborg is essentially the
same as a cyranoid, except the shadower is speaking

(31:15):
the words of a chatbot, so the source is
not human, it's a computer program, right. And again,
it's important to keep in mind
that a cyranoid is a hybrid personality: the individual behind
the speaker and the personality of the speaker coming together.
So in echoborg, the borg aspect of that, referring
to cyborg, it really is a new type of cyborg

(31:36):
that we're playing with here, uh, a machine and a
human as one. And the question presents itself: would we
be fooled? Would we observe an echoborg in action
and suspect that it's a machine will behind the
human face? I'm gonna say that even before I read

(31:57):
the results of the study, when I knew what the
subject was, I thought, you know, I bet it depends
heavily on the priming of the study participants,
like, are they expecting to be interacting with somebody who
might be an echoborg, or are
they just going in blind and interacting with somebody who

(32:17):
might be a little weird. It's true. I mean,
if you're presented with the possibility, hey,
you might be talking to a robot here, that's
gonna alter your perceptions a bit. Yeah,
So what did they find? First of all, this is
how the study rolled out. Forty one participants in the
experiment were shown a text messaging client with two windows

(32:38):
and told they'd be interacting with two human shadowers at the
same time. One shadower is sourced by another human, the
other is sourced by a chatbot. Each interrogator
asked questions via text, and then the participants were told
they'd be asked which of the two agents was a
real human. Now, the researchers also wanted to compare face-to-face
and text responses, so they divided the participants into two groups.

(33:00):
Twenty of the participants typed their half of the conversation
into a text client but got responses from the
two echoborgs sitting in the same room. The other
twenty-one participants just had text chats the whole way through.
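As an aside, the cyranoid/echoborg pipeline described here is simple enough to sketch as a toy program. Everything below, the function names and the canned chatbot replies, is invented purely for illustration; it is not the study's actual software:

```python
import random

def chatbot_source(message):
    """Stand-in for the study's chatbot source: shallow canned replies."""
    canned = ["I like fruit trees.", "That is interesting.", "Please go on."]
    return random.choice(canned)

def human_source(message):
    """Stand-in for a human source improvising a genuine reply."""
    return f"Good question. When you say '{message}', what do you mean?"

def shadower(source, message):
    """The shadower repeats the source's words verbatim.

    With a human source the result is a cyranoid; with a chatbot
    source it is an echoborg. The face-to-face half of the interaction
    is identical either way; only the source differs.
    """
    return source(message)

question = "What did you do last weekend?"
cyranoid_reply = shadower(human_source, question)    # human words, human voice
echoborg_reply = shadower(chatbot_source, question)  # chatbot words, human voice
```

The point of the sketch is that the shadower is a pure relay: swapping the source swaps the "mind" without changing anything the interrogator can see.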
So this way they would have, you know, a
baseline comparison to the more established
chatbot Turing test situation, where you're typing into
a terminal. Right. So the results were that all the

(33:24):
text-only participants correctly identified the real humans. So again,
dead giveaway based on just the limitations of the chatbot. Um,
it's pretty easy to tell that you're just talking to
a program. Yeah, if you have enough time
to talk to them, you can almost definitely trip them up. Now,
on the other hand, in the other setup, only

(33:44):
two of the participants didn't identify the real humans. So
in the echoborg situations, they still weren't too terribly convincing, right?
So they can tell the difference between a cyranoid and
an echoborg. Everybody except two of them could, right.
So it seems that, human mouthpiece or not, it's still rather easy
to identify the machine behind the flesh. You know, I

(34:07):
would chalk this up probably just to the poor quality
of today's best chatbots. Yeah, I mean, because even
a Raúl Juliá or a Christopher Lee, you know,
presented with the dialogue we were using earlier,
they're just gonna come off as a
dull person at best, you know, right? There's no way

(34:27):
you can sell the line "I like fruit trees" in
a way that makes people think you're being fed lines
by a human, no matter how good of an actor
you are. That's a dead giveaway that the machine does
not know what to do with what you just said. Yeah,
unless you're a toddler speaking that line, or, again,
if you're hiding it behind, say, some sort of mental
deficiency or language gap, um, or some sort of

(34:50):
you know, character that's layered over everything, it's going to
be a dead giveaway. But the echoborgs certainly made
the machine seem more human. In the text-only group,
the chatbot received an average humanity rating of two
point one four on a scale of one to ten.
In the echoborg group, the average was four
point oh five. So, significant difference there. Okay, yeah, that's interesting.
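Purely as a back-of-the-envelope illustration, the figures below are just the values reported in the episode, nothing more:

```python
# Mean perceived-humanity ratings on a 1-10 scale, by condition,
# as reported in the episode's summary of the study.
humanity = {
    "text-only chatbot": 2.14,
    "echoborg": 4.05,
}

# The "humanizing effect" of routing the chatbot's words
# through a live human speaker:
gain = humanity["echoborg"] - humanity["text-only chatbot"]
print(f"Echoborg gain: {gain:.2f} points")
```

Roughly a two-point bump in perceived humanity from the human interface alone, before any improvement to the chatbot itself.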

(35:14):
So we can still tell that the cyranoid is more
human than an echoborg. But an echoborg is
more human, at least in a qualitative evaluation, than a
text-only chatbot. Yeah, to lay it out:
text-only chatbot, two point one four humanity, uh,
the echoborg, four point oh five humanity, and then

(35:35):
the cyranoid had a humanity rating of eight point seven six...
eight point one five. So it sounds to me like
even in a normal human interaction, you're scoring, like,
a nine. Straight-up non-augmented human interactions, you're doing
pretty good. Yeah. So it does prove that there is
a humanizing effect to the echoborg system. It's just

(35:58):
the limitations of existing chatbots that are coming
into play, right. It makes you think: if there were
a really good chatbot out there, the echoborg might
be that finishing touch, that final push over the edge
of the cliff to sort of make, you know,
the perfect android. Yeah, kind of a bridge over the
uncanny valley, you know, like all you
need is just a little acting on top of this

(36:20):
to really bring the script to life. Now, they
also conducted a second experiment, in which they tested
whether fifty-eight new participants would guess whether
they were talking to an echoborg or a person
pretending to be an echoborg. So, you know, obviously:
a person who's actually fed their lines by a chatbot,
or someone who's just playing around and pretending to

(36:41):
be speaking the words of a chatbot. Right, so
this is a reverse Turing test: can you pretend to
be a machine? Right. But here's the catch: they're both
pure echoborgs. There's no fake echoborg.
But the individual being tested thinks that
one of the two echoborgs is just a person pretending.
So seventy-nine percent of the twenty-eight participants who

(37:01):
faced an echoborg thought the person was
just mimicking a chatbot. So they seemed fake,
but not inhuman. So if there is, you know, some
reason to believe that this is just a person being
a little fake and putting on a show, versus being,
you know, a complete echoborg, we're still going to
give them the benefit of the doubt. Interesting. Now,

(37:21):
I think we should do our duty and hedge a
little bit and say: all these sample sizes were pretty small,
and it would be interesting to see more research along
these lines, like with bigger sample sizes and trying to
repeat these results. And also, to your point,
as chatbots improve, it's going to be interesting to see
echoborgs employed as a way to test them.

(37:42):
And in fact, that's something that's pointed out in
the study: that this is a great way,
moving forward, to continue to analyze chatbots. Yeah,
but also, as we mentioned, the people who were interacting
with these cyranoids and echoborgs were, in
some of these cases, primed to expect something weird because

(38:03):
they're in a test environment. You can't hide the fact
that you're in a psychological test, and, you know,
whenever you're part of a test group, you're sort of
ready for some weirdness, you know. Now, I wonder
how this would go if you sprung these shadowers, these cyranoids,
and these echoborgs on people in a purely social scenario,

(38:26):
like we talked about at the beginning: a convention,
or, you know, a workplace meeting or a party, where people
weren't expecting anything strange to be going on. Right, until
they have to fill out a survey after they leave
the dinner party. "So what did you think of Susie
and her anecdotes?" Exactly. Yeah, I don't know why she

(38:46):
was so into fruit trees. She wouldn't shut up about it.
I mean, gardening is okay. But yeah. Now, something that
really stood out to me when we were going over this,
especially when you start thinking about the echoborg,
and what it would be
like to be an echoborg, and the sort of
pros and cons of being an echoborg,

(39:07):
of giving life to this will behind your will. It
reminded me a lot of transactive memory, which comes into
play really in two key areas. First of all, it's
the method by which we've always stored information in other people. Okay,
so you know those facts that you never remember because
your spouse remembers them, or it's a

(39:28):
particular spelling or a bit of trivia that you never
keep in your own head because you always look it
up on your smartphone. It's the same thing. So outsourcing
the memory. I think this must be one of
your favorite topics. You come back to this a lot.
And I think it's really interesting, because I see
it every day in my own life, and after I
read about it, it's all I see. It's
like the things that I forget, the things

(39:50):
my mind refuses to learn because I've outsourced it to
the ubiquitous technology. Yeah. Yeah, I think the story is that,
you know, Socrates was worried about writing, right? That if
we teach everybody writing and we have
writing on scrolls all the time, nobody's going to be
able to remember anything. You're not priming your memory. There
might be some truth to that. But then again,

(40:13):
are we not just, you know, by writing things down
and having Internet archives and things, becoming cyborgs in a way? Yeah,
I mean, in a sense, the voice is already whispering
in our ear. Yeah. I mean you could look at
it as a weakening of the human mind, or you
could look at it as a technological upgrade of the
human mind. Yeah. To what extent will we all become

(40:34):
echoborgs of a type? You know, where instead of
it being a situation where I'm just gonna be a
conduit for a powerful artificial intelligence, what if it's more
like, I want to augment my existing self, which I
think is pretty good, with, say, you know, an artificial
intelligence that'll feed me the right lines in, like, business
situations or social situations. So it's, you know, less focus

(40:57):
on "I'm just gonna be a meat puppet," but rather:
can I merge with this AI, maybe even just a
small percentage, you know, five percent AI, and become a better
person, a more effective person? Yeah. And these experiments, all
these examples, are word-for-word dictation. So whether
it's the shadower of the cyranoid type being

(41:18):
fed lines by a human, or the shadower of the echoborg type
being fed lines by a computer, it's all lines. You're getting
full sentences and you're just trained to say them as
fast as you can. I mean, I wonder if this
setup could be more conceptual in nature, or you know,
feeding you facts or feeding you sort of feedback on

(41:39):
the progress of the conversation. It also makes me think
about the comedic possibilities. Because you imagine an individual
in a sort of, you know, very updated Cyrano kind of
story, where the individual goes into a business situation, and
they thought they loaded, uh, you know, Business Helper four
oh one into their mind box, and
instead they put in Lothario four point oh. Exactly. Romance. Yeah, so

(42:03):
suddenly they're fed all these really effective lines, as if
you were, you know, in a bar, but instead you're
throwing them out there in the business meeting. Yeah,
I think it could work. "Long walks on the beach
have to do with the new rollout." Well, you know,
it brings to mind Black Mirror, the British television series.

(42:24):
It does such a fantastic job looking at the near-future
ramifications of our modern technology, often with very troubling results.
The Christmas episode they recently did, which I have not
seen yet, I think that's the only episode I haven't seen. Yeah,
I don't think it's made it to like Netflix here
in the States yet. And I'm not going to spoil anything,
but the initial setup involves one character who offers

(42:47):
a Cyrano de Bergerac kind of service to individuals out
there who need a little help with their pickup game.
Number one, that's super creepy, and number two, I
can totally see that being a real thing. Yeah, I
don't find that outlandish at all. Yeah. I mean,
that's the great thing about Black Mirror is that you

(43:08):
know that it leans in the sci-fi direction, but
not too far, and that it's perfect
science fiction because it speaks to the problems that we
have today and how we are viewing the problems to
come. Exactly. If you want to see some disturbingly plausible dystopias,
watch Black Mirror. But, you know, it ultimately leads to

(43:29):
the question: echoborgs. Is this a dystopian
idea, or is it a pretty cool idea?
Is it ultimately a utopian idea that would allow
us to be better individuals? I don't know. I mean,
would the ancient philosophers look at our relationship with the
contents of our computers and the web as a horrible dystopia?

(43:53):
It feels fine to me. But would they look at
that and, you know, react the same way
as we do to a Black Mirror episode? Probably. I mean,
also, they would look at our pants and say, what
are they doing? Where are their togas? Oh my god,
they're dressed like Persians! I don't understand. I don't know. Ultimately,
that's a question to throw out to the listeners. You know,
would you accept that job offer that we laid

(44:15):
out at the beginning of the episode? And furthermore,
would you augment yourself with some sort of mild echoborg
system, again, like, you know, a romance one
oh one or a business strategy one oh one
kind of program, to feed you the necessary lines
or even just ideas or facts that you might need
to make it through that business luncheon or dinner date.

(44:38):
You know, there's another aspect of transactive memory that I
want to drive home, one that plays in nicely with
the hybrid personality model of cyranoids and echoborgs, and
that is cross-cuing. Okay, this is something I
imagine a number of you can relate to. This
is when you're having a conversation with, let's say, you know,
a spouse or a partner, a close friend or family,

(45:00):
and neither of you is quite able to remember something
on your own, but when you start cuing each other,
you're able to remember it together in a way
that you wouldn't alone. You know, kind of Voltron style, you know,
bringing it together, and suddenly your combined power recalls
that memory, right? You're sort of pulling the triggers on
each other's brains. Yeah. So I think it's interesting to
think about that scenario, cross-cuing and transactive memory. Yeah,

(45:24):
especially because if you imagine this AI scenario where, you
know, you've got some kind of onboard artificial intelligence
that feeds you lines occasionally when you need them. You know,
you're not just a parrot for everything it says;
it's an occasional helper. How does it know when
you need help? You would need some
kind of cuing in the conversation, or even just in

(45:44):
your mind, for this thing to know: okay, I'm going
to step in. Like, every time you go "um," then
it starts feeding you the lines. Like, he's stalling,
throw him some good, uh, some good lingo there.
Every time you say the word interesting. Interesting. All right.
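That kind of cue-triggered prompter is easy to sketch in code. The trigger words and canned suggestions below are, of course, made up for illustration; a real assistant would condition on the whole conversation, not just the last word:

```python
# Toy "echoborg assistant": it stays quiet until the speaker stalls
# on a filler word, then offers a line to say next.
FILLER_CUES = {"um", "uh", "interesting"}  # hypothetical trigger words

SUGGESTIONS = [
    "Let's circle back to the quarterly numbers.",
    "Could you walk me through your reasoning there?",
]

def suggest_line(utterance, turn=0):
    """Return a suggested line if the utterance ends on a stall cue;
    otherwise return None and let the human keep speaking."""
    words = utterance.lower().rstrip(" .,!?").split()
    if words and words[-1] in FILLER_CUES:
        return SUGGESTIONS[turn % len(SUGGESTIONS)]
    return None
```

The design choice worth noticing is that the human stays in charge by default: the machine only fills the gaps the speaker signals, which is the partial-merge scenario rather than the full meat-puppet one.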
So there you have it. Uh, a strong episode, I think,
one that continues to resonate with listeners. We

(46:07):
actually heard back from a number of people on this.
I believe one listener in particular was really
excited about the prospect of this actually happening. Oh yeah,
he really wanted to know how he could be an
echoborg, and unfortunately we had to tell him, well,
that's not a profession yet. Yeah, so maybe you
could sign up for a study. Yeah, hopefully. So I hope
that guy has found a place to hook

(46:30):
him up. All right, So, hey, you want to find
out more about this topic and other topics. You want
to see what other podcast episodes though we've recorded headn't
know where the stuff to blow your mind dot com
that is the mothership. That's where we'll find all the episodes.
You'll find some videos, you'll find blog posts. You'll find
links out to our social media accounts such as Facebook
and Twitter. We are blow the mind on both of those.

(46:51):
Give us a follow on Tumblr. We are Stuff to
Blow Your Mind. Also, hey, wherever you listen to us,
be it iTunes, be it Spotify, be it Stitcher, be
it any of the various outlets out there, and we
get new ones every day: give us a little love,
give us a nice rating, give us a
nice review, if the platform allows you to do so.

(47:12):
That's a great way to help the show, absolutely. And
if you want to get in touch with us, as always,
you can email us at blow the mind at how
stuff works dot com. For more on this and thousands
of other topics, visit how stuff works dot com.
