
June 3, 2015 46 mins

What is social intelligence and can a robot possess it? We take a look at an emerging field of robotics and how challenging it is to make a machine that can socially interact with humans.




Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Brought to you by Toyota. Let's go places. Welcome to Forward Thinking. Hey there, and welcome to Forward Thinking, the podcast that looks at the future and says people are strange when you're a stranger. I'm Jonathan Strickland. I'm Lauren. And

(00:20):
I'm Joe McCormick, and I've got a story for y'all. Alright, let's gather around the campfire. It's more of an anecdote. Actually, it's not even that. Anyway, here's what it is. You've built it up so much, and now I'm already bound for disappointment. Oh no, please. So a while back I was writing a video episode for Forward Thinking about whether robots

(00:42):
are going to take our jobs. And if you haven't seen that video, yeah, you should go watch it. It's on YouTube. It's a great, great episode. It is. It is. But we have a spoiler for that video. Yeah, there is a spoiler. Actually, we say this in the video pretty early on, it's not at the end. Yes, they will. Robots will take our jobs, they'll take your job, they'll take our jobs, they'll take all the jobs. But

(01:04):
in doing the research for the episode, I came across
plenty of good reasons for thinking that some jobs are
much, much harder to automate than others. And one simple rubric for separating jobs into robot-friendly and robot-unfriendly is this: is the job easily described in an explicit

(01:24):
list of instructions that can be executed over and over? So, in other words, could it be something like: pick up blue box, open blue box, put orange sphere in blue box, close blue box? That's easily explained, and it would be
something that you could program as a list of actions
for a robot that has very simple image recognition software.
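To make that concrete, here is a minimal sketch of what an explicit, repeatable instruction list like that could look like as a program. The Robot class and its method names are invented stand-ins for illustration, not any real robot's API.

```python
# A hypothetical pick-and-place routine: the whole job is one fixed,
# explicit list of steps that can be executed over and over.
# "Robot" and its methods are invented stand-ins, not a real robot API.

class Robot:
    def locate(self, label: str) -> str:
        # Very simple image recognition: it only ever needs to know
        # what a "blue box" and an "orange sphere" look like.
        print(f"locating {label}")
        return label

    def open_lid(self, obj: str) -> None:
        print(f"opening {obj}")

    def close_lid(self, obj: str) -> None:
        print(f"closing {obj}")

    def place(self, obj: str, container: str) -> None:
        print(f"placing {obj} in {container}")


def run_cycle(robot: Robot) -> None:
    box = robot.locate("blue box")
    sphere = robot.locate("orange sphere")
    robot.open_lid(box)
    robot.place(sphere, box)
    robot.close_lid(box)


if __name__ == "__main__":
    robot = Robot()
    for _ in range(3):  # same steps, over and over, no changing conditions
        run_cycle(robot)
```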

(01:48):
It really just needs to know what a blue box
looks like and what an orange sphere looks like, and
then you're pretty much good. How to touch those things so that they don't shatter, whatever, right, and it's just gonna do the same things over and over again. There aren't going to be a lot of changing external conditions, or if there are, they might not matter necessarily. Jobs like this are pretty easy to automate, and they,

(02:11):
if they have not already gone to machines or computer programs or robots, they probably will soon. So a lot of examples of this might be jobs today in, like, data entry or data processing, or jobs like telemarketing.
A lot of people think that these kind of jobs,
because they're so repetitive and you can make clear lists
of instructions, can pretty easily be done by machines. Other

(02:34):
jobs are very robot unfriendly. They're harder to automate because
they require things that are harder to predict. Like, you can't write a program that does unpredictable things. I mean, programs are pretty much predictable by definition. And
we've talked about random number generators and how difficult it

(02:55):
is to have a truly random number generator. Yeah. Yeah.
If you say to a robot, literally any person who exists in the world could walk through that door, how do you deal with that person, it's going to go, like, and just shut down. Yeah? Yeah, yeah. So these jobs, the hard-to-automate jobs, are the ones that require creativity, strategizing, reacting to unpredictable circumstances

(03:20):
and stuff like that. And there are some jobs, I thought it was interesting, that sound like they fit into the first category but actually fit better into the second. Well, and there are also some that sound like they would fit in the second but seem to fit just fine in the first, like you mentioned telemarketing. Well, telemarketing involves having conversations with actual people, and you might think, oh,

(03:41):
well, a robot would have, wouldn't there be some unpredictability? But often in those cases there's a very specific script to follow, whether you're human or robot, and you have a very, it's essentially kind of a very simple decision tree, right? You're trying to get the person on the line to say yes or no in some cases, and as long as it continues,

(04:05):
it can keep going down that pathway. And we've even
seen some interesting clever uses of natural language recognition to
create situations where a person is not entirely sure if they're actually speaking to a human being or a robot telemarketer. Yeah, and that might be some cases where, even if it is really a human, they're like trying to get the human to behave like a robot.
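As a rough illustration of how rigid that kind of script is, here is a toy decision tree for a scripted robo-call. The wording, node names, and the crude yes/no matching are invented for the example; a real system would rely on proper speech recognition.

```python
# A toy decision tree for a scripted call: at each node the caller says a
# line, listens for roughly "yes" or "no", and moves down one branch.
# The script and its wording are invented for illustration.

SCRIPT = {
    "start": {
        "say": "Hi! Can I tell you about our exciting offer?",
        "yes": "pitch",
        "no": "goodbye",
    },
    "pitch": {
        "say": "Great! Would you like to sign up today?",
        "yes": "signup",
        "no": "goodbye",
    },
    "signup": {"say": "Wonderful, transferring you now.", "yes": None, "no": None},
    "goodbye": {"say": "No problem, have a nice day!", "yes": None, "no": None},
}


def run_call(answers):
    """Walk the decision tree using an iterable of 'yes'/'no' answers."""
    node = "start"
    answers = iter(answers)
    while node is not None:
        step = SCRIPT[node]
        print("CALLER:", step["say"])
        if step["yes"] is None and step["no"] is None:
            break  # leaf node: the call is over
        reply = next(answers, "no")  # stand-in for "natural language recognition"
        node = step["yes"] if reply.strip().lower().startswith("y") else step["no"]


if __name__ == "__main__":
    run_call(["yes", "no"])  # prints one path through the tree
```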

(04:27):
So these kinds of robo calls, like that, that's prime territory for automation, especially also if it's low stakes, you know, because the people, the bosses, like, aren't terrified about what's going to happen if the person being called has a bad experience. You know, it's just like, I don't care, right, right.

(04:48):
But let's take an example of something from our favorite movie,
Back to the Future Part II. Right. So, in the opposite category is stuff that seems like it might be easy to automate, but that actually would probably be really hard to automate, like robot servers in a restaurant. Yeah,
so a robot server in a restaurant, what would it

(05:08):
have to do? It seems like you could basically come
up with a simple list of instructions like take drink orders,
bring drinks, take the food order, bring the food, check
the happiness levels at the table, bring the check, uh,
process payment, and then you're done. But actually being a
server in a restaurant requires hundreds of constant little improvisations

(05:32):
uh, plus more athletic nimbleness than any robot today is
even close to capable of. I mean, can you can
you imagine with how clumsy robots are now, something that
could like clean tables and move quickly back and forth
between a kitchen and a table without running into people.
Keep in mind also that most robots are very very
good at navigating static environments. So in other words, you

(05:55):
program in the knowledge of what the environment is, you
know how it is, and give the robot some ability
to sense its environment. It can then navigate around obstacles.
But in a place like a restaurant, you have a
constantly shifting environment. You've got people getting up and leaving,
You've got chairs that are moving. So you have a

(06:15):
robot that has to have really good path finding technology
to maneuver its way through a constantly shifting environment. That's
really hard to do. Sure, but there's another factor that we haven't considered yet, which is, you know, in defense of servers, we know that you guys are not robots. Yeah, there's other stuff that you're doing than just taking the orders

(06:38):
and bringing the stuff exactly right. And I think this
might actually be the biggest problem with the idea of
a robot server, which is that servers need social intelligence. Good ones do. Yeah, well, this might be the biggest
issue of all here, because a server in a restaurant
isn't just a food delivery machine. A good server makes

(06:58):
people feel welcome, makes them feel comfortable, uses charm to, like, upsell drinks and specials and appetizers, can explain
things about the restaurant and answer questions about the menu,
explain things about the food, describe what something tastes like,
and then especially can sort of like calm complaints and

(07:20):
help people get what they want, understand what the people want,
and make them feel happy. Like can you imagine, let's
say you're out with your family and, you know, your great aunt, she has a complaint about her chicken piccata.
Can you imagine a robot server being able to make
her feel happy and like make her feel like her

(07:42):
complaint had been properly dealt with. I can't imagine anyone making my great aunt feel happy. Well, well, but especially not the same kind of software that, like, answers your telephone call when you call the insurance company, and like, that thing is what's trying to, like, make you feel good. Or maybe it's the chatbot that only answers

(08:04):
any message with a question: I'm not happy. Why do you think you're not happy? The doctor, yeah. So, in
the context of robots taking our jobs, I think this
also means that, in addition to creativity and improvisation, another
type of job that will be safely human for a while,

(08:25):
are jobs that have a strong demand for good social intelligence.
So what does social intelligence mean, I mean, we we
sort of gave an idea there, but how would we
define it? I would say one thing is that it
means something along the lines of being able to read
social cues, to perceive and understand emotional states, and to

(08:46):
intentionally manage and trigger targeted emotions in others. Right. In other words, it has to be able
to navigate social situations in a way that appears to
be natural and does not make things worse, right, doesn't
make someone feel awkward or embarrassed or ill at ease

(09:07):
for any reason, whether because you are communicating with an obviously artificial construct, which already, for many of us, is uncomfortable. Right. Yeah, some of us... We, in fact, humans are pretty flexible creatures. We can adapt fairly quickly if we're willing to. Sure, sure. And some of

(09:27):
us, I mean, I suspect many people listening to this podcast, for example, probably find it charming, the novelty of that kind of interaction, although really, like, after a while, you'd be like, I just want my non-burnt chicken piccata. And some of us might actually prefer the interaction with a robot as opposed to a human being. That's true.

(09:48):
That's true. I have days where I'm not sure that I can do this list of things that we're talking about here that gets grouped under social intelligence. That's true. Well, but I mean, we all have to practice our social intelligence skills all the time. Oh sure, yeah, no, this is not something that... It's not something even humans are always perfect at, right. Now, there are plenty of times where you might be wrapped up in

(10:08):
something that's going on in your life and you don't
pick up on social cues that otherwise you would notice
right away. Um, there are plenty of times. I mean
I certainly have been guilty of that because I'm I'm
always wrapped up in my own mind. Uh So, you know,
when we're when we're aware, and when we are capable
of seeing it, then that's one thing. But machines, they

(10:31):
don't have that innate ability at all. Right, you have
to build that into machines. And now there is an
entire emerging field of artificial intelligence that is dedicated to
this problem, to the problem of helping machines understand human
emotional states and manage them as best as possible, and

(10:52):
sort of all the other things we think of as
social intelligence, you know, impression management and navigating complex social scenarios, not being annoying, you know, all these weird things that don't come naturally to a machine. Right. And, you know, this is important not just for robots
that are specifically designed to have social interactions, but robots

(11:15):
that are just going to be around humans, you know, especially around consumers, right? Because, you know, robot industry people are kind of used to having to program a robot by typing code into a thing, you know. But consumers, yeah. And it may be that the robots that you, as a consumer, encounter,

(11:38):
it may be that there's no direct interaction, but those robots still need to have that social intelligence to understand how to navigate through while being as unobtrusive as possible. It's one of the many things that we have to take into consideration as robots take an increasingly prominent role in our lives, along with the

(11:58):
idea that the robots have to be designed in such a way that they are not likely to cause harm to people through their normal operations. I mean, this is why, like we've talked about before, with big industrial robots, typically there are huge safety rails all around them, because these
are just machines that do the same set of actions

(12:19):
over and over again. And if you let a team full of preschoolers run around, you get a bunch of preschoolers welded to a metal wall. It's not good. You don't want that. So we're talking about the social equivalent of that same thing. You don't want the social equivalent of welding preschoolers to a wall.

(12:40):
I'm not sure what that looks like, but I kind of want... Can someone please make us... I'll take you to my next family reunion. You will be able to experience it. No, here, I think here's the real-world takeaway. We're gonna have robots in our homes. Yeah, I mean, that's pretty clear. We already... Yeah, we already might have some Roomba or something like that. More

(13:01):
and more we're gonna be incorporating robotics and artificial intelligence
into our home life. And it actually matters that these
things are not causing emotional stress and annoyance, like making
our lives more unhappy by not knowing how to behave themselves. Yeah,
it's a great point. And so there are entire research

(13:24):
departments dedicated to this particular aspect of artificial intelligence. And in fact, there's one that's right down the road from us, right here at Georgia Tech. They've got a Socially Intelligent Machines Lab. Yeah. We'll talk about one of the robots that they've worked on a little bit later, but I've had a chance to visit that particular lab. Yeah, it's pretty darn cool, I guess. Don't mean to brag, but it's

(13:45):
pretty neat. Even as even as a University of Georgia graduate,
I can appreciate this particular Georgia Tech lab. Uh. So
let's talk a little bit about what it means for
robots to actually possess social intelligence. You touched on this, Joe,
you were talking about the ability to pick up on
those social cues. So I would I would say that

(14:07):
a socially intelligent machine has to be able to observe, analyze,
and respond to humans accurately and within the proper context
of the situation. So one example, and I've seen particular examples of this in Japan, robots that are part of a hotel check-in phase. You come in and you need to get your room. You would want that

(14:31):
experience to be efficient and pleasant and probably not reliant
on too many assumptions because that could get socially awkward
depending upon what the nature of the hotel visit could be. Right. Sure, you wouldn't want, for example, you know, ah, sir, you have a lovely daughter. This is my wife. That

(14:51):
could be awkward, that kind of thing. Yes, that's one of the tamer examples I could give, but yes. Context is really important. So let's say, here's another example. I've actually, you know... We've talked about, I'm not sure if we've talked about it on this podcast, but there was a design for a flying drone that's

(15:11):
meant to encourage joggers to run. It was to keep
pace with joggers so that they feel like they have a running buddy. So, you know. Now, that's
a simple version of a robot. But imagine that we
have a more advanced one that actually gives verbal encouragement
so that you can continue to run. Now, let's imagine
that said robot veers off the path and into a
memorial service at a cemetery. I think saying you can

(15:35):
do it, get on up there, keep on moving, at a memorial service... Yeah, that might not go over so well in the social situation that is a memorial service. So when we're talking about social intelligence, we're
talking about not just responding within the situation the robot
was intended for, but in broader situations. Sure. And I

(15:55):
think also side note, this means that we need to
teach all robots in the future about graveyards and funeral practices.
I think why they're cool. Yeah, now I'm just seeing
a bunch of robots listening to the Smiths, and the
robot says, hey, baby, let's go hang out in the graveyard.
I'm feeling better and better about the lyric I picked

(16:16):
for the beginning of this episode. Okay, So, in addition
to understanding context, I think another crucial part of social
intelligence is the ability to incorporate new information, right? Because, right. Like, when we interact with people, we don't treat everybody the same way. I mean, because not

(16:38):
everybody wants to be treated the same way you treat
everybody else. You learn different ways to interact with different
people based on what your relationship with them is, and
what their preferences are. Some people like very jokey interactions,
some people are a little more business like. Yeah, there's
entire fields dedicated to this as well. And it's interesting

(16:58):
because a lot of the learning behaviors I've seen have concentrated first on something a little simpler, because when we start getting into social interactions, that's a really complex and chaotic field. So, I mean, if you feel stress in social situations, like social anxiety, a lot of the stress you're probably feeling has to do with

(17:21):
the complexity of the social interactions that we have, and your fear that you're not meeting expectations, right, being unsure that the choices that you're making are good choices. Right, yeah. Now, that's a great point. Now, obviously the robots aren't going to be experiencing social anxiety, that we know of, but they could certainly encourage

(17:43):
it in other people if the robots are not behaving in a way that the people were expecting. If they have said something and the robot seems to behave in a completely counterintuitive way, that could really cause some issues. Oh yeah, well, I was just trying to communicate the complexity, like, yeah, us humans who are pretty good at social interaction have enough trouble with it that sometimes we can get anxiety about it. So,

(18:06):
a lot of the learning I've seen has been oriented
to teaching a robot to perform a series of tasks
by performing them for the robot. Actually, there's one at Georgia Tech that does this, where you could have a robot and it goes into observation mode and watches what you do. So, let's say that you've got, you know, kind of going

(18:28):
back to that example of the sphere in the box.
Let's say you've got a sphere on a desk and a box on the desk, and you open up the box, you pick up the sphere, you put the sphere in the box, you close the box. The robot will observe this and then be able to repeat those steps. And, if you've designed the robot properly, it might be able to ask questions like, does it matter

(18:52):
how I pick up this thing? Does that matter? Does it matter how I close the box? If it's four flaps, does it matter which flaps are closed first? Is there a sequence? Right, if it were a cube that you were putting inside the box instead of a sphere, does it matter which side of the cube is facing up? Perfect, exactly right. So that's, not to make a weird, stupid pun, a building block.
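Here is a very rough sketch of that observe, record, replay, and ask idea. It is a toy model written for illustration, not the Georgia Tech system; the class names and the way clarifying questions are generated are assumptions.

```python
# Toy learning-from-demonstration: the robot records the steps it watches,
# can replay them, and flags details the demonstration left ambiguous so it
# can ask the teacher about them. Purely illustrative.

from typing import NamedTuple


class Step(NamedTuple):
    action: str       # e.g. "open", "pick_up", "place", "close"
    target: str       # e.g. "box", "sphere"
    detail: str = ""  # anything the robot is unsure about


class DemoLearner:
    def __init__(self) -> None:
        self.steps: list[Step] = []

    def observe(self, action: str, target: str, detail: str = "") -> None:
        self.steps.append(Step(action, target, detail))

    def questions(self) -> list[str]:
        # "Active learning": ask about details the demonstration didn't pin down.
        return [f"Does it matter {s.detail} when I {s.action} the {s.target}?"
                for s in self.steps if s.detail]

    def replay(self) -> None:
        for s in self.steps:
            print(f"robot: {s.action} {s.target}")


if __name__ == "__main__":
    learner = DemoLearner()
    learner.observe("open", "box")
    learner.observe("pick_up", "sphere", detail="how I grip it")
    learner.observe("place", "sphere")
    learner.observe("close", "box", detail="which flap goes first")
    learner.replay()
    for q in learner.questions():
        print("robot asks:", q)
```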

(19:16):
But that's a building block for the kinds of interactions where a robot starts to get into more complex situations, such as: this person is exhibiting behaviors that indicate that they are in a happy mood. So what is the appropriate response for this particular person when they are in a happy mood? Maybe it's playing that person's favorite upbeat

(19:39):
song because it's going to enhance that happy mood. And
then that's what the robot ends up doing. Uh, these are, you know... it's going to take these basic steps before you get into something as complex as, all right, now we've got two people, two people the robot has to interact with. If we're talking about a home robot,

(20:00):
it might have profiles for each member of the family, for example. But it knows that person number two is not really into music, so playing songs isn't going to have that same emotional effect as for the first person. So it may have to build an entirely different set of interactions, just as we humans would kind of intuitively know from our own interactions what works and doesn't work with the people in our lives.
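A tiny sketch of what per-person profiles like that could look like in code. The names, moods, and responses are all invented for illustration; a real home robot would presumably learn and update these rather than hard-code them.

```python
# A toy per-person profile table: the same detected mood maps to different
# responses for different household members. Names and preferences invented.

PROFILES = {
    "person_one": {  # really into music
        "happy": "play favorite upbeat song",
        "sad": "play comforting playlist",
    },
    "person_two": {  # not into music at all
        "happy": "suggest a walk outside",
        "sad": "dim the lights and stay quiet",
    },
}


def respond(person: str, mood: str) -> str:
    profile = PROFILES.get(person, {})
    # Fall back to doing nothing rather than guessing: a socially awkward
    # action is often worse than no action at all.
    return profile.get(mood, "do nothing")


if __name__ == "__main__":
    print(respond("person_one", "happy"))  # play favorite upbeat song
    print(respond("person_two", "happy"))  # suggest a walk outside
```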

(20:21):
And you start to realize, wow, this is tough. This is a non-trivial challenge in artificial intelligence. Absolutely. Let's talk about some of the teams and some of the robots that are taking these challenges head on. Sure. Yeah,

(20:42):
so we've got a collection of different robots. These are mainly robots designed specifically to have social interactions with humans, and, we should say, lots of computer programs and robots have some elements of this. Yeah. Um, I mean, typically you're not going to be interacting with a robot that's just designed to be social, because what's

(21:04):
the point, other than a novelty? I mean, typically social intelligence is a feature of a robot that's intended to do something else. Yeah, except for, I guess, therapy. But yeah, that's true. Therapy bots would be a great example. Yeah. I started to think that in some of these cases, we're heading toward a future like in Futurama, where we have robots built to be hoboes

(21:28):
or gamblers or whatever, and you think, why would anyone ever build a robot like that? But we are doing that. Yeah. So one robot, and I actually watched this adorable video earlier today, was this robot called Jibo. Had you heard about this one before? I have. I had seen pictures. I had not seen the video or read about Jibo,

(21:48):
but I had seen photos of Jibo before. Okay, so this is nothing super complex. Jibo is just supposed to be a social robot that's really designed as a kind of around-the-house helper. So imagine kind of an embodied Siri, in a way. It can track faces and recognize individual household members, take pictures, record voice reminders, um,

(22:11):
and do a lot of the stuff you might have apps for on your smartphone. It can even do things like, um, sort of a similar feature as FaceTime, where you are in one place, one of your family members is somewhere else, they are using their smartphone, and then Jibo just projects their face onto its face. So you have a direction to look in, and it's

(22:32):
an actual physical thing. It's not something you're holding in your hand and looking down at. Jibo's quote-unquote face is really kind of a round screen. Yeah, it's this kind of friendly eye, yeah. And it has a camera in it so it can quote-unquote see, which allows it to do the tracking. Now it doesn't, like, walk around the house or anything. It's sort of on a stationary base. But you could

(22:53):
pick it up and move it around with you. It's a little more than five pounds, I think. And the thing about this is, nothing about this robot is, like, crazy mind-blowing, you know. It's not like it's doing anything all that weird. It's sort of a cuddly smartphone or cuddly Siri kind of thing. You can talk to it,

(23:14):
it'll talk back to you. It tries to, uh to
be friendly and fit in around the house. Right. So
it's supposed to kind of provide a lot of those
features that you would find in a smartphone, but in
a way that is more socially interactive. Sure, sure, and that's a little bit more relatable, because your smartphone, although I think that we all have certain attributes that

(23:36):
we give it, we don't really, you know, name them. And no, I'll never name that phone. It's the phone. Phone. Uh, so the next one we wanted to talk about is Pepper. Okay, so Pepper is, according to the company that makes it, Aldebaran, the first humanoid

(24:00):
robot designed to live with humans. And it's a conversational robot,
and it's supposed to detect and react to emotions, kind
of like what we were talking about earlier. It can move about on its own, and that's about all it can do, is have conversations and move around autonomously, so you don't have to direct it. Um, and, uh, it watches your

(24:21):
facial expressions and your body language for social cues to your emotional state, and it pays attention to your word choice to try and get a handle on what kind of mood you might be in. But so I guess it essentially builds an index of words that you are more likely to use when you are angry, perhaps of the four-letter variety. Huh, yeah.
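As a crude sketch of what paying attention to word choice could mean in practice, here is a toy word-list mood guesser. This is only a guess at the general idea, not Pepper's actual software, and the word lists are invented.

```python
# A crude word-choice mood guesser: count how many words in an utterance
# appear in small hand-made word lists. A sketch of the idea only.

ANGRY_WORDS = {"hate", "stupid", "broken", "terrible", "useless"}
HAPPY_WORDS = {"love", "great", "awesome", "thanks", "wonderful"}


def guess_mood(utterance: str) -> str:
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    angry = len(words & ANGRY_WORDS)
    happy = len(words & HAPPY_WORDS)
    if angry > happy:
        return "angry"
    if happy > angry:
        return "happy"
    return "neutral"


if __name__ == "__main__":
    print(guess_mood("This stupid thing is broken again"))  # angry
    print(guess_mood("Thanks, that was great!"))            # happy
```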

(24:43):
But it is meant to have these social interactions and responses so that it has appropriate responses for whatever mood you happen to be in. You know, I hate to sound cynical, but I wonder to what extent, especially early versions of these kinds of robots, are going to be, again, like Siri, not in what they do, but in how people treat them,

(25:04):
because I can't remember the figure, but some significant percentage of what people say to Siri is just profanity. Like, they just think it's funny, you know, like, what will Siri say if I tell her to do this rude thing? You know? My favorite was, Siri, where can I hide a body? And the response was, the nearest

(25:25):
quarry is three... Like, that is amazing. Yeah, well, you know, it's that novelty factor before you get really used to something and before something works extremely well. I think that we're gonna be treating these robots like robots, and, you know, it's like playing a text-based adventure

(25:45):
and just, like, typing in weird things just to see what the programmers thought, yeah, to put inside. And then we'll reach the point where we'll get more
and more movies and this is already the case, but
we'll get more and more movies that give us an
emotional attachment to robots. So we'll see that more and

(26:05):
more in the pop culture, and then gradually the technology
will also reach that same level where we will start
to have emotional attachments to these robots beyond what you might call the superficial ones that we have now. Like, there are people who, if their Roomba were to get damaged, would feel genuine emotional stress about that,

(26:27):
not just related to, I need to replace my Roomba, but more like, a member of the family has been hurt kind of thing. Yeah. Yeah, absolutely. And, uh, I think that Jarvis from the Iron Man movies is a really good example of that. And there is a robot called Emo Spark that was kind of reminding us, in the research, of that sort of Jarvis character. Yeah,

(26:48):
so this isn't really a robot. Yeah, this is more
like an artificial intelligence that lives inside a cube. So
because the cube is a cube, you could set it on a shelf and... but it's a cube, open it up and let the Cenobites out. Cinnabon? Having a little

(27:09):
Homer Simpson moment. Uh, but no, it is like Jarvis in that it feels almost like it's a disembodied artificial intelligence that can inhabit a space within your home. So imagine you've got this cube. The cube is, you know, the hardware that contains the software that runs this Emo Spark robot. Uh, the Emo

(27:29):
Spark robot can sense and interact with its environment through
things like Bluetooth and WiFi. It also uses webcams and
microphones and a connection to your smartphone. So this is
what gives it that window to the world, and it can recognize people. It can look at facial cues, again, to determine moods. It actually maps your face against

(27:51):
eighty different points and tries to figure out your mood based upon your expression. So, um, yeah, this is kind of an interesting design. If you want to take a look at the Emo Spark, it does look like this kind of, you know, Cenobite-meets-Tron cube thing.
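For a rough sense of what mapping a face against a set of points can mean, here is a toy expression classifier over a few facial landmarks. The landmark names, thresholds, and rule are invented and far simpler than a real system, which would track something like the eighty points mentioned above.

```python
# Toy expression classifier over facial landmarks. A real system maps many
# tracked points; here we pretend we already have a few of them and apply a
# hand-made rule. Features and thresholds are invented for illustration.

def classify_expression(landmarks: dict[str, tuple[float, float]]) -> str:
    """landmarks: name -> (x, y) in normalized image coordinates (y grows down)."""
    left = landmarks["mouth_left"]
    right = landmarks["mouth_right"]
    center = landmarks["mouth_center"]
    corner_height = (left[1] + right[1]) / 2
    # Mouth corners sitting higher than the mouth center suggests a smile.
    if corner_height < center[1] - 0.02:
        return "happy"
    if corner_height > center[1] + 0.02:
        return "unhappy"
    return "neutral"


if __name__ == "__main__":
    smiling = {"mouth_left": (0.35, 0.70), "mouth_right": (0.65, 0.70),
               "mouth_center": (0.50, 0.75)}
    frowning = {"mouth_left": (0.35, 0.80), "mouth_right": (0.65, 0.80),
                "mouth_center": (0.50, 0.75)}
    print(classify_expression(smiling))   # happy
    print(classify_expression(frowning))  # unhappy
```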

(28:11):
And yet, you know, it's one of those that could potentially have this sort of natural interaction. You know, we are seeing this kind of stuff in the homes already with things like the Xbox and PlayStation controllers that allow for voice control. So this is kind of the next step, where it goes beyond a passive system that's trying to recognize voice commands and then

(28:35):
respond to them, and have one that can actually have some form of communication back to the person who is issuing the commands in the first place. So it becomes not just a list of commands and then responses, but an actual communication, a conversation that leads to whatever outcome the person, you know, had in mind. Yeah. Uh,

(28:56):
And I wanted to talk for a second going back
to the stuff that they're working on at Georgia Tech
in their Socially Intelligent Machines Lab, because they're kind of working on more of the building blocks of a more practical household robot and how people are going to be interacting with that, because part of creating an ease

(29:17):
of interaction between consumers and robots is going to be
making robots that can learn new tasks with minimal effort on the part of the owner, right. Right, so you
don't have to sit at a terminal and type lines
of code so it can open up a can of
cat food. Yeah. Yeah, And kind of the first half
of that was what we were talking about earlier with

(29:39):
with that unsupervised learning, wherein the robot is watching a human complete a task. And, you know, even if it doesn't have any idea what the objects in this task are, you know, if it needs to set a table, it doesn't know what cups and plates and food are, but it can be taught to know

(30:01):
what those things look like and where to place them in a table setting. Um, right, that plates come first, food comes later, otherwise people are unhappy. Right, right. Sure, exactly, that kind of stuff. And when you break it down into those kinds of steps, it becomes a relatively easy thing for a robot to learn how to do. But
that's really only half of the equation, because the second

(30:22):
half of how humans naturally learn how to do stuff is being able to ask questions of their teacher. And that's what they're calling active learning in robot circles, and it's letting the robot ask questions of the human teacher while it's being presented with this new task that it's watching. But that's a trickier

(30:44):
question than you might expect, like having a robot know how to ask the question to figure out what it needs to figure out. Right, it comes naturally to us, but it doesn't necessarily come naturally to a machine. Right. So, sure, you have to teach a robot when it's actually permissible to ask a question, and furthermore, how to ask a good question. What does the question mean?

(31:06):
And then, what does the answer mean? Like, to ask a good question, you have to understand what relevant information is, to know that you need it, right. Yeah, and in fact, the example robot we were specifically talking about is called Simon, the robot. Um, but yeah, and I got to see Simon. That Simon was the

(31:26):
one I was talking about specifically. I got to see Simon. Uh, didn't get to see Simon really in full, um, in full robot mode. But I had a great conversation with Henrik Christensen, who is the head of robotics over at Georgia Tech, and he was telling me all about this approach. And it's so interesting, the idea that the robot can start to build this kind of database,

(31:51):
this index of information of the various things in its environment,
the relationships between those things. Uh, which ones are meant to be used in certain ways. It's learning in a similar way to the way humans learn, but it's already got a huge step up in

(32:11):
that it's got a basic grammar, syntax, and vocabulary that it can follow. It doesn't have to learn the language. It just has to learn, you know, has to be able to learn how to interact with its environment. And granted, even that is going to be limited based upon what the programmers anticipate. Right, there may be, and in fact there will be, situations that are unique to every home

(32:34):
that a programmer could not anticipate. And in some cases, I'm sure there will be times where it will be very difficult to explain to the robot what something is for, or how it's used, or why it's there. It will be tricky. And, um, you know, the question then is how do you address those outliers, things that

(32:57):
are outside the normal experience? How do you explain to the robot how to use your Furby? Why are you explaining that to your robot? I mean, other than the fact that that would be adorable. Okay, I just answered my own question. No, now I'm thinking about it, and I totally want a robot. The robot and Furby

(33:17):
are plotting against you. They're having conversations deep into the night. But so yeah, this lab at Georgia Tech is laying some of this really cool groundwork, um, for, you know... It sounds like a lot of these other robots are kind of fun, but aside from the interesting fact that they exist, they may be

(33:39):
moving towards something like being a therapy bot, or being able to interact with a human, uh, doing household chores and stuff like that. But I mean, yeah, yeah, yeah, I mean, all the pieces are kind of out there. Yeah, I think all of these are sort of making steps in the right direction. Though at the same time, as impressive as all this is, we recognize how big the

(34:02):
gap is between this and like Rosie from the Jetsons. Yeah, yeah,
the best socially intelligent robots today are nothing like a human.
And another question is, do we want them to be? Yes. Like, if we... so you say yes? Yes. I don't know. I think the question might be, um, should socially intelligent robots be like well-mannered humans, or should they

(34:25):
be a different kind of entity? No? No, I personally, uh, well, I don't think I would define what a socially intelligent robot should or should not really be like, as much as I joke about it. But, well, no, I mean, I know we're not, like, setting the rules. I guess what I'm asking is, what do people, what

(34:46):
do most people really want? Yeah, yeah, I mean, I think that I would be creeped out by something trying to interact with me as a human, or as it thinks, or was programmed to think, that a human would interact. You know, I think that it may be
that we define like a subset of behaviors that we
expect from and are comfortable with as far as a

(35:07):
robotic entity is concerned. So things that you know, these
are the kind of interactions that I think will be fine.
They're very straightforward. Uh, you know, outside of things like
the robots that are meant for therapy or of of
one nature or another, those obviously you need to have
more of a kind of comforting approach. But for robots

(35:29):
that we're having interactions with on a daily basis, Like
if it's a robot that, for example, Uber has famously
invested a lot of money in autonomous cars, you would
want that interaction to be pretty smooth and probably pretty quiet.
If I know most people who have complained about Uber,
it's that. Listen, when I get

(35:51):
an Uber car, I don't want you talking to me.
I just want you to take me to the place
where I wanted to go, and then I want to
get out of the car, and then I want to
be on my merry little way. Um, don't remind me that you're a human. See, I don't have that particular interaction.
I'll talk if they want to talk, but I don't
want to initiate conversation because I figure this is someone
who interacts. I can hardly believe that about you, Jonathan.

(36:14):
It's once in a while I'm able to be considerate of other human beings. But all right, but this is a point, right, where in this particular context, where Uber would have autonomous cars, which are technically robots, I mean, that's essentially a robot. What kind of social intelligence do those robots need in order to be able to, uh, do their

(36:37):
job and do so in a way that is the
most satisfying experience to the customer. Well, I've got an
analogy for household robots and whether or not we should
expect their social intelligence to be like that of humans
or to be a different kind of thing. How about dogs?
I mean, we love dogs, but, you know, there's a difference between a well-behaved dog and an ill-mannered,

(37:00):
badly behaved dog. But a well-mannered dog doesn't act like a human. It's just a different kind of entity. And I wonder if well-mannered robots, we would expect to be just a different kind of thing than humans. I think it will also depend on the form factor. Like, if the robot is a humanoid robot, would we expect it to have more human-like traits?

(37:22):
Oh yeah? Oh man, a humanoid robot that acted more dog-like, well, not necessarily dog-like... cat-like would be pretty awful. But, well, well, but no, no. I was thinking about that too, and I'm not sure. I'm not sure how comfortable I would be giving a robot any

(37:43):
orders in my house. Like, I don't know, it's a very basic thing, that I'm not sure if I could, in good conscience, say, hey, robot, go make me some toast, I've got these really important Facebook articles. Well, when you put bread in a toaster, do you ask the toaster nicely to toast your bread, or do you just push the button and walk away? That

(38:04):
toaster is just a slave to your demands. I'm just saying, like,
but I can see what you're saying in terms of the emotional situation, like if it were too human. One, I certainly wouldn't want a robot to do my bidding
that was indistinguishable from a human, because I would feel
like I was ordering a human around, which I mean

(38:25):
that would be creepy. Uh. To be more serious though,
going down this this pathway of of you know, let's
let's get a little philosophical, it's also possible that we
could have these very socially interactive robots that people would
feel comfortable talking to and confiding in in ways that

(38:49):
they might not with another person. Particularly if there's like
a problem that's weighing on their mind that they want
to express, but they don't feel comfortable talking about it
to anybody in their circle, their support group, they might want to talk to a robot. Which, if that happens, and I assume it will happen, the next logical thing

(39:10):
that will happen is people will design robots specifically to get information from folks. Uh, sure, sort of like a monitor-your-children Barbie, yeah, or, you know, uh, or spy upon your employees to find out who's actually leaking corporate secrets to your... So you plant a robot employee among them that sits at the lunch table. No,

(39:33):
all you have to do is, you work at a big company, and everyone at that company is told, hey, guess what, we're making this awesome robot that's gonna help people at home, and everyone who works here gets a free one. Tell us what you really think about the boss. I'm just saying, like, free robots. Like, if we were offered a free robot, I'd likely be one
of the people saying, I will take that free robot.

(39:56):
Can this free robot carry me the whole way home? Because I don't know that I can carry the robot home, and one of us is going to be doing a lot of the work, so I want it to be the robot. In this case, you could call an Uber. But anyway, yeah, get that autonomous car to come pick up me and my robot buddy, and then I'm outnumbered. I've got two. But yeah, the point being that, I

(40:20):
mean, there are actual people who have brought this up, who have written papers on this subject. The idea that if we get to a point with socially intelligent robots where we feel comfortable enough to confide in them in order to receive comfort, then there is already the incentive to build robots specifically to gather information that you might want to remain private. So, in other words, if

(40:43):
people do this behavior, and if there's a way to profit from that behavior, someone is going to act upon that. That is perfectly logical in the way our world works. It's not the bright, beautiful future that I want, where, you know, you can have these interactions and then be assured that they're between you and the

(41:05):
robot that you're talking to. Um, well, yeah, yeah, I mean, eventually you've just got a Do Androids Dream of Electric Sheep? scenario, where you've got probably a robot interrogating probably another robot. Right. There was this turtle that I came across on my way here. I just left it there in the middle of the BeltLine.

(41:25):
Well no, first you turned it over on its back. Yeah. It was odd too, because you would have thought it would have been a tortoise, not a turtle. But it was definitely a turtle. That was kind of strange. There are ponds along the BeltLine? There are, there are. All right, so, uh, socially intelligent robots are going to continue to be a thing. I am curious. Do you guys have any guesses, like,

(41:49):
I know this is putting you on the spot, but
like a prediction of when we will get to a
point where a robot will be socially intelligent enough to interact in a typical social setting. Let's say, I don't know, a cotillion. Oh, for years... let's play it.

(42:26):
Uh, no. Um, I'm not sure. I think, honestly... oh, technology is changing so fast these days. Well, I'm tempted to say some huge number, like fifty to a
hundred years. I think my answer would play on the
same thing I was just talking about a few minutes ago.
It would be the difference between something that's very convincingly

(42:48):
human versus something that's just very socially pleasant and acceptable
but not necessarily human. It's its own paradigm, it's its
own thing. I think the latter is going to happen much sooner. In fact, I might argue that the latter already exists and it's just going to be refined. Yeah, I'm
going with fifteen years. That's when we're gonna see robots

(43:11):
capable of interacting in social situations as well as your typical human. Wow. I believe that it could happen. I mean, I'm dubious. It seems like a lot. I mean, we're coming a long way in terms of natural language processing and all of that, but it's so much.

(43:31):
Mostly I'm confident that by then no one's going to listen to this episode and fact-check me. So I'll be all right. I feel pretty confident about that. That's a good way to play the game. It's a fairly arbitrary number, not Price is Right, you're just like, as long as you don't... yeah. I think, um, I think that, uh, I think that we

(43:53):
are making advances in the field of artificial intelligence at
an incredible rate, and that I think that rate is
likely to stay steady. Now, we should point out that these advances aren't on a path like Moore's Law, right, they are far below the path of Moore's Law.

(44:14):
But I do think that advances in various fields are
pointing the way to incredible achievements in artificial intelligence. And
I think that, by then, having a robot capable of having a social interaction that would be more or less indistinguishable from a typical person, it's ambitious, but I think achievable. Uh,

(44:39):
And if that year rolls around and it turns out that we had hit some obstacles we could not have anticipated, I'll demand money. I wanted my socially acceptable robot, but I will accept money in its place. That's almost always going to be the case, actually, Joe.

(45:01):
I like our range of optimism. Yeah. Yeah. And, you know, again, I think it's always important, you always have to acknowledge the fact that there are challenges. If
you ignore that there are challenges, you're pretty much guaranteed
to fail. You need to acknowledge the challenges so that
you can figure out ways to overcome them on your
quest to whatever your goal happens to be, in this case,

(45:21):
creating a socially intelligent robot. Joe, you came up with
the idea for this podcast. I think it was a
lot of fun. It was. It was entertaining and fun
to research the various types of robots that are in
this field. So thank you. Thank you, guys. Now, we also get tons of great suggestions from you listeners

(45:42):
out there. We want those to keep coming in because we really love the fact that you're guiding the conversation. You're part of the conversation. Keep up the great work. Send us your suggestions for future episodes, or even comments on past episodes, to FW Thinking at HowStuffWorks dot com, or drop us a line on Google Plus

(46:03):
or Twitter or Facebook. At Google Plus and Twitter, we are FW Thinking. At Facebook, just search FW Thinking in the search bar. We will pop right up. Leave us a message, and we'll talk to you again really soon. For more on this topic and the future of technology, visit FW Thinking dot com. Brought to you by Toyota.

(46:35):
Let's Go Places

Hosts And Creators

Jonathan Strickland

Joe McCormick

Lauren Vogelbaum
