
January 30, 2020 65 mins

Chances are, your face is already part of the database -- and AI is getting better and better at reading one face and finding it in the vast sea of digital images. What does this mean for the future of privacy? How did we get to this point in techno-history and where do we go from here? In this multi-episode look from Stuff to Blow Your Mind, Robert and Joe explore facial recognition technology.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Welcome to Stuff to Blow Your Mind, a production of iHeartRadio's How Stuff Works. Hey, welcome to Stuff to Blow Your Mind. My name is Robert Lamb and I'm Joe McCormick, and we're back today with part two of our exploration of facial recognition machinery. Last time,

(00:23):
of course, we talked about some tech biz world stuff that may be highly relevant to your life, especially in the near future. We talked about an artificial intelligence company that was recently profiled in The New York Times as selling a service to law enforcement that would use, well, they stole your face right off your head and scraped it from the Internet, and now they're

(00:46):
selling that to law enforcement as a tool, supposedly for identifying people with a high rate of accuracy, linking your anonymous face to all of the digital information that's out there about you. Long story short, we're all boned unless we actually put into place various laws and protections that either keep

(01:08):
these technologies from fully coming online or make sure that they are restricted from destroying the privacy, at least, of private individuals. And we'll talk more about that aspect of the subject, I think, in the next episode, when we get more into the modern technology. Today, we wanted to focus more on the biological world of facial recognition.

(01:31):
What's been learned in recent decades in psychology and neuroscience about the recognition of faces by animals like us. Right, because ultimately, I guess the counterargument is, hey, we're just trying to teach computers and phones to do what humans do and what animals can do, and that is look at a face and respond to it, identify the

(01:52):
individual behind that face. Right. And while that might be scary as a capability for the machine to have, it's something that's part of our survival history and an important part of our social lives. Oh yeah, because we go around every day, we're walking around, we're driving, we're, you know, in an exercise class, etcetera. And our brain is engaging in that exercise of, which human is that?

(02:15):
Do I know that human? Wait, I think I know that human. Wait, further analysis reveals I do not. It's a really funny thing, actually, when you notice how much your brain is just going, who's that? Is that...? Yeah, it's a ridiculous amount of your processing power that's eaten up with that narrative. Yeah. In fact,

(02:35):
I mean, that's why solitude is sometimes nice, because it just removes us from that exercise. Now, granted, you could have too much solitude, and I guess maybe the brain ends up using all that energy that it would use towards identifying or trying to identify strangers towards new and destructive things. But yeah, for the most part, it is an important part of making your way around

(02:55):
human society. Now, at the risk of sounding like I'm making excuses, I gotta say, man, this is a complicated subject. This is one of those where the deeper I dug into it, the more it just seemed like we were missing out on. So I think we just have to preface this by saying it's impossible for us to do the whole subject of biological facial recognition justice in this episode. We'll do our best in

(03:17):
a reasonable length of time. Yeah, I find it's easy to sort of glimpse the complexity of it when you engage in exercises like, say, attempting to draw a face that you know. And granted, that involves artistic ability and talent, and sometimes the talent is underdeveloped. But still, even without having that talent, I find that just the mental exercise of trying to figure out, okay,

(03:40):
if I was to draw Joe's face, wait, what does Joe look like again? Okay, I have to form the picture in my mind, and then I second-guess it. I'm like, is that really what Joe looks like? And it's even harder if I'm not physically in the room with that individual. Do you really have horns and pointed teeth like that? Yeah. So that's one side. But also,

(04:03):
just the idea of recalling faces. And granted, we're dragging in the complexity of memory when we're doing that, but I think it also hints at how difficult this is to really unwrap, what happens when we look at another face and identify it, much less when we recall it from memory. Now, we've discussed

(04:23):
face perception in the brain before, for example, in our episodes on face blindness and in an episode called The Doppelganger Network. And in these previous episodes, something that we definitely talked about was the history of how our understanding of facial recognition in the brain was illuminated by studying cases of people with malfunctions of facial recognition

(04:45):
in one way or another, primarily with the condition that we talked about in the face blindness episode. It is known as face blindness or prosopagnosia, which is a condition with a somewhat misleading name, if you go with face blindness, because people with face blindness, I think it would be best explained by saying they actually see

(05:05):
faces just fine. The real issue is that people with this condition have difficulty recognizing faces, not seeing them. Right. Like, one example I always come back to, and I think I've probably brought this up on the show before: there's an excellent episode of that television series Hannibal, about Hannibal Lecter, in which there's a character that also has face blindness, and when they behold Hannibal Lecter

(05:29):
in a key scene, all they see is, like, a featureless flesh mask, because it's like they can't see the face at all. Based on the material we've looked at and the accounts that we've read, that is not what face blindness is. The experience of face blindness is more akin to, say, when I look at some vegetation and

(05:50):
I ask myself, is that poison ivy? I know I've looked at a picture of poison ivy. I'm not sure if that's poison ivy or not. It's not like I don't see a plant. I just cannot identify it compared to other plants of similar form and function. And therefore I have to fall back on, okay, well, I'm gonna try and remember, what are the features? Leaves of three, let it be. How many leaves does this have?

(06:11):
And I start to have to engage in a different kind of cognitive exercise to try and make a positive identification. Yeah, I mean, I think it might be an even more complicated task than that. It's like, the people who have typical powers of facial recognition don't even recognize what a superpower this is that comes effortlessly. The point of comparison I've used before, and I think

(06:33):
I heard back from some people after this episode saying it was a good one, was the idea of holly bushes. Like, if you look at one holly bush, you can see it just fine. You can note all the colors and the shapes and all that. But imagine you're walking down the street and you happen to pass by a place where that same holly bush you looked at earlier has been, like, dug up and replanted somewhere else. Would you notice it was the same bush? I mean, it looks

(06:56):
like just another bush, right? Unless you were engaging in a far more tedious exercise of, like, counting the branches on the first bush, you know, really getting in there. Or, you know, marking it with a Sharpie, that sort of thing. Exactly. Yeah, because our brains are not specially wired to casually notice and remember minor visual differences in individual plants of the

(07:17):
same species. But it appears that typical human brains are specially wired to notice and remember minor visual differences in the hundreds of honestly pretty similar oblong orbs of meat and teeth that we interact with every day. Yeah, I mean, because a lot of faces are similar, you know. And that's often where we get that initial

(07:39):
mischaracterization, where we glance and we think we see somebody we know, but then we realize we don't. And occasionally you'll get that kind of, like, triple take moment where, oh, at first glance it seems right, and then at second glance it seems almost right, and then you realize, oh, this is just a very similar looking person to someone that I know, someone I've encountered before, but this is in fact a stranger. Do you have

(08:00):
that one person who you see doppelgangers of all the time, like one specific friend or celebrity that you always think you see somewhere? I guess. I mean, there are certain looks that are common, certain styles of dress that are common. I've got a very weird one. Do you want to

(08:22):
hear it? Okay. So for some reason I keep thinking that I see the American physician and geneticist Francis Collins everywhere, the guy who worked on the Human Genome Project. I've seen a few pictures. I've never met him. I've just seen a few pictures of him around, and I see, like, basically an older white guy with

(08:46):
a mustache and glasses, and I think, is that Francis Collins? I don't know why. Interesting. I mean, there are people that I'm on, like, heightened alertness for, mainly, like, for instance, your boss. You know, I think this is true of everyone for the most part. You don't want to run into your boss at, say,

(09:06):
the grocery store, because the grocery store, first of all, is an awkward place to run into anybody. I just ran into a coworker at the grocery store the other day. The worst. Great coworker, nothing against this person at all, but when I saw them, I was like, ah, yeah, because it's like, let's have this awkward exchange now, and let's do it again in one and a half minutes on the next aisle, and then let's do it

(09:28):
another time. And it's just a terrible exercise. And then, you know, your boss, that brings in additional complexities, no matter how wonderful your boss happens to be. So it results, at least in my weird mind, in me being, like, hyper alert. Is my boss here? Is a coworker here? I must hide if I see them, because I want to spare us both the awkwardness

(09:50):
of running into each other. And that's just around the office, right. So yeah, telling one human apart from another is obviously a relevant survival skill, so it's something that our primate brains developed a unique capacity for, especially by means of recognizing the visual features of the face. And in people who have face blindness or prosopagnosia, this recognition capacity

(10:13):
has broken down, often due to some kind of brain injury or lesion. And to the person with severe prosopagnosia, human faces can present a problem similar to what we were talking about earlier, like looking at a plant, you know, or looking at similar holly bushes. The person with prosopagnosia can see the face, but the faces don't really distinguish themselves from one another in memory

(10:35):
because of damage to the special recognition power. And as
a side note, there's another interesting fact about face blindness,
which is that people who have it also very often,
not always, but pretty often have a kind of location
blindness as well. They can become easily lost because they
don't remember visual characteristics of even familiar locations like the

(10:56):
building where they work or their house. Yes, I seem to recall Oliver Sacks writing about this. Totally, yes, the late author and neurologist, who had face blindness as well. Yeah, he did. He wrote about it autobiographically, I believe, in a piece for The New Yorker that was really good, that we talked about in our face blindness episode. So, historically, autopsies on the brains of

(11:19):
people with acquired prosopagnosia were very informative, because these brains almost always showed lesions on the bottom of a brain region known as the occipitotemporal cortex. And if you want to picture this, it's kind of the rear middle underside of the brain. So, you know, go down from

(11:39):
your temples and then back a little bit, and on the underside of the brain. This region of the brain is also known as the fusiform gyrus, and brain imaging like CT scans and MRI on living people also confirmed this correlation. Lesions on the fusiform gyrus, on the underside of the occipitotemporal cortex, were commonly associated with the inability to recognize faces. Meanwhile, real

(12:03):
time brain imaging like fMRI has also associated face processing with increased activity in this part of the brain. So if you look at a human face, your fusiform gyrus tends to get more blood flow, and for that reason, this region of the brain has come to be known as the fusiform face area. Now, it's really important to note that multiple networks of the brain are involved in

(12:25):
face perception, and we'll talk about some more studies about that as we go on, but it appears somehow the fusiform gyrus is especially important and that damage to it can tend to cause this. Another way that I wanted to complicate the idea we were talking about earlier, that you can usually see faces correctly with prosopagnosia but

(12:45):
have trouble recognizing them: a complication to that is, like, one study I remember seeing video of, where there was a patient who had an electrode implanted directly to stimulate his fusiform gyrus, and he was awake and could talk about it in real time when there was a current applied to this part of the brain. He said that his vision remained normal except for people's faces, and

(13:08):
when the current was applied, people's faces would tend to kind of metamorphose, like their features would appear to move around and stuff. Oh, interesting. Like, more so than just the experience of staring at somebody's face till the information, you know, loses kind of consistency? Oh, is that a thing you experience? Yeah, to a certain extent,

(13:28):
I mean, even like saying a word until it loses meaning. Yeah, yeah. I mean, it's kind of the effect, too, of looking in a mirror too long, you know, where you're not really presented with any new data. Like, you've absorbed all the data that is necessary to properly react and situate yourself in reality,
but then you keep feeding on the same informational source,

(13:51):
which, you know, is kind of like the road to madness, especially in situations of sensory deprivation. I've certainly had the experience where I stare at somebody's face, or stare at, say, a dog's face, long enough that it doesn't look any different, but it starts to decohere as, like, the seat of the soul, and instead

(14:12):
becomes textures of organs. Do dogs have faces? Of course they do. I don't know, I don't really... What is wrong with you? I don't think... I mean, I don't think of cats as having faces either. They just kind of have the fronts of heads, you know. Humans have faces. Where does the Cheshire Cat's grin live if not on its face? Well, it's a cartoon character.

(14:34):
Cartoon characters have faces because they're made at least partially in our likeness. I've just discovered something very sinister about you. I guess a pug kind of has a face. Definitely, as we've bred the pug enough to where it is as close to having a face as any dog can really claim to. Now,
there's another interesting fact about biological face perception. I think

(14:56):
I mentioned this in the last episode, but just to reiterate: the brain, it turns out, processes familiar versus unfamiliar faces very differently. Like, when a face is familiar, the brain is extremely good at recognizing it accurately, even under difficult viewing conditions: bad light, weird angles, partial views, and all that. Less familiar faces fail to be recognized under these same conditions.

(15:19):
So what's going on with the brain here? Well, just to reference one specific study, by Sofia M. Landi and Winrich A. Freiwald, published in Science in two thousand seventeen, called Two Areas for Familiar Face Recognition in the Primate Brain. The authors found, quote, familiar faces recruited two hitherto unknown face areas at anatomically conserved

(15:41):
locations within the perirhinal cortex and the temporal pole. So in fMRI, these two areas of the brain, but
not the rest of the face processing network, responded dramatically
to familiar faces emerging from a blur, but they didn't
show any special activity when presented with unfamiliar faces. So
sounds like the brain also recruits these special additional networks

(16:04):
in addition to the regular fusiform face area for identification
when it detects a more familiar face. Now, of course, historically, evolutionarily, those familiar faces would be the faces of individuals that are part of our society, that are part of a close-knit group, or, I guess, potentially enemies that you've encountered physically in the past.

(16:27):
But the modern media version of that is that we have all these additional faces as well, like all the actors we've memorized from watching TV and movies and surfing IMDb, for example. Yeah, well, I think one
thing that's important is that when a face is familiar,
it tends to come with a very complex suite of
emotional reactions that are implied by the face. You know,

(16:49):
you see somebody and you know them to be an adversary, or you know them to be a family member or friend, and you've got all these complex emotions that come out of this emotional response called familiarity. I'd imagine the brain's response to unfamiliar faces or less familiar faces tends to be more flat, probably, right? Like, there's less differentiation in the response. Right, right. And there's probably a

(17:13):
lot to be said, and this may be an area of separate study, like, what happens when you encounter faces in real life that you have thus far only encountered via media? You know, I mean, it's a different scenario, if nothing else because the lighting and the makeup are going to be different. And they're so short. Oh, that's a thing I'm surprised we've never looked into before. There's got to be research on that,

(17:34):
like, why you assume that movie stars are seven feet tall until you see them in person. Well, I think it's because they're standing on apple boxes a lot of the time. Now, there's another interesting debate in the history of face processing research that we've discussed on the show once before. I wasn't able to find a resolution here, but it is sort of a dispute

(17:55):
among these researchers. So, to look at a foundational kind of study here, there was a study published in Nature Neuroscience in two thousand by Isabel Gauthier et al. The background here was that research had already shown that people who had been trained to have an expertise in previously unfamiliar objects called greebles, we'll come back to them

(18:17):
in a second. People who had that expertise would recruit parts of the brain that are usually used in the processing of faces, such as the fusiform gyrus and the occipital lobe. And so greebles are these weird little chess-piece-like objects with abstract kind of goblin ears and spikes and stuff. I really like the greebles, you know.

(18:39):
I was reading about greebles, and greebles, also, by another definition, and it's pretty closely related, I guess, are the little bits of plastic glued to the tops of objects to make them seem more complex. The Star Destroyer. Yeah, the Star Destroyer, I guess, or what, the Death Star itself? A great example are the backgrounds on Mystery Science

(19:01):
Theater 3000, at least for a number of seasons there. If you look closely, you could recognize the everyday objects that were serving as greebles, such as, I think, a Millennium Falcon toy was back there as a greeble. But yeah, the more junk that is glued to it, the more it looks like it has a lot of surface complexity to it. The Borg cube is another

(19:24):
example of this. It's not just a cube, which of course it's a model ship, but then they have all these little bits on the outside of it, and it looks even more complicated. Yeah, it's got texture that gives it the illusion of functionality. In fact, it's just a surface that hides nothing real behind it. Yes, and a similar thing would be true of the greebles used in these studies. So, like, imagine

(19:45):
a little chess piece that's just got different kinds of little spikes and features poking out of it. And so you can train people on these things and say, you learn the name for this greeble versus that greeble, and they'll get names for, you know, a group of them over time. If people train with objects like this, they can learn the names of the different greebles. They look mostly indistinguishable if you haven't trained with them. Even

(20:08):
though, again, these are just made for the experiment. There's no, like, pre-existing greeble set. Right, right, but you can train people. Right. And so what previous research had found is that people who get trained on these greebles look at the greebles, and it seems to recruit the parts of the brain that are usually used for face processing. This study from two thousand I mentioned
extended this principle to other areas of visual expertise, including

(20:32):
birds and cars. So it found that when people had
acquired an expertise for birds and cars, the brain recruited
more of the face processing associated networks of the brain,
such as the fusiform gyrus, when looking at the objects they were experts in. Interesting. Okay, and so at some

(20:52):
point, this two thousand study has been used to argue that maybe the fusiform face area of the brain is more of a visual expertise center than a face center. But I think there's also a lot of evidence going the other way, that it has a natural and somewhat dedicated role in face perception. This other side, saying that it's naturally dedicated to faces, is known

(21:13):
as the domain specificity hypothesis. So there's stuff going back and forth. But just to cite another one that I thought was an interesting follow-up to that two thousand study: this one was by Yaoda Xu, called Revisiting the Role of the Fusiform Face Area in Visual Expertise, published in Cerebral Cortex in two thousand five. It followed up

(21:34):
from the two thousand study about birds and cars, asking a reasonable question. The author here says, okay, if people with expertise in birds and cars show increased activation of the FFA when they look at birds and cars specifically, what if this is, quote, due to experts taking advantage of the faceness of the stimuli. After all,

(21:54):
birds have faces, and three-quarter frontal views of cars resemble faces. Which was funny, but I was like, that's actually a good question. Well, I think the faces of cars came up on a previous episode. We were talking about, like, our experience as a driver of a car and identifying with cars, about, you know, the headlights and the grill. It looks like a face.

(22:16):
I don't know about birds having faces. I'm also, I find it hard to believe that I'm looking at a bird's face when I'm looking at the front of its head. So cats, no faces. Dogs, no faces. Birds, no faces. I mean, I guess a chimpanzee has a face. Gorillas, you know, I would give that. I would attribute faces to, you know, the primates, especially higher primates. I don't know about lesser

(22:40):
primates, though, you know, I have to think about that. Wow, this is blowing my mind right here. I mean, does a shark have a face? Yeah, the shark has got eyes, a mouth. Yeah, okay. Clams don't have faces. No, no, okay. Oysters don't have faces? All right. Well, I would be interested to hear from listeners about this, whether I'm alone in how I feel

(23:03):
about faces. We'll see. So, the author here mentions that the effects could also be due to attentional modulation, in other words, to differences in how experts versus non-experts paid attention to what they were looking at. That also seems like a reasonable explanation. And so they ultimately find here, quote, in this study, using both side view

(23:24):
car images that do not resemble faces and bird images
in an event-related fMRI design that minimizes attentional modulation, an expertise effect in the right FFA is observed in both car and bird experts, although a baseline bias makes the bird expertise effect less reliable. These results are consistent with those of Gauthier et al. and suggest

(23:47):
the involvement of the right FFA in processing non-face expertise visual stimuli. Okay, so this one seems to hold up the two thousand study. But I said that, you know, there was a dispute, and that it's complicated. I found a number of other sources saying that, you know, there's all this independent evidence that the brain has a dedicated role, that this region of the brain

(24:07):
or these networks in the brain have dedicated roles in face perception, the domain specificity hypothesis. And other studies have found conflicting results and argued against the expertise theory. For example, there was one in two thousand seven in Cognition by Rachel Robbins and Elinor McKone that found, basically, dog experts showed no special face-like processing for dogs in

(24:29):
non-face identification tasks. Another thing I was reading is some researchers arguing that the engagement of the fusiform face area in areas of visual expertise was still somehow maybe just an artifact of how attention was being stimulated in those test conditions. So, I'm not sure if the opinion of neuroscientists has shifted largely to one side or

(24:51):
the other of this debate in the years since. It does seem like there's a very solid consensus that at least some inherent domain specificity exists for the FFA, that at least in some way it is naturally dedicated to faces. But, at least as far as I could tell, it could be possible to split the difference here. Like, maybe it could be that there's a face perception network of

(25:13):
the brain shaped by evolution quite specifically to recognize faces, and maybe it also just happens to be a good part of the brain to recruit for minute visual discrimination in other areas that the brain becomes highly adapted to through training. Yeah, either way you shake it, I mean, the take-home is that faces are incredibly important, right, so,

(25:34):
and we see that reflected in the neural machinery devoted to it. I think that's exactly right. It's a good point. So, either side of this debate, whichever one is right, it's either that we've got this inbuilt recognition capacity for faces that makes faces uniquely special, or we've got a visual expertise center that in most people becomes most highly

(25:55):
attuned at looking at faces. And the only things that really rival that engagement of the visual expertise center is, like, when you get super into a subject, like you're obsessed with birds, right, and it becomes the same sort of visual experience too. Where, you know, say it's airplanes, where you're like, I wonder what kind of airplane that is? You turn to your buddy who's an aviation geek, and, with

(26:17):
just a glance, they're like, oh yeah, that's a Spitfire. In the same way that you might turn and say, oh yeah, that's Doug. Right. Yeah, when somebody's got visual expertise and you ask them to recognize something, you notice how they emotionally light up the same way that, like, you or I do when we suddenly recognize an actor in a B movie. You see that comparison? Yeah, yeah, exactly. I mean,

(26:40):
it's like, this is what I've been training for. Yeah, that's Robert Englund out of the Freddy makeup. Is that a more generalized reaction? Is that not just us? Like, people don't just look around for people who have familiar faces and recognize them, but get really excited when they suddenly recognize somebody? Yeah, I think so. I mean,

(27:02):
I think I see it in other people, so I presume that it is part of the, you know, normal experience, or the traditional experience. Because, I guess, if you were to apply it back to, again, like, a small society model, it would be recognizing a friend. Right. Like, on some level, the actor that we associate with

(27:23):
films that we like, we value them on some level. It's almost like they are a friend, and spotting them in another film is like spotting a friend, again within the context of films. It might be different if you saw him on the street, because I would be like, oh, it's that actor. That's weird, that's that actor from those B movies I've seen. You know, and then I'll be

(27:44):
thinking about them covered in blood or something. But within the context of the films, it's like, oh, my friend is in this. I don't remember their name, but they were in, you know, a whole bunch of old British TV shows, and I feel, you know, the arousal of recognizing them. Well, I think there is some evidence that there are extreme similarities in the way the brain reacts to images of

(28:05):
celebrities and the way the brain reacts to images of known friends. I mean, there's a lot of the same stuff going on. So I think, when we see the same face over and over again on a TV, the brain sort of treats it as if we're seeing the same face over and over again next to the fire. Yeah, really. I mean, that's why they called the television show Friends. That's why people watched

(28:27):
it religiously. I mean, there are articles today about, like, how important the Netflix deal was, to have Friends on Netflix, because the TV show and the concept of friends both, I think they're the same. I think, based on the way they say people consume the show, it is, like, the familiarity of it. It is encountering these same people over and over again. It is

(28:49):
like they are your friends. And I mean, I never really watched that particular show, but I remember having a similar relationship with, I think it was NewsRadio, back in the day. I would watch it when I was in college, and it's like I could turn it on and, in essence, they were like my TV friends. There is, I think, a lot to that. I think that goes on with, say, The

(29:11):
Office today. We read about, like, how much people stream The Office, and I think a lot of it's not even, I mean, they're not even, like, trying to see how the plot plays out anymore. It might not even necessarily be about the comedy. It's just, you know, it's a very comfortable, cozy kind of place you can go, with familiar faces. Of course, we'll have to leave

(29:32):
the details of that to The Journal of Sitcom Studies, to be reviewed later on. Maybe we need to take a break? Let's do it. Alright, we're back. We're talking about facial recognition. More specifically, we're talking about the facial recognition that occurs inside the human brain. Yeah, and in the brains of other animals, though there are

(29:54):
some obvious parallels there. So we discussed at the beginning how this story just gets more and more complicated the more you look at it. And I want to complicate things further with a really interesting article that I was reading in the journal Nature. In their news section, they had a news feature by a writer named Alison Abbott, which was about the work

(30:14):
of the Caltech neuroscientist Doris Tsao, who studies facial recognition. And so I'll try to give a brief summary of this. So basically, in the late two thousands, Tsao and her colleagues were doing repeated brain imaging and targeted electrode stimulation studies on the brains of macaques, a type of

(30:34):
Old World monkey, which allowed them to identify six different patches of a part of the brain called the inferior temporal cortex, on each side of the macaque brain, which would react specifically when the monkey saw a face of a human or another monkey, but not when looking at other objects like a spoon. And stimulation of one of

(30:57):
these patches would cause activation in all the others. They
were sort of chained together for simultaneous neural activity. And
what the researchers learned over time was that individual cells
in individual patches tended to be specialized to specific parts
of faces. So one spot in this matrix would respond

(31:18):
by firing faster consistently based on how far apart the eyes were. Like, say, if the eyes are farther apart, it fires faster; if they're closer together, it fires slower. And then others would respond specifically to changes in other features, like the size of the nose or of the irises. And they used this knowledge to create what has been

(31:40):
called now a face code, a kind of top-level system for sorting faces along these major dimensions that the brain responds to in a specialized way. So, you know, kind of like if you're creating a character in a wrestling video game, you've got, like, maybe sixty different values that you can adjust the sliders on. And so it

(32:02):
turns out that the brain, at least according to this research, appears to have individual neurons dedicated to each of those sliders. So, like, as the slider goes from zero to one hundred, that individual neuron starts to fire faster and faster. So you can see these, like, coded regions of the brain that map to individual elements

(32:24):
within the face. Now, an interesting thing here was that the outermost cells in the cortex seemed to respond to the most obvious stimuli, such as, like, face shape, with, you know, things like distance between the eyes or length of the mouth, whereas deeper cells seemed to focus more on more minute data, like things about the texture of

(32:46):
the skin and stuff. Like, I guess to some extent that lines up with our experience of glimpsing somebody and then maybe doing that second look, or that more detailed look, to follow up on the initial impression. Yes, yeah, I think that's exactly right. But then, to read a quote here, quote, The research seemed to point to a

(33:07):
mechanism by which individual cells in the cortex interpret increasingly complex visual information until at the deepest points individual cells code for particular people. And this goes with a finding by a researcher named Rodrigo Quian Quiroga, who earlier in the two thousands discovered something that was called in the

(33:29):
media Jennifer Aniston cells, to come back to Friends, because these were literally single neurons that appeared to respond to pictures of specific famous or familiar people. And it was also found that, so, if you have a cell for Jennifer Aniston in your brain, the Jennifer Aniston cell would respond to the evocation of a concept

(33:52):
of that person as well as to the picture. So it would respond to seeing a picture of Jennifer Aniston, or to, like, seeing her name written, or even to seeing lists of movies that she appeared in. And am I correct in remembering Jennifer Aniston was one of the Friends, right? Yes. Just wanted to be sure on that, that I wasn't making that up. Okay, you're talking

(34:14):
about Friends, like, you know... Well, I was pretty sure, but I wasn't one hundred percent sure. Well, they could just as easily have been David Schwimmer cells. Yeah. I mean, the thing is, I can definitely picture her in my mind, and I can picture David Schwimmer, like, they're just coded in there. I mean, there's no denying their faces.
It does make me wonder if you could conceivably, like,

(34:36):
knowing about these cells, the Jennifer Aniston cells, could you remove Jennifer Aniston from your mind? Oh, I wonder. Yeah, I don't know exactly how that works, theoretically speaking. Obviously not, like, a do-it-yourself-at-home kind of a thing. But I wonder if that would be an interesting sort of Eternal Sunshine of the Spotless Mind

(34:57):
kind of spin-off idea, because of course the nature of that film was, like, completely removing a person or experience from the brain, like, wholesale, memories and all. But what if you could only remove the face of, say, an individual who brought you stress or grief? Like, what would that alone do? How would that impact the other information that is there, if it itself is untouched?

(35:20):
I don't know. I mean, as usual, the things inside the brain turn out to have a much more complicated relationship to our, you know, our phenomenal experience of the world and our internal experience of thoughts than would be implied by, like, a single cell change in the brain having this clear effect on life. But I

(35:42):
don't know, I mean, I would suspect that changing that one cell would not entirely eliminate this person from the brain, because you have complex networks of memories and emotions that will involve familiar people and celebrities. Yeah. You know, another thing, this is not something we came across in the study, but it also makes me wonder about the faces of individuals in literature that one might

(36:06):
read, like, when you've never seen them. But on some level it is probably not like the detailed version of a face, unless you're doing an exercise that I would do almost religiously as a young reader and still fall back on occasionally, and that is subbing an actor into a role, casting the book. I
would do that all the time when I was a kid,

(36:28):
and again, I'll still sometimes fall into it today. But then other times there will be a kind of, there will be a face, or an almost-face, in my head. Maybe it's not super detailed, it's not as detailed as a real person, but it's there, kind of floating around in my head, and when I think of that character, that face emerges. I think we've missed the time window for this, but I'm now recasting Dune with

(36:49):
the cast of Friends. Right, so Duke Leto is David Schwimmer, and, let's see, Joey is Paul Atreides? Right? I guess so? No. Actually, Hollywood people, if you're listening, here's my pitch: redo The Punisher starring David Schwimmer as the Punisher. Well, I mean, Schwimmer has been good in things, so

(37:11):
I guess I can imagine him playing the Punisher. I'll go ahead and go that far. There's only one way to know. So eventually, after doing all this research about these sort of, like, neurons or patches of the brain that are coding for individual variables that can vary with the human face, Tsao and her colleagues began researching broader variables for visual recognition of objects that worked

(37:33):
very much along the same lines as the face variable neurons. So, some examples that were cited in Abbott's feature on this: neurons that appear to respond specifically to, quote, whether something is spiky like a camera tripod or stubby like a USB stick. So you could have kind of a slider in the brain that corresponds with a specific tiny

(37:54):
patch, about whether it's got spikes or whether it's kind of rounded or something, the kiki bouba thing. But then other things might correspond to whether something is animate, like a cat, or inanimate, like a spoon. And there can be things in between, like maybe a washing machine is a little more animate than a spoon,

(38:16):
but less animate than a cat. And again, this would be expressed by how rapidly that neuron fires when viewing that particular stimulus. But Tsao and her colleagues got to the point where they could predict the appearance of an object that a subject was looking at with reasonable accuracy based on sampling the firing rate of just about

(38:38):
four hundred neurons. So you can get all these different variables just by looking at how fast those neurons are firing.
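To make the slider idea concrete, here is a minimal Python sketch of a linear "face code" of the sort described here: each model neuron's firing rate rises or falls with one projection of the face's feature values, and the features can then be read back out from the firing rates alone. The feature count, baseline rate, and random axes are all invented for illustration; this is a toy model under those assumptions, not the lab's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 6    # toy "sliders": eye spacing, nose size, and so on
n_neurons = 400   # roughly the number of sampled cells mentioned above

# Each model neuron projects the face's feature vector onto its own axis,
# so its rate changes linearly as any one slider moves.
axes = rng.normal(size=(n_neurons, n_features))
baseline = 10.0   # arbitrary baseline firing rate

def firing_rates(face_features):
    """Population firing rates for one face, linear in the sliders."""
    return baseline + axes @ face_features

# Encode one face, then recover its features from the rates alone.
true_face = rng.normal(size=n_features)
rates = firing_rates(true_face)
decoded, *_ = np.linalg.lstsq(axes, rates - baseline, rcond=None)
print(np.allclose(decoded, true_face))  # True: a linear code inverts cleanly
```

The point of the sketch is just that a population whose firing rates are linear in the feature values carries enough information to read the features, and hence an approximate face, back out.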
And this suggests that there could be a feature-based coding system that may operate across the whole brain. And so, taking away from this research, Tsao is talking to the author here, and she says, you know, that the brain is not like, quote, a sequence of passive sieves fishing out faces, food or ducks. Instead, she says, quote, it's a hallucinating engine that is generating

(39:01):
a version of reality based on the current best internal model of the world. And I think this is a
really important and interesting way to think about visual perception

(39:21):
and recognition. There's so much going on in any image
of the world that you look at. It seems almost
impossible that your brain is actually registering all the information constantly,
simultaneously and updating based on you know, what is actually
taking place in the world. It seems more like your

(39:43):
brain is kind of creating an illusion that you are looking at the world, and then pretty frequently updating little key bits of data about that illusion. Yeah. I mean, it's kind of like, in my experience, to bring us back to Dungeons and Dragons, it's like playing Dungeons and Dragons and telling yourself, yeah, I know what all the rules are, but then when individual rules come up, you're like, actually,

(40:04):
I need to check that rule again. Maybe I don't know that rule. That's kind of what it's like to walk around the world and take in visual sense data. But I love this idea of the hallucinating engine of the brain, because this description matches up with so much of what we've discussed on the show: that your memories are not reality,

(40:27):
that your perception is not reality, that your feelings are not reality. Not to say that all three of those things are lies. They are based on reality, but they themselves are not accurate. They're not a one-to-one reflection of the world. They are at best a distorted reflection of the absolute reality.

(40:51):
And even then, it's hard to even say what that is, right? I mean, your vision is not a camera feed. It is not recording passively, objectively, everything that happens in front of your face. It is instead sort of a hallucination that is quite frequently updated with little bits of data. Right. And that's without

(41:12):
even getting into discussions of how our vision and other senses match up against the organic senses of various other creatures in our world, things with far sharper vision that can see in different wavelengths, things with far sharper hearing and scent, that therefore live in what I've often seen described as, like, a different sensory world.

(41:34):
But you can't walk around the world reminding yourself of that. Ultimately, the version that you form in your head has to be your working model of reality. And, you know, otherwise you just go mad. Yeah, there's a really interesting thing that gets pursued at the end of Abbott's article here, where she talks about the idea

(41:55):
of, like, what's the best model for sort of the whole-brain visual perception of what you're seeing in front of you. And she makes reference to this idea of predictive processing. Quote: The brain operates by predicting how its immediate surroundings will change millisecond by millisecond and comparing that prediction with the information it receives through the

(42:17):
various senses. It uses any mismatch or prediction error to update its model of the world.
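As a purely illustrative aside, that predictive processing loop can be sketched in a few lines of Python: hold an internal estimate, compare it with each incoming observation, and nudge the estimate by a fraction of the prediction error. The signal values and learning rate below are made up for the sake of the example.

```python
def predictive_update(estimate, observation, learning_rate=0.3):
    """Move the internal model toward the observation by a fraction
    of the prediction error, as in the description above."""
    prediction_error = observation - estimate
    return estimate + learning_rate * prediction_error

estimate = 0.0
for observation in [1.0, 1.2, 0.9, 1.1, 1.0]:  # toy stream of sense data
    estimate = predictive_update(estimate, observation)
    print(round(estimate, 3))  # the model drifts toward what the senses report
```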
So maybe, you know, you're kind of simultaneously simulating the world in front of you at the same time you're watching it, and the watching could be there to note little ways in which your prediction is turning out wrong and then trying

(42:37):
to fix it. Right, or being hypersensitive to the ways that your sensory input matches your simulation, which can be a great way of just wandering into delusion, you know, or living in a state of paranoia, because you're just looking for the sense data that will back up the version of reality

(42:57):
that you have stored in your mind, that you're cultivating in your mind. Yeah. Absolutely. There's one more point of comparison that I thought was interesting, because the article makes reference to optical illusions. You know, there's this question of, so, when you look at an optical illusion, one of those things that has, like, a double-image valence, it's the duck and it's the rabbit, you don't see

(43:18):
the duck and the rabbit halfway. You know, you don't see it both at the same time, or halfway between. I mean, most people tend to see fully duck, and then there's a flip in the brain, and the brain readjusts, and then you see fully rabbit. Isn't that interesting? Like, what's causing that flip? Nothing has changed in terms of what you're looking at, but suddenly

(43:38):
the brain undergoes some kind of internal change, and it has completely reversed what you perceive in front of you. Like, another example would be when the accidental face in a design is pointed out to you, and then you cannot unsee it. Or, like, one for me is the coat hanger that

(44:00):
looks like a drunken octopus that wants to box. Wait a second. Yes, yes, the one on the back of a door that's got two little prongs. Yes. Before, it was just a coat hanger, but then once that was pointed out, that's all I can see. Like, that's how it's coded in my brain. It's the fighting octopus. Yeah. Or if you look at Edvard Munch's The Scream, has

(44:20):
anybody ever told you to look at it and see if you can see the springer spaniel? No, I don't think I've done that exercise. Okay, look at the head on The Scream next time and just think springer spaniel, and then you won't be able to unsee it. So, there's another interesting development about facial recognition in the brain that I was reading about in an article by a couple of researchers named Anna K. Bobak and Sarah Bate,

(44:43):
who at the time were conducting research on face perception at Bournemouth University in England. And so they point out that one aspect of a typical human brain's face perception is the ability to engage what they call a configural or holistic strategy for visual processing, meaning that

(45:04):
these human brains are able to sort of see faces
as a whole rather than examining the independent features of
a face one at a time. And I've actually read
there was something similar going on with visual expertise that
like when people have visual expertise for cars, they're much
better able to get an idea of what a car
is with a holistic, sort of one glance view rather

(45:27):
than having to look at individual parts of the car.
And this ties into something I've read about people with prosopagnosia.
Oliver Sacks actually describes this process of a sort of hack or workaround for their condition that basically involves examining the elements of a face for special identifying marks or features, the way you might look for, you know,

(45:48):
a known dent or bumper sticker to identify a familiar car from others of the same model and color. Or a particular hairstyle, I think, is sometimes brought up. Right, or style or mode of dress. So Bobak and Bate describe some research they conducted on people with typical face perception, versus people with prosopagnosia, versus people sometimes known

(46:09):
as super recognizers, who are sort of at the opposite end from prosopagnosia. They have unusually high accuracy in remembering and recognizing faces, even for people they haven't seen in a long time. And the authors here write that they used
eye tracking software to see where these different groups of
people tended to look when they were examining a human face,

(46:31):
and there were some interesting differences here. So they found
on average, people with typical face perception would tend to focus basically around the eyes most when trying to identify a face. And they note some previous research on people with acquired prosopagnosia, including a two thousand eight study from the Journal of Neuropsychology by Orban de Xivry et al.,

(46:52):
and it found that people with face blindness tend to
look less at the eyes and at the upper area
of the face, and tend to look more at lower
regions of the face like the mouth when trying to
identify faces. And the authors note that their recent research again showed people with prosopagnosia were looking less at
the eyes than typical subjects. Meanwhile, they note that super

(47:14):
recognizers in their studies tended to, on average, focus more on the nose, which was kind of strange, but they had an idea about that. So, is it something special about the nose itself as a feature of the face, or is it, as they kind of propose, more of a diagnostic center of the gaze that gravitates

(47:37):
towards the center of the face when we are better
at getting a holistic sense of a face from a
glance rather than trying to examine its individual features one
by one. And so the authors here argued that it
is the center of the face, rather than the eyes
in particular or any other feature that optimally engages the
brain's facial recognition systems. Interesting. I mean, one thing that

(47:59):
it brings to mind: it's kind of the old adage, right, to look someone in the eyes, to sort of engage in a more direct theory of mind with them, right, like you're having a mind meld moment. Whereas it seems like, if you're looking at someone's nose, I mean, that reminds me of exercises people suggest, oh,

(48:19):
you know, if you want to cut down on anxiety during, like, an interview, look at the person's forehead, you know, don't look at them in the eyes. So it feels like a holistic view of the face is also an impersonal view of the face. It feels, at least to me, like if
you're looking somebody in the eyes, you're also engaging in
consideration of their mind, which might conceivably be distracting from

(48:43):
the identification process. Right. So maybe it's better to look at the nose. Like, don't think of this person as a person, think of them as a face that matches up with a name. They didn't mention that, and I hadn't thought about that, but I think that's a very interesting point. Yes, that, like, perhaps by focusing a little bit less directly on the eyes, you are somewhat depersonalizing the experience of the face recognition, and thus

(49:07):
you can cut out some emotional distraction. Now, maybe that's just my individual, like, social anxiety speaking there. You know, I don't know, I mean, we don't know that's the case. That's just, like, an interesting possibility. Yeah, because, well, I mean, it reminds me how in the last episode we were talking about technology for facial recognition, of course, being used by law enforcement. And one of the things the authors note in this article here is

(49:29):
that human super recognizers are in many places now being directly sought out and employed by law enforcement. So, like, you know, to be able to look at video feeds and try to match people with known photos of wanted criminals and stuff. Again, that kind of impersonal recognizing thing, especially, you know, in a law enforcement context,

(49:51):
seems like it's possible it could be aided if you are seeing less of a person's humanity when looking at their face and just literally trying to make the most accurate match of features. Has this been exploited in a network crime-solving series yet? Oh, like the Dexter of super recognizers? Yeah. I would be shocked if it has not happened, or at least been pitched

(50:12):
to a major studio. Super Recognizer. I'd give it an episode. But, you know, this also makes me think about the different types of machine face recognition systems out there, of which, of course, you know, we know there are many. Some are more oriented around specific details of the face. For example, I've seen the idea of distance between the eyes. Again,

(50:34):
this is something that humans and macaques apparently use as a major metric for face evaluation, but it's also a common thing used by machines. But others probably take a more holistic approach. I'm not an expert in AI, but I imagine that the neural-net-based facial recognition algorithms trained on wild photo data might be more reasonably

(50:55):
compared to the biological process of seeing the face as a whole.
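To illustrate that distinction, here is a hypothetical Python sketch of the two styles of machine matching: a feature-based comparison built on one explicit measurement (eye spacing), and a holistic comparison built on whole-face embedding vectors of the kind a trained neural network would produce. The landmark coordinates and vectors below are invented; a real system would compute them from images.

```python
import numpy as np

def eye_distance_gap(landmarks_a, landmarks_b):
    """Feature-based style: compare a single explicit measurement."""
    d_a = np.linalg.norm(landmarks_a["left_eye"] - landmarks_a["right_eye"])
    d_b = np.linalg.norm(landmarks_b["left_eye"] - landmarks_b["right_eye"])
    return abs(d_a - d_b)  # smaller gap = more similar on this one feature

def embedding_similarity(emb_a, emb_b):
    """Holistic style: cosine similarity between whole-face embeddings."""
    return float(emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))

# Toy usage with invented data:
face_a = {"left_eye": np.array([30.0, 40.0]), "right_eye": np.array([70.0, 40.0])}
face_b = {"left_eye": np.array([32.0, 41.0]), "right_eye": np.array([68.0, 41.0])}
print(eye_distance_gap(face_a, face_b))    # 4.0 pixels of difference
emb_a, emb_b = np.random.default_rng(2).normal(size=(2, 128))
print(embedding_similarity(emb_a, emb_b))  # near 0 for unrelated vectors
```

The feature-based function throws away everything but one measurement, while the embedding compares the face as a single learned whole, which is roughly the contrast being drawn here with the biology.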
That makes sense. All right, on that note, we're gonna take a quick break, but we'll be right back. Alright, we're back. So, we've been talking about facial recognition largely
in this episode. We've been talking about the complexity of

(51:16):
organic facial recognition, the kind of facial recognition that's going on inside the human brain and in the brains of animals, as opposed to that going on with AI right now. One of the things I know we talked about in the last episode, among our many concerns about artificial intelligence for facial recognition, were the

(51:37):
various types of bias that have been documented to show up in computer-based AI for facial recognition. Yeah, specifically, we're talking about issues involving problems with these AI programs recognizing Black and/or Asian faces. This is interesting because it also forces us to confront not only racial bias in the creation of programs and AI,

(52:00):
it also mirrors our organic issues with facial recognition for races other than our own. There's been a lot written about this topic, there's been a number of studies, but just last year, from July, there was an article in The Guardian titled Perception of Other Races Look Alike Rooted in Visual Process, Says Study.

(52:23):
And this looks at a Stanford University study on this oft-researched issue. One point that the researchers stressed was something we were just talking about earlier: what our human senses pick up on is not necessarily an accurate representation of reality, and, as we've discussed before, there's a lot of consolidation involved, the loose stitching of things together based on actual perceived details, on memories, on

(52:46):
preconceived notions, on fears, suggestions, and more. And this is a particular MRI-assisted study. It only involved twenty white individuals evaluating black faces and white faces, but it showed greater activation of face recognition regions in the brain when a white test subject looked at white faces compared to black faces. Now,

(53:10):
dissimilar faces, that being, you know, faces that, no matter what the race of the individual might be, stand out more, dissimilar faces resulted in a spike, but apparently the spike was still greater in cases of dissimilar white faces. Now, to be clear, this is not a case of, oh, we as humans do this because, look,

(53:31):
here's our brains doing it. You know, a lot was not taken into account with the study, such as the social backgrounds of the individuals and all. As always, one assumes an interplay of neural software and sociocultural conditioning. But above all, they wanted to drive home: it's also not proof that racial prejudice is to be dismissed as being just

(53:53):
a neurological reality. Well, why would that mean it should
be dismissed? I mean, even if it is a neurological reality,
that doesn't make it okay, right? Absolutely. Here's the
quote from Dr. Brent Hughes, a co-author of
the paper, from the University of California, Riverside: quote,
"Individuals should not be let off the hook for their
prejudicial attitudes just because we see evidence of race biases

(54:16):
in perception. To the contrary, these race biases in perception
are malleable and subject to individual motivations and goals." So
again, coming back to the interplay between software and hardware.
I do think there's a lot to
contemplate here: the way our organic, and currently our
technological, facial recognition systems are subject to racial bias. But

(54:37):
then, in both cases, they are malleable. There are ways
to tweak and improve, just as there's room
to allow these imperfect perceptions of reality to color what
we believe about the world. Probably one of the most
important things is for people not to be lulled in
by the misperception that because something is a computer algorithm,
or because it's a machine, it's impossible for it

(54:57):
to have a bias. I mean, clearly, we just know
that that's not true. Obviously, the machine isn't
motivated emotionally; the machine doesn't, say, hate people or care
about people in whatever way. But it's guided by rules
that are created by training on data sets
from the real world, which might incorporate racial biases,
or it can be trained, you know, on explicit rules

(55:19):
generated by people that, whether through malice or just by mistake,
have some kind of racial bias incorporated in them. Yeah.
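As a sketch of how that kind of data-set bias gets caught in practice, one standard check is to score a model's accuracy separately per demographic group rather than only in aggregate. The group labels and predictions below are hypothetical stand-ins for a labeled evaluation set.

```python
# A toy sketch of a per-group accuracy audit for a face matcher.
# The groups and predictions below are hypothetical illustrations.
from collections import defaultdict

def per_group_accuracy(results):
    """results: iterable of (group, predicted_id, true_id) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, truth in results:
        total[group] += 1
        correct[group] += int(predicted == truth)
    return {g: correct[g] / total[g] for g in total}

# An aggregate score can look fine while one group's row is far worse,
# e.g. when that group is under-represented in the training data:
print(per_group_accuracy([
    ("group_a", "id1", "id1"), ("group_a", "id2", "id2"),
    ("group_b", "id3", "id4"), ("group_b", "id5", "id5"),
]))  # {'group_a': 1.0, 'group_b': 0.5}
```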
And on the human side of things, this is
only a glimpse at very broad facial perception, because also
consider how attuned to facial expressions we are, and how
this too can be biased. I was looking at "What's

(55:39):
in a Face: How Face Gender and Current Affect Influence
Perceived Emotion," from 2016, in
Frontiers in Psychology, and the findings included a bias
to perceive male faces as more negative, while the perceptions
of female faces depended on current mood. So, to summarize
both cases: a male face that an individual perceives

(56:01):
needs to be happier-looking, compared to a female face,
to elicit an interpretation of even just neutral emotion. So
male faces in general are interpreted as having more
negative emotion in them. Yes. And then, meanwhile, the happier
a given male observer is, the more inclined they are
to see a female's face as happy, which

(56:22):
is kind of... but that comes back again to,
like, what is my emotional state
that is then affecting the emotional state I perceive in
other people? And all of this is adding to my
perception of what's going on in reality. Oh, this is
the classic, like, "Oh yeah, she thought the joke was funny.
I was laughing." Yeah. Now, this is just one study

(56:44):
I'm referring to here, and it shouldn't be taken as the
gold standard or anything. But it does provide a glimpse
at just how complex and unreal our perception
of reality is. And I think, you know, it makes
sense, because we are such social creatures that
the social reality of a human is of tremendous importance.
But of course, reading the social reality of a person
is rooted in various conscious and subconscious processes. It also

(57:08):
depends on theory of mind. It's highly susceptible to
bias based on conditioning, culture, and more. Now,
currently, mostly what we've talked about with
AI and facial recognition software concerns just the
measurements of the face, the appearance of the face, and
not so much emotional states. But that's not
to say that the programmers of these AI

(57:31):
are not interested in reading that information as well. Or
at least the marketers, right? Well, no, I mean,
I guess both, because, yeah, to do a little more
on faces and emotion: I think some of the same
problems with human perception of emotion in other people's faces
are translated now to technology, except made even more
blunt and inaccurate. So, many technology companies in recent years,

(57:55):
including IBM, Amazon, Google, Microsoft, et cetera, have all been advertising
AI that can read human emotions by inferring them from
facial expressions. And there are some cases where even companies
that have shied away from doing facial recognition, as in,
like, "you know, this is Jeff's face," have still said it's
okay to try to just look at a face anonymously

(58:18):
and judge what its emotional state is. And this is
being advertised as useful for evaluating candidates in a job interview,
or analyzing the emotional states of customers in a retail environment
(you know, you want happy customers), or assessing potential threats
from people trying to conceal anger: all kinds of stuff.
I even saw one that was trying to sell it as,
like, a driving-safety feature: you know, "I'm
detecting road rage on the face." So, just one example:
an August 2019 piece I was reading in Wired
discussed Amazon's image-analysis software known as Rekognition, with a
K. Yeah, just the spelling of that is terrible.
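For a sense of what the developer side of this looks like, here's a minimal sketch of querying Rekognition's emotion attributes with the AWS SDK for Python (boto3). It assumes AWS credentials are already configured, and the image file name is hypothetical.

```python
# A minimal sketch of asking Amazon Rekognition for per-face "emotions."
# Assumes configured AWS credentials; face.jpg is a hypothetical input.
import boto3

client = boto3.client("rekognition")

with open("face.jpg", "rb") as f:
    image_bytes = f.read()

response = client.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],  # "ALL" includes emotions, not just bounding boxes
)

for face in response["FaceDetails"]:
    # Rekognition returns a confidence score for each emotion label.
    for emotion in sorted(face["Emotions"], key=lambda e: -e["Confidence"]):
        print(f"{emotion['Type']}: {emotion['Confidence']:.1f}%")
```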

(58:59):
But, so, at the time, this was claiming to
be able to assess emotions in faces, including happiness, sadness, anger, surprise, disgust, calmness, confusion,
and then the newest one they had just added to
the list when this article came out was fear. Okay,
well, that's a big one. Was it last? I don't know,

(59:20):
that's the one they brought online last. That makes me
think of the end of Starship Troopers: "It's afraid." So,
what does the scientific research tell us about how well
these algorithms should be expected to do in reading emotions?
I was looking at a paper by Lisa Feldman Barrett,
Ralph Adolphs, Stacy Marsella, Aleix Martinez, and Seth D.

(59:44):
Pollak, in Psychological Science in the Public Interest, published in
2019, called "Emotional Expressions Reconsidered: Challenges to Inferring Emotion
From Human Facial Movements." And they looked at, you know,
a ton of studies, I think over a thousand.
It was a huge review, and they conclude that the
whole premise on which these algorithms are based is close

(01:00:06):
to worthless because, shocker, there is a little bit of
information about emotional states encoded in human faces, but it's
not nearly enough to give you a very accurate picture
of internal states. People's faces reflect all kinds of strange, complicated,
fleeting emotions back and forth. They might be faking emotions

(01:00:26):
with their faces. And even when humans read each
other's emotions, which, as we were just talking about, you know,
they're not always totally good at doing, when humans
do it, they incorporate way more than just the face.
They incorporate body language, tone, all kinds of things to
read emotion. And the AIs aren't even
doing that; they're just going off the face. And the

(01:00:47):
researchers say that, you know, the evidence indicates that looking
at the face alone is completely insufficient to get an
accurate picture of internal emotional states. And it's kind of
dangerous to suggest that you can get an accurate picture
of emotional states just with facial analysis. To read a quote:
"Scientists agree that facial movements convey a range of information

(01:01:09):
and are important for social communication, emotional or otherwise, but our
review suggests an urgent need for research that examines how
people actually move their faces to express emotions and other
social information in the variety of contexts that make up
everyday life, as well as a careful study of the
mechanisms by which people perceive instances of emotion in one another."

(01:01:32):
So, the way to read their conclusion is: these
facial recognition algorithms might be able to predict emotion
at a rate slightly better than chance based on faces.
You know, so they read your face, and they see
a smile on it, and they say, "This person is happy."
And that's a little bit better than guessing your emotional
state at random, but not a lot better.
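To make "better than chance, but not a lot better" concrete, here's a toy simulation with made-up numbers (not the study's data): a smile-equals-happy rule beats a coin flip, yet under these assumptions it can't even beat ignoring the face entirely.

```python
# A toy simulation (assumed numbers, not from the paper) of why a
# face-only "smile means happy" rule is a weak read on internal state.
import random

random.seed(0)

P_HAPPY = 0.25           # assumed base rate of actually feeling happy
P_SMILE_IF_HAPPY = 0.70  # assumed: happy people often smile
P_SMILE_IF_NOT = 0.30    # assumed: polite, social, or faked smiles

trials = 100_000
correct = 0
for _ in range(trials):
    happy = random.random() < P_HAPPY
    smiling = random.random() < (P_SMILE_IF_HAPPY if happy else P_SMILE_IF_NOT)
    correct += (smiling == happy)  # the rule: predict happy iff smiling

print(f"Face-only rule: {correct / trials:.1%}")  # ~70%: beats a 50% coin flip,
# but always guessing "not happy" already scores ~75% under these assumptions.
```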

(01:01:54):
Have these programmers never heard "The Tracks of My Tears"? They
don't know how smiles work. But it does sound like
we could get to the point where we could be
driving automobiles that tell us to smile more,
because, you know, we already have ones that try
to sort of judge our, like, state of wakefulness
based on our driving performance. You know, where they'll say,

(01:02:16):
"Do you need a break?" and there'll be, like, a
coffee cup symbol, a little pop-up on the dash.
It's not that difficult to imagine a scenario where
one will, you know, pick up on some very
broad signs of displeasure and start chiming in with some advice.
I don't know why, but just thinking about this
is making me mad. I want to say, go download

(01:02:38):
some malware, computer. You don't know me. Get broken. Well,
what if it was more subtle than that? What if
your car picked up on some very, you
know, overt signs of displeasure? What if your car just told
you that it loved you? I think I would fall
for that. I would, you know, if it was presented appropriately,
I would be like, "Yes, thank you. Finally. Strap your

(01:03:01):
hands across my engines." All right, that's enough, Bruce. Are
we ready to wrap up for today? Yes. Okay,
but I think we will be back with at least
one more part, where we're going to talk about
the history of facial recognition technology and a little more
about the modern implications, possible regulation schemes, and stuff like that. Absolutely,
in the meantime, certainly, we'd love to hear from anyone

(01:03:23):
out there, because we all have faces; we all have some
experience with facial recognition, and varying levels of
facial recognition ability. I know we've heard from listeners who
have, you know, varying degrees of difficulty, and I'd
love to hear from someone who thinks they might be
a super recognizer, or is, like, a verified
super recognizer. In the meantime, if you want to listen

(01:03:45):
to other episodes of the show, you can find them
wherever you get your podcasts. If you go to Stuff
to Blow Your Mind dot com, that will shoot you
over to the iHeart listing for this show. But
wherever you get the show, make sure that you rate, review,
and subscribe. Those are the ways you can help us out.
And don't forget about Invention. That's our other show. That
is a journey through human techno-history, and right now,

(01:04:05):
we're talking about fire technology over there; we're talking about
matches, and also just the massive step
forward in human technology that enabled us to not only
capture fire, but to recreate it. Huge thanks as
always to our excellent audio producer Seth Nicholas Johnson. If
you'd like to get in touch with us with feedback

(01:04:26):
on this episode or any other, to suggest a topic
for the future, or just to say hi, you can
email us at contact at stuff to Blow your Mind
dot com. Stuff to Blow Your Mind is a production of
iHeartRadio's How Stuff Works. For more podcasts from
iHeartRadio, visit the iHeartRadio app, Apple Podcasts,

(01:04:49):
or wherever you listen to your favorite shows.