Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome to Stuff to Blow Your Mind, a production of iHeartRadio's How Stuff Works. Hey, welcome to Stuff to Blow Your Mind. My name is Robert Lamb and I'm Joe McCormick, and we're back with part three of our journey through facial recognition. In the first episode,
(00:23):
we talked a lot about the current tech landscape, a company focusing on facial recognition, some issues with that. In the last episode we focused primarily on the biological domain of facial recognition, and now we're bringing it back to technology to finish up today. So one of the things that we were looking at coming into today's episode was an article that I thought
(00:45):
was really good in Wired magazine, again from this month, from January of 2020, by Shaun Raviv, called The Secret History of Facial Recognition. And I will say I was surprised to find out how far back facial recognition projects go. This goes into the nineteen sixties. Yeah, this is a really good read, this article. I mean,
(01:07):
it's extremely well written. It almost has, I would say it has a very narrative flow to it, beginning with this scene in which an elderly researcher, who is, you know, at this point, I believe, confined to a wheelchair, is instructing his son to unlock some old, rotting files from the sixties and burn
(01:29):
them in front of him in the garbage can in the garage or something. Yeah, and you can tell that there are things about classified information, top secret or what have you, on the documents. So it's a wonderful, you know, ominous start to this article, which of course deals predominantly with the origins of facial recognition and touches on other at
(01:52):
times inspiring and other times creepy and devastating scientific programs that were going on, or, like, in full swing, during the sixties. Well, yeah, one of the things that really comes home in this article is that the creepiness of facial recognition technology is not new. That's sort of been there since the very beginning. It's not one of
(02:13):
these things where, like, even the author here mentions, you know, social media, where it seems great at first, and it's not until it's, quote, in the wild that we begin to realize, oh yeah, this is civilization-wrecking awfulness and not just a fun way to share photos. No. At the time, there was a realization
(02:33):
that this was potentially problematic. Yeah. And so it might not come as a big surprise, especially given the story we mentioned about burning documents, that some of the earliest funding for facial recognition technology research clearly came from the CIA and front companies set up to funnel CIA money. Yeah, this was super interesting,
(02:54):
the CIA funding through these various phony companies. And due to the CIA funding, some of the stuff was secret. Some of the image material has only come out, you know, due to Freedom of Information Act filings, and some of the work was just never published. A lot of the work was never published. To drive home the creepiness, though, one of the
(03:16):
companies involved was this company, Panoramic, which was also tied to other programs, including Project MKUltra. It was one of eighty organizations that worked on Project MKUltra, in particular subprojects, including number 94, on, quote, the study of bacterial and fungal toxins and the remote
(03:36):
directional control of activities of selected species of animals. Animal control. Yeah, I was like, if I can, if I want to sic tigers at you from the other side of the world. Yeah. MKUltra, just to remind everybody, was a CIA project that explored the potential for mind control using psychedelics
(03:57):
and other tactics, like basically looking at ways to take these mind-expanding agents and use them to break down the human psyche and then, inevitably, build something back up that could be tightly controlled. Now, the evidence today is that the mind control experiments of MKUltra didn't really work, but they were really
(04:19):
good at destroying the human mind. I mean, because basically the project was responsible for psychological torture. Just a horrible program and a real blight on the scientific history of the United States. You know, not the only blight, but I think an appalling one. They were not good.
(04:39):
They did not figure out a way to, you know, rebuild, say, an ideal sleeper agent out of the psychological destruction that they wrought. Yeah. So a lot of this article focuses on this one main figure named Woody Bledsoe, who was a founder and leader at this company, Panoramic Research, Incorporated, which you mentioned earlier, which got a lot of business
(05:01):
from CIA and CIA front organization funding in the sixties to study things like facial recognition. But if you put yourself back in the context of the nineteen sixties, I think one thing that's kind of funny is people then might not yet have realized how difficult of a project recognizing a face would be for
(05:21):
a machine, because we have tons of sci-fi going back decades before that where, of course, robots, computers, whatever, just recognize people easily. Yeah. And I think that's mainly because we generally would just have a rough idea that a robot can do everything a person can do. A robot is a mechanical person, and you didn't have to think too hard about all the complexities involved there. I mean, even the best example of
(05:45):
nineteen sixties science fiction, and really one of the twentieth and twenty-first centuries' best examples of science fiction, 2001: A Space Odyssey, it does reference facial recognition capabilities for HAL, but it doesn't, I wouldn't say that it really goes in depth about what that means. But HAL is able to recognize faces, and
(06:06):
even, like, can recognize faces when it is a sketch as opposed to a video feed or a photo. Yeah, and I think if you're not well versed in the computer technology world, it might not be immediately apparent what's so difficult about making a computer recognize a face. But, you know, our facial recognition systems, the things going on in our brain, have amazing capabilities, and they're analog, right. You know,
(06:28):
a face has all kinds of variables that move around all the time. It can be extremely difficult to reduce a face to a set of numerical values, which is what you need to do in order to have a computer recognize a face. Yeah, as I think we've explored in the previous two episodes, there's a lot going on in facial recognition. There
(06:50):
are a number of challenges to it, and it is still not a perfected technology by any means. Yeah, that's true. And so I think it's reasonable to think of Woody Bledsoe and his colleagues as legitimate AI pioneers, even with their work in the nineteen sixties here. Oh, absolutely. Bledsoe was working with a number of, you know, highly talented individuals, sometimes on projects involving atomic weaponry,
(07:14):
for instance, before he became more focused on AI. Another individual that he worked with on facial recognition was Helen Chan Wolf. Wolf was involved in the development of Shakey, which is a robot which DARPA describes as, quote, the first mobile robot with enough artificial intelligence to navigate on its own through a set of rooms. I
(07:36):
think somebody at the company also worked on a robot called Mobot, which mowed lawns in a random and unattended pattern. I'm not joking, by the way; that is a thing Raviv mentioned. Yeah, that makes me think of the old Gumby short where the Gumby family have the robots that are doing lawn care and home repair, and they just go berserk, and the
(07:58):
Gumby family has to take them down. I don't think I know that one. Oh, it's good. There was an MST3K riff of it five or eight years back. That sounds horrible. It is. It's horrifying. At the end, there's like a robot's head on the wall. It's terrifying material. But one of the things that ended up being the case, if you're trying to think, okay, in the sixties, how would you even begin to get
(08:19):
a computer to recognize a face across different images? One of the problems is, of course, the lack of existing digitized imagery at the time, right. I mean, we live in a world where digital imagery is ubiquitous. It was not at all back then. There was almost none of it. Yeah. Today, when you read about facial recognition research, they're often working off of digital databases
(08:41):
containing hundreds if not thousands of photographs. And at the time, yeah, how many digital photographs were there in the world, right? So they actually had to have some kind of digitization process. Like, they had to be able to take a photo and turn that photo into numbers that could then be interpreted by the machine
(09:03):
to try to recognize a person, or, you know, say that, yeah, that's the same person or not. And so a lot of these methods involved, number one, recognition rules based on explicit measures, not machine learning. They didn't have the machine learning methods we have today back then. They would have to have explicit rules, like the distance between randomly selected features of the face. So in this image, what's the
(09:26):
distance between the left eyebrow and the right ear, and the, you know, left eye and the right corner of the mouth, or something. And furthermore, to get those measures, they often had to resort to what they called a man-machine approach, which paired initial measurements made by humans with machine matching. It would take those measurements,
(09:47):
turn them into numerical values, and then train the computer to match based on those values, which is quite laborious.
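To make that concrete, here is a minimal sketch in Python of the kind of explicit-measure matching being described. Everything in it is illustrative: the landmark names, the example coordinates, and the match threshold are assumptions made up for the sketch, not Bledsoe's actual parameters. The only point is the core idea of reducing a face to a vector of distances and comparing vectors.

```python
# A minimal sketch of 1960s-style explicit-measure face matching.
# Assumes a human has already marked landmark coordinates on each photo
# (the "man" half of the man-machine approach). The landmark choices and
# the match threshold are illustrative, not historical parameters.
import math

def distances(landmarks):
    """Turn a dict of (x, y) landmark points into a list of pairwise
    distances, normalized by the eye-to-eye span so that overall image
    scale doesn't matter."""
    names = sorted(landmarks)
    eye_span = math.dist(landmarks["left_eye"], landmarks["right_eye"])
    return [
        math.dist(landmarks[a], landmarks[b]) / eye_span
        for i, a in enumerate(names)
        for b in names[i + 1:]
    ]

def match_score(face_a, face_b):
    """Mean absolute difference between two distance vectors;
    smaller means more similar."""
    da, db = distances(face_a), distances(face_b)
    return sum(abs(x - y) for x, y in zip(da, db)) / len(da)

photo_1 = {"left_eye": (102, 88), "right_eye": (148, 90),
           "nose_tip": (125, 120), "mouth_left": (110, 145),
           "mouth_right": (140, 146)}
# The same face photographed at twice the scale.
photo_2 = {"left_eye": (204, 176), "right_eye": (296, 180),
           "nose_tip": (250, 240), "mouth_left": (220, 290),
           "mouth_right": (280, 292)}

score = match_score(photo_1, photo_2)
print("match" if score < 0.05 else "no match", round(score, 4))
```

The normalization by the eye-to-eye distance is one simple way to make the comparison tolerant of image scale, which was exactly the kind of real-world variable these early systems struggled with.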
And the man-machine approach was apparently necessary to input the measures really until the seventies, when, as Raviv writes, quote: In nineteen seventy-three, a Japanese computer scientist named Takeo Kanade made a major leap in facial recognition technology.
(10:11):
Using what was then a very rare commodity, a database of eight hundred and fifty digitized photographs, eight hundred and fifty, taken from the nineteen seventy World's Fair in Suita, Japan, Kanade developed a program that could extract facial features such as the nose, mouth, and eyes without human input. And
(10:31):
so by doing that, Kanade was able to finally eliminate the man part of the man-machine approach, the human measuring and input of the values. But throughout all this period, I mean, there were still huge problems with machine recognition of faces. Like, they would sometimes get to the point where the system developed by Panoramic would be more efficient than humans
(10:55):
at matching faces under ideal conditions. So if you could
get all the faces like oriented the same way, looking
right into the camera and all that, the machine would
be better than humans trying to match photos. But if
you just pollute the imagery a little bit and make
people like turn their heads or change the lighting, etcetera. Yeah,
any kind of problems like that, suddenly the machine loses
(11:17):
all its advantages and the humans are better again. Right. And some of the other problems that are still issues today were problems at the time, like depending too much on, say, an all-white male database, you know, or something to that effect. You know, where you just don't have a broad enough sample of human appearances to really have a robust
(11:40):
facial recognition system. Right. If it's not trained on humans generally, it's not gonna work on humans generally, right. And so, yeah, like, the racial bias problems that show up in existing facial recognition technologies today were basically there right from the beginning. But towards the end of his article, Raviv writes, quote, only in the past ten years or so has facial recognition started to become capable of dealing
(12:03):
with real-world imperfection, says Anil K. Jain, a computer scientist at Michigan State University and co-editor of the Handbook of Face Recognition. Nearly all of the obstacles that Woody encountered, in fact, have fallen away. And one of the big ones they point to is not just the fact that you can get some digital images now,
(12:23):
but you can get so many of them that, you know, it provides these vast databases for machine learning and data sets for training of neural networks and stuff. That's right, you can just basically crawl, you know, any given social media site. And, well, we mentioned a few already. We'll mention a few different ones as we proceed here. Well, maybe we should pivot to talk about the ways that
(12:43):
facial recognition technology is being used today. And there was one thing that I was looking at. So it was a piece in The New York Times. It was an opinion piece by a guy named Bruce Schneier, who is a fellow at the Harvard Kennedy School, that was making a point about facial recognition that I'm not sure I fully agree with the framing of. But within making this,
(13:05):
within writing this article, he articulates something that I think is important and clarifying for us to remember as we go on. So, to first acknowledge his main point, he writes, quote, facial recognition bans are the wrong way to fight against modern surveillance. Focusing on one particular identification method misconstrues the nature of the surveillance society we're in
(13:25):
the process of building. So he's arguing facial recognition, you know, it's just one technology among many, and banning it doesn't necessarily stop all of these other surveillance technologies from doing effectively the same thing. And we can talk later about why I'm probably not really convinced by this argument. But the point that I think is clarifying is
(13:47):
that he, you know, he says what we call facial recognition is not just one act, but it's at least three different acts, each coming with their own challenges. Quote: in all cases, modern mass surveillance has three broad components: identification, correlation, and discrimination. So of course identification is just the first step.
(14:09):
It recognizes who you are. Correlation then associates that ID with other information about you, and then finally discrimination treats you differently because of that ID or because of the associated information.
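As a toy illustration of how those three components chain together once they are automated, here is a short Python sketch. Every name, data record, and rule in it is a hypothetical placeholder invented for the example; the point is only that once each step is code, the whole chain can run by default on everyone the camera sees.

```python
# Toy sketch of the identification -> correlation -> discrimination
# pipeline. All data and rules here are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Profile:
    person_id: str
    purchases: list
    locations: list

DATABASE = {
    "person-0042": Profile("person-0042",
                           ["umbrella", "train ticket"],
                           ["park", "station"]),
}

def identify(face_embedding) -> str:
    # Stand-in for a face matcher. Note the article's point: this does
    # not need to return a real name, only an identifier that stays
    # consistent over time.
    return "person-0042"

def correlate(person_id: str) -> Profile:
    # Associates the ID with everything else known about that person.
    return DATABASE[person_id]

def discriminate(profile: Profile) -> str:
    # Treats the person differently based on the correlated data.
    return "flag for stop" if "park" in profile.locations else "ignore"

print(discriminate(correlate(identify(face_embedding=[0.1, 0.7]))))
```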
Now, this in and of itself, of course, is something that humans are perfectly capable of. Humans can carry out all three of these tasks without
(14:29):
aigive machines. But what we're talking about here is that
it would be automated. It would be something that would
be happening by default to everybody, as opposed to something
that might take place uh and perhaps even laboriously uh
in specific scenarios such as, you know, once you've been
flagged at a security checkpoint or something like that, they
(14:51):
ask you to pull out your ID and then look you up, to correlate you with other information and then maybe treat you differently based on whatever they find out. We're dealing with the scenario here where this would all be happening in real time and would happen perhaps with very little delay. And I think most important would be the scale and pervasiveness. It would be happening everywhere, all the time. We know from experience
(15:14):
how quickly new digital technology pervades all spaces. So basically, you know, one of his arguments is that these are three broad components that it might be easier to regulate individually, as opposed to saying, let's not do facial recognition; well, instead, maybe we go after each of these three things. I think that's not necessarily a bad idea, to
(15:37):
think about a framework for overall regulation of surveillance and preservation of privacy. I mean, ultimately, I think I agree that it's important to better understand and regulate the entire process, recognizing the three different components of identification, correlation, and discrimination individually. I just don't think it makes a lot of sense to frame this as an argument against banning
(16:00):
or regulating facial recognition, because to me, that sounds kind of like saying, well, international treaties banning the use of smallpox virus as a weapon of war miss the point that we need to rethink our entire concept of war and defense, and we need to regulate international conflict in a more comprehensive way. I mean, like, yes, that
would be true. But if you don't know when and
(16:21):
whether that full comprehensive thing is going to be accomplished,
and you do currently have a consensus to ban the
use of germ warfare, why wouldn't you do it? Absolutely? Um,
here's another quote from that that article though that I
that I was really taken by quote. The point is
that it doesn't matter which technology is used to identify
people that they're currently is. No comprehensive database of heartbeats
(16:42):
or gaits doesn't make the technologies that gather them any less effective. And most of the time it doesn't matter if identification isn't tied to a real name. What's important is that we can be consistently identified over time. And then on top of that, though, you know, once your real name is then attached to that information, there's no turning back, right. The system is building a picture
(17:03):
of you, move by move, purchase by purchase, search by search, over the course of years, if not decades. Yeah. And of course, because it's the Internet, what kind of system are we talking about? I mean, I would argue that it is a dystopia a little bit less like Nineteen Eighty-Four and more like Terry Gilliam's Brazil, where, you know, there's no one person in charge that you
(17:24):
can rebel against. No one person seems to understand how it all works or call all the shots. It is a terrible will emanating from the void that is expressed through millions of antlike functionaries, each doing its bidding without being bidden. I mean, isn't the oppression scarier when there's no boss in charge of it all that you can
(17:45):
rebel against? Absolutely. And especially when it's a case like facial recognition, where a lot of the advancements that have been made, and, more specifically, a lot of the cases where it has been or is being rolled out. You know, it's often not a situation where people are getting to vote on it or even necessarily having
(18:07):
any kind of really broad discussion about it before it takes place. It just sort of happens, and then here we are. You know? So not only is there no key individual in place that you can blame and rebel against, there's not even necessarily a set point in time where you could say, we have to go back and change this, you know, get in a time machine and
(18:27):
try and prevent facial recognition from being rolled out. And where do you go? Who do you try and stop? Who do you try and speak to, except the whole world? You know? Another thing I would say is, to go back to my germ warfare analogy, I think it's possible that he's not quite right that the different methods of ID'ing you and tracking you are indistinguishable from each other.
(18:48):
I think it's possible that facial recognition is an especially insidious and corrupting type of automated identification compared to some other methods, like, you know, IDs like credit card numbers or MAC addresses on your phone. Because our lives are built around faces; our social existence and our cultures are built around interfacing with faces. I mean, it seems like
(19:11):
somehow a more dangerous kind of well of information to poison in the culture than to say, like, well, you know, your phone knows where you are. I mean, you could potentially smash your phone with a hammer. Yeah. And again, I think I've mentioned this already: so much of what's involved in facial recognition, on the surface, doesn't seem threatening.
(19:31):
It seems like, well, we wanted machines to do what people could do, so we've created a way for them to do it. That's what we always wanted. How is that terrifying? And I think part of it is, haven't you always wanted a boss who can watch you every minute of the day? Well, I mean, yes, when you get to specific examples of it. But I think one of the keys
(19:51):
is that when we're talking about automating this sort of thing, and when we're talking about the machine version of it, we're talking about a version of it that exists at a scale and a scope that is beyond human. And that's the thing. It's like, we're not talking about a human capability anymore. We're talking about an inhuman capability. The same way that, you know, we've talked about
(20:12):
consciousness before, and we sort of posed the question: well, is part of consciousness the limitation of what we can focus our attention on? Is part of the key to being human the limitations on what our brains can do, on what our senses can do? And we're talking about models that are not limited by those senses or those, you know,
(20:34):
those computational resources. Beautifully put. And I think you're exactly right. I mean, if you could be conscious of everything, I'm not sure you would be conscious anymore. I'm not sure you'd be a person. Yeah, yeah. You know, this is something that far more specialized people than myself probably have better arguments about. But yeah, to what extent would a super consciousness not
(20:56):
be a consciousness? It would be beyond what we think of as consciousness. Okay, we need to take a quick break, but when we come back, we will discuss how facial recognition technology is already being used in several countries around the world. All right, we're back. All right. So let's look at, this is just going to be kind of a snapshot of a few different
(21:19):
stories covering facial recognition as it is being rolled out right now, as of this recording at the tail end of January 2020. So for the first couple of sources we're looking at: there's an opinion piece by Frederike Kaltheuner in The Guardian titled Facial Recognition Cameras Will Put Us All in an Identity Parade. And this
(21:41):
piece also refers in places to reporting from the same month and publication by Vikram Dodd, a police and crime correspondent for The Guardian. So basically, London's Metropolitan Police announced their intention to launch live facial recognition cameras in London, a city already known for mass surveillance via their CCTV system. It's a move condemned by civil
(22:04):
liberty groups as a, quote, breathtaking assault on human rights. The police, however, claimed that 80 percent of people surveyed backed the move, and the cameras would only be used to catch violent criminals and find missing people. They also stressed that it will be properly posted and it will be rolled out with clarity and transparency, and only, you know, following outreach in affected communities. Now, you know,
(22:27):
we've kind of talked about this already. If you stand by the notion that, okay, I trust the government as it exists now, and I trust whatever form it will take in the future, I trust the keepers of this information and my personal information, and I don't have anything to hide anyway, then, okay, I guess you can easily get on board with something like that. But a lot of folks have issues
(22:49):
with this rollout in particular, as well as with facial recognition technology in general. So, for starters, this comes as the European Union is considering a temporary ban on facial recognition, and of course this is also occurring as the UK continues to extract itself from the EU. Also, the only independent review of the Met's facial recognition public trials,
(23:14):
by one Pete Fussey of Essex University, found that it was verifiably accurate in just nineteen percent of the cases, as opposed to the, I think it was something like a seventy percent success rate, that the Met police were claiming. Yeah, I mean, we saw in the first episode, we talked about how the company Clearview
(23:34):
AI claimed that they found correct matches up to seventy percent of the time. But we've definitely read reviews that placed correct-match numbers way, way lower. Right. And the other thing is, there's no opt-out here. Even though they're talking about targeted uses of it, the technology is not targeted. All faces are scanned that are at all scannable, and
(23:57):
therefore everyone is in a virtual criminal lineup whenever they're in the sights of the camera. This is a quote from that opinion piece. Quote: Given that the system inevitably processes the biometric data of everyone, live facial recognition has the potential to fundamentally change the power relationship between people and the police, and even alter the very
(24:20):
meaning of public space. So, you know, the standard criticisms come up as well, including misidentification, but not only misidentification but automated misidentification, which the author charges shifts the burden of proof onto the falsely recognized individual, because there's no human accuser, right; it's just, the machine said you
(24:41):
did this, and it's up to you to prove that you are not that face, that that face is not yours. And I want to stress that this is problematic even if the success rate were not so low, because obviously the technology is improving, and that is not the answer; that is part of the problem. No, I'm more scared when the success rate gets verifiably higher,
(25:04):
because then the trust in it just keeps going up and up, and the fewer cases where people still get misidentified, which is always going to continue to happen to some degree, are going to get more and more, like, you know, terrifying. They're going to be more and more under the knife, right. I mean, you end up with, again, it changes what is a public space. Well, a public space then: I can't go to the park without being in a
(25:25):
virtual, like, criminal lineup. Like, I am always going to be profiled as a potential criminal. So yeah, it's a good piece, worth reading. So Kaltheuner, you know, sums this up by saying, you know, look, the stakes are very high with this kind of invasive technology. And while these rollouts are not without their critics, obviously we've
(25:48):
been talking about the various criticisms of it, but they argue that we still haven't really seen them subjected to true public debate. The technology is happening faster than the awareness of it is. Exactly. Yeah. And I would point out another thing about it, which is just that, okay, if you suddenly put a pause on this kind of technology and say we can't use it, and then you
(26:11):
go under a review, and you review and review and review, and you're sure in the end, okay, the benefits definitely outweigh the harms, then you could still release it in the future. But I think you can't really go the other way, right. Once you live in the facial recognition world, that animal's not getting back in the cage. Right. Yeah,
(26:31):
once you've sort of eroded the norms of privacy, like, how do you go back? You know, you quickly enter an age where people who don't want to be a part of the database are going to be seen as the outsiders and the fringe cases and the, you know, the people who live out in the hut in the woods. And perhaps that would be also because
(26:52):
they would be effectively ostracized from so many of these systems. Yeah. And again, we're mainly talking about, like, state uses or police uses or whatever, but there's a whole other world to consider of just, like, distributed public use of facial recognition systems, which could be hugely socially important, could
(27:12):
lead to new types of social ostracism and stuff. Yeah, or situations where, oh well, I'm opting out of facial recognition technology. Oops, looks like it's gonna make it really difficult for me to pay my bills, because I need my face for identification. On that, we'll get to some examples of that currently in use here in a bit. Or how about, I mean, this might sound petty,
(27:34):
but imagine a world where, anytime somebody invites you to do something and you say, no, I can't, they can also very likely find out wherever you were at the time you couldn't do the thing they invited you to do. Yeah, I was just having a conversation with my wife about smartphone technology, about how I had recently been to a
(27:55):
concert venue, and I'm not going to name the organization that was handling the concert, but you had to have a mobile entry ticket, which, as far as I know, I could be wrong on this, but my understanding was you had to have, like, a smartphone version of your ticket to get in, which just kind of blew me away. And I was like, what if you don't have a smartphone? Not
(28:15):
everyone has one. Does this mean, like, only smartphone people are getting into this thing? And my wife had a similar situation with a parking deck where you had to have an app on your phone to get in. And so you can easily imagine this kind of scenario extrapolated to facial recognition. Like, oh, you're not opting in on facial recognition? Well, I'm sorry, you can't come to this concert, because you have to have a
(28:37):
face scan to get in, and your face will be scanned, of course, while you're in the venue by the security systems. But the UK, of course, is by far not the only place that's already trying to roll out some kind of public form of facial recognition technology. I've read about uses in China. I've read about uses in Russia. Yeah, even in Russia, though, civil rights activists are criticizing facial recognition technology as a threat to privacy and human rights,
(29:00):
specifically for the technology's ability to, say, identify individuals at protests, store them in a database, and then track them. Right, you could easily be put on the, you know, the government's undesirables list. Here's a quote from Natalia Zviagina, Amnesty International's Russia director. She's quoted in the January BBC article Russia's Use of Facial Recognition Challenged in Court. Quote: facial recognition
(29:24):
technology is by nature deeply intrusive, as it enables the widespread and bulk monitoring, collection, storage, and analysis of sensitive personal data without individualized reasonable suspicion. Again, everybody's in the criminal lineup. Yeah, you don't have to do anything in particular to arouse suspicion; just being in public,
(29:45):
that's enough. According to that BBC article, Moscow has about a hundred and sixty thousand CCTV cameras in operation in the city, and this month they expanded the number using facial recognition with no explanation of how privacy and human rights would be ensured, or, I mean, it seems, whether they would be ensured, because
(30:07):
it seems like the basic system would be anathema to that. The BBC article also pointed out that the Moscow Department of Information Technology has reportedly signed a deal with the Russian firm NtechLab to provide the needed technology, and this firm had previously rolled out the FindFace app, which
(30:27):
used data from a website that is often referred to as Russia's Facebook. It is not an actual Facebook website, but is often held up as the equivalent. Yeah, there are a lot of different, sort of, other countries' Facebooks. And then there's China to consider. So China has also rolled out facial recognition, and there are a number of different uses and implementations that you can find just
(30:50):
all over the place. But you can find it in train stations, airports, stores, hotels, gated communities, and more. I think I had seen allegations that it had been used in attacks on the Uyghur communities. Basically, yes, that's a big one, the alleged tracking of ethnic minorities in China using these systems. But you
(31:13):
also find it in other places. Like, it's used for things that are almost comical but also troubling by just how tedious and small-stakes they seem, such as, you know, using it to catch toilet tissue thieves at public bathrooms. Some of what I was reading about it came from a New York Times article by Amy
(31:33):
Qin, that's Q-I-N, in case you want to look her up. I'm not gonna mention the title of the article, because it will spoil the fun of what I'm about to share. But anyway, in this she described China as, quote, a country accustomed to surveillance, which I think is an interesting way of looking at it, because I wonder if China's case in some way
(31:55):
provides a model of where some of these other countries we've mentioned could be headed very soon. Like, just given a culture that is very broadly predisposed to being okay with surveillance, it might provide a future look at where we're going in these other countries. For example, you
(32:18):
see quite a bit of facial recognition used to do
stuff like open your phone or make a payment, something
that the South China Morning Post points out has been
disrupted by the current coronavirus outbreak. So in the wake
of this current global health threat which has impacted China
the most thus far, lots of people are wearing surgical masks,
and they were already in many cases wearing these masks
(32:39):
in high numbers due to pollution concerns. But the software that is used in many of these instances is apparently struggling to deal with so little facial information, requiring other biometric data in order to still ID an individual. So Qin's article goes into this, about how people basically have to make a choice between being able
(33:02):
to easily make payments at stores using facial recognition or feeling like they are adequately protected against a dangerous illness. This is a great example of something that I was searching for an example of later in the episode, so we'll have to remember this later. Now, I mentioned something kind of fun but also still scary in that it seems small-stakes. And that is the central
(33:25):
point of Qin's recent article, and that's the use of facial recognition technology to root out, quote, uncivilized behavior in Anhui Province in eastern China. So, using facial recognition data, she writes, the Urban Management Department of Suzhou published surveillance photos of individuals engaged in said uncivilized behavior, with partial exposure of their names. So
said uncivilized behavior with partial exposure of their names. So
like a public shaming or public outing, kind of like saying, look at these people, her name is, you know, Amy or whatever, here they are engaged in uncivilized behavior, they should not do this. What was the uncivilized behavior? I know, you would think, okay, is it defacing, you know, a public place? Stealing the toilet
(34:11):
paper, even? That's low stakes. Is that it? No, it's even lower stakes. It's the wearing of full-length pajamas in public. What? Yeah. So, I mean, I've seen people wear pajamas in public. I am currently wearing clothes as I record this. I'm wearing clothes that I also sleep in. So I think the last time you came to my house to let me borrow a book, I answered the
(34:32):
door in pajamas and didn't realize until you were there. I'm sorry, man. No, no, you don't have to apologize, because this is the universal truth. Qin also describes it thus in her article: pajamas are comfortable. They're very comfortable. And public pajama wearing is common in China, especially among older women and especially in cold months and regions.
(34:54):
And to add another level to the injustice, it's the sort of thing that is praised as fashionable when celebrities do it, but gets this sort of backlash when it's common people. And it's been a point of contention for a little while here, with government officials trying to ban it in some places, but the people pushing back and saying, no, you know, this is a bridge too far. I want to wear my comfy pajamas,
(35:15):
and if I want to wear them outside of my home, I will do so. It's not uncivilized to do this. And in this case, it's a rare example of people in China pushing back against facial recognition technology, in a place, again, where the technology has already been highly established and is recognized and appreciated for the things that it makes easier in life, such
for the things that it makes easier in life, such
as making payments. So when you're purchasing something at the store. Yeah, I mean, the process there is so clear suddenly, how, like, you are lured in with convenience, you know, you're lured in with immediate concrete benefits, and then there are these consequences that just come later. Right. Yeah. And again,
(35:57):
it also shows you what happens when the government or the police, you know, change their mind about how specific they need to be with the laws that they readily enforce. You know, it's like, it's a similar case to what we often encounter with, say, traffic cameras. Like, how are we supposed to feel about it? Like, yes, we don't want people just, you know,
(36:17):
going through traffic running every red light. There needs to be order. And, I guess, you know, there needs to be this idea that, yes, if you break these rules, you could be pulled over, you get a ticket, there could be some sort of a punishment in place. But when you have this situation where any transgression, you know, even the smallest thing, can be
(36:37):
met with an automated fine. You know, that gets into an area that feels more like dystopia than order. Yeah, it certainly can. I mean, again, it's one of those things where it starts to get harder to argue, like, how do you argue with the machine? The camera said you were speeding and you're like, I wasn't speeding. What do you do? Yeah. So anyway, these examples, I think, just help
(37:00):
provide a more nuanced idea of, like, what's going on in the world right now with facial recognition, and what the many pain points are and where people are fighting back against it. Yeah. So I think now we better understand how the technology is being deployed. Of course, in the first episode we talked about how it's being deployed even in the
(37:21):
United States. The question is, how should we react? Like, what can we do? And I think maybe it would be best to first talk about individual countermeasures and then come back to broader action after that. All right, so individual countermeasures, something that an individual person can do in the face of facial recognition technology. Yes. So, a
(37:42):
bunch of the existing knowledge about how to confuse and confound facial recognition tech was summarized in a really good article I was reading in Wired by Elise Thomas from February of 2019, called How to Hack Your Face to Dodge the Rise of Facial Recognition Tech. And, unfortunately, Thomas points out that the best way to foil a facial recognition system is to know what method of facial
(38:06):
recognition is being used. Thomas quotes a privacy advocate named
Lily Ryan, who says, quote, you really need to know
what's under the hood to know what's most likely to work,
And it can be very hard for the average person
to know what kind of facial recognition is being used
on them at any particular time. So it's worth pausing
to look at different kinds of rebellion against the recognizing machine.
(38:29):
I think the first one is the simpler one. The first would be an anonymity defense, some method of making your actual identity unrecognizable and presenting as an unknown, unscannable person. This would essentially be trying to become faceless. Yeah, exactly. The second path would go beyond that into what are called presentation attacks in some of the literature. For example,
(38:53):
there's an article that's linked by Thomas, by Raghavendra Ramachandra and Christoph Busch, called Presentation Attack Detection Methods for Face Recognition Systems: A Comprehensive Survey, published in ACM Computing Surveys. And so this is a survey of known presentation attacks, also known as direct attacks
(39:13):
or spoof attacks. The authors write, quote: The goal of a presentation attack is to subvert the face recognition system by presenting a facial biometric artifact. So if you go in front of a facial recognition scanner and you hold up a picture of Nicolas Cage, or you wear a Richard Nixon mask or something, you are conducting a presentation attack.
(39:36):
You're not just trying not to be recognized as yourself, but actively trying to be recognized as someone else. Common methods here would include, like, presenting a photo of someone to a scanner, which has actually worked in a lot of cases, playing a video of the target face, or wearing a 3D mask of somebody else's head.
(39:57):
All of these methods have had some success, but the researchers here describe presentation attack detection algorithms, or PAD algorithms, that are countermeasures against the countermeasures.
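To give a flavor of what one of those countermeasures can look like, here is a toy Python sketch of a single PAD cue, texture or sharpness analysis, using OpenCV. The intuition, which appears in this survey literature, is that a face recaptured from a printed photo or a phone screen tends to lose high-frequency detail. The file name and the threshold are hypothetical, and a real PAD system would combine many stronger cues than this one.

```python
# Toy presentation-attack-detection cue: variance of the Laplacian over
# a cropped face image as a crude sharpness/texture measure. Recaptured
# photos and screens tend to score lower than a live face. The input
# file and the threshold are hypothetical, for illustration only.
import cv2

def laplacian_sharpness(image_path):
    """Return the variance of the Laplacian (higher = more fine detail)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

score = laplacian_sharpness("face_crop.jpg")  # hypothetical face crop
print("possible spoof" if score < 100.0 else "passes this naive check",
      round(score, 1))
```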
Now, a question you might be wondering is, like, well, why would you want to present as someone else instead of just being anonymous? Well, I mean, there are all
(40:17):
sorts of reasons for that. I mean, there's certainly nefarious reasons for that. If you get into a situation where, say, facial recognition is required to enter a gated community, well, then, if you wanted to break into said gated community, it would behoove you to have another person's face to wear. Perhaps, you know, printed out. Exactly right. That could be the direct reason. Maybe you want access to a location or a device, and access is granted to specific
(40:41):
people based on facial recognition. So you use an authorized person's face in order to get in. But I can also see another idea, which is, perhaps rampant presentation attacks could be an effective method for fighting the facial recognition reign of terror. Because it would not only deny these
(41:01):
data-collecting systems the data they want; not just do that, it would go further and gum up the databases with lots of confusing, incorrect information, which might in fact make
them less useful overall. Yeah. I mean, another application here that is awful to think about:
(41:21):
we're talking about sort of the automated guilt machine that could exist with facial recognition technology. Say you had it in for somebody you're mad at, you know, a coworker or a schoolmate. Then you go and you do something illegal with a mask of that person's face.
You know, not enough to, say, send them away forever, but enough to, you know, cause
(41:45):
them a lot of grief in the short term, at the very least. I mean, you don't know how good their defense would be. I mean, maybe it would send them away forever. I mean, I don't know how much faith the criminal justice system is going to end up putting in the verdicts of these machines. I wouldn't be surprised if it's too much. We've seen that before. We've certainly seen models of that in some of the past episodes where we've discussed forensic science. Yeah, exactly. Yeah,
(42:09):
there are a lot of methods of forensic science that have been vastly overestimated in their confidence. Now, finally, another reason I was thinking it might be useful to present as another human instead of just trying to make yourself anonymous is, it's not hard at all to imagine scenarios where being unscannable or being anonymous will itself be a problem, will restrict your rights, make you a target, etcetera. Like,
a problem, will restrict your rights, make you a target, etcetera. Like,
the anonymity could attract attention rather than discouraging it. So
the alternative would be to appear as a real scannable person,
but not yourself. Right, Wearing a faceless mask in public
is gonna draw more attention than looking like someone else. Yes,
(42:53):
even if it wasn't, like, that great a mask, you know, it would still potentially draw less attention. Now, back to Thomas's article. Thomas writes that, of course, you know, the simplest method of fighting facial recognition is what would normally be called occlusion, hiding all or part of your face. But again, this is more difficult and more complicated than it sounds. So
(43:14):
let's say you want to walk around in public with your face completely hidden behind a cloth or a zip-up hood or something. You know, there are actually people selling basically backwards hoodies, you know, front-zip hoodies that cover your entire head. First of all, is that legal? In a lot of places, no. In a lot of places in, for example, Europe and Canada, and
(43:34):
in the US, it's illegal to cover your entire face in public. But even if these laws were changed, or you're in a place where that's not illegal, is this socially feasible? We'll come back to that in a minute.
But okay, let's say you decided it is not practical
to completely occlude your entire face. What if you just
cover part of your face? Unfortunately, Thomas writes that a
(43:57):
lot of facial recognition software is good enough now to
make partial covering of the face ineffective as a defense.
Quote: For example, a balaclava, which leaves the most important facial features exposed (the eyes, the mouth, the nose) may not actually do much to prevent a person from being identified. Researchers have found that by using a
(44:18):
deep learning framework trained on fourteen key facial points, they
were able to accurately identify partially occluded faces most of
the time. This includes wearing glasses, scarves, hats, or fake beards. Yeah.
I mean, when you're getting down to things like the measurement between your eyes, you know, stuff like that, I mean, it's probably going to be visible in most of these facial occlusion methods. And it's not
(44:42):
something you can easily mess with, you know. I mean, short of, like, massive facial injury, I can't think of anything much that's going to alter that measurement. Yeah,
and there are apparently multiple measures like that of a face. You know, as long as you want to have your eyes and your mouth and stuff exposed, there are probably going to be systems, especially in the near future,
(45:04):
that will be fairly accurate at identifying you anyway. Now, another thing,
as I mentioned a minute ago, there's some evidence that 3D-printed masks based on other people's faces can be pretty effective, but they might not remain effective for long. Remember, of course, we've got these PAD algorithms, the presentation attack detection, and they're developing quickly. And also, I mean,
(45:25):
is that anything close to practical for regular people? Like, it seems more like an option that might be available to a professional spy, but not to just somebody trying
to live their life. Now, there's another interesting method that Elise Thomas mentions in her article, which is confusing the computer into believing it is not looking at a face at all: attacking not the facial recognition stage but
(45:47):
the facial detection stage. I hadn't so much thought about that, and I think that's a really good point.
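To make the two stages concrete, here is a minimal Python sketch of the detection stage using OpenCV's stock Haar-cascade face detector, one common off-the-shelf detection method (not necessarily what any given surveillance system uses; the input file name is hypothetical). If this step returns zero faces, nothing is ever handed to the recognition stage, and that is the entire logic of the anti-detection styling described next.

```python
# Minimal face-detection stage using OpenCV's bundled Haar cascade.
# Anti-detection styling aims to make this step find zero faces, so
# the recognition stage downstream never runs at all.
import cv2

img = cv2.imread("street_photo.jpg")  # hypothetical input image
if img is None:
    raise FileNotFoundError("street_photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces) == 0:
    print("no faces detected; nothing reaches the recognition stage")
else:
    for (x, y, w, h) in faces:
        print(f"face found at x={x}, y={y}, size {w}x{h}")
```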
One solution along these lines is widely known as CV dazzle, which stands for Computer Vision Dazzle. Dazzle kind of in the same sense it's used in, like, a military context: you would put dazzle on the side of a ship in a context of warfare, which is, like, different lines
(46:09):
going in different ways that apparently make it harder to look at the ship and determine its speed and its bearing, like some of the more, like, lightning-bolty camouflage designs one sees. Not necessarily bedazzled, which is the first thing that came to my mind, like, just bedazzle your face. Well, it's funny, because that does kind of come in. But yeah, I think it's based
(46:31):
on the idea of visual dazzle. And so what it involves for people is altering the appearance of your head to stop facial detection algorithms from flagging it as a face. And the most common ways to do this are with makeup, with hair styling and coloring, with facial accessories like hair clips and stick-on rhinestones. So there
(46:51):
is some bedazzling going on. Robert, I included a few examples here for you to look at. These are from a website, cvdazzle.com, which offers some explicit style guides that people can use. Oh wow, these are great. I mean, these look like futuristic hair and makeup designs that you might see in, like, Blade Runner or something. Yes. Now, again, here, clearly, the purpose of these designs
(47:15):
is not just to make your face look different. It is to try to make your face look like something other than a face, which I can see proving potentially difficult, coming back again to that blog post that we discussed in one of the earlier episodes about trying to fool the Skype facial recognition software. Yeah, the background blurring thing. Yeah, yeah. So,
(47:38):
like, you know, even the stuffed giraffe was getting recognized as mostly a face. So it's a more difficult challenge than one might think. But again, these visual examples from cvdazzle.com I think will be very informative. If you can't quite picture it, look these up. Yeah, totally. Now, a lot of them involve things like, sort of, different streaks of
(48:00):
light and dark in the hair and in lines across the face and makeup, hair partially covering the face in a lot of cases, things stuck on the face that kind of make the shape look different. Thomas quotes Christoph Busch, one of the authors of that study we mentioned earlier, and he says, quote, from an academic research perspective, the makeup attack is gaining more attention. However,
(48:24):
this kind of attack demands good makeup skills in order
to be successful, and that's a really good point. In
other words, it's not good enough just to do some
unusual things to your hair and put different colored makeup
on at random. The face design needs to be specially
tailored to obscure and break up specific like lines and
shapes and points on the face in order to make
(48:47):
it sabotage the detection stage. So if you want to know what these specific techniques are, you can find guides online from people who study the issue. Now, I think this is really cool, and, you know, being somebody who's into, I guess, weirdness myself, like, I like these styles. Like, if I saw somebody wearing these styles, I would think it was cool. But I think
(49:08):
we should be real and say these styles are in
daily life simply not socially feasible for a lot of people,
maybe most people. Well, for one thing, socially, they're going
to have the opposite effect. Instead, you're going to draw
attention to yourself from humans, even as you escape the attention of the machine. Yeah, that's exactly right. And
in addition to that, a lot of people's friends, families,
(49:29):
especially workplaces will probably not be okay with them styling
their hair and makeup deliberately to make their face look
as unlike a face as possible. I'm okay with it
around here. Yeah, me too. I don't know if the bosses would be okay. Right. You can easily imagine, like, a very conservative or a governmental kind of employer who would, you know, have a very firm idea
(49:52):
of what your hair and your makeup should consist of. And again, if you're not allowed to wear, say, pajamas in public, I can't imagine this would fly either. And
there's another complication, actually, that makes this even more difficult. So the CV dazzle method is only effective at fighting face detection technologies that rely on visible light, and not
(50:13):
all do. Thomas cites, for example, Apple's Face ID, which actually uses infrared light. The system detects sort of more about the underlying contours and, like, bone structure of your face, and it is not easily thrown off by unusual patterns of light and dark colors. The CV dazzle wouldn't necessarily affect a system like that,
(50:36):
though there are other methods you could maybe use. Apparently, you might be able to protect your face from infrared detection by, for instance, wearing a hat that projects infrared light on your face in weird patterns, as demonstrated by one study from Chinese and American researchers. But again, like, is it realistic that people would be able to do that? Thomas mentions another method that I like: overwhelming
(50:59):
the recognition, kind of like a visual denial-of-service attack on facial recognition. The solution here is pretty simple: you cover yourself in lots of images of faces; shirt, scarf, earrings, all with pictures of faces on them. Would this always work? Probably not, but it will be effective with some systems. And she ends her article by
(51:21):
mentioning, again, like, major problems with existing facial recognition technology that we've already alluded to, like huge numbers of false matches, errors that skew along race and gender lines, all kinds of problems like that. She ends up saying, quote, the real solution to issues around facial recognition is the tech community working with other industries and sectors to strike an appropriate
(51:42):
balance between security and privacy in public spaces. Which, that may be an answer, but, I mean, I wonder, will that be good enough? In the first episode of the series, we discussed several figures who have called for either strong regulations or even outright bans on face recognition technology. So I think next we should look at that as
(52:03):
a solution, maybe after we come back from a break.
All right, we'll be right back. All right, we're back. So we were just talking about the individual approaches to fighting facial recognition, like things you could do to disrupt your own image and/or potentially fool a facial recognition device.
(52:23):
But now we're getting more into bans and regulations, the broader governmental legislative moves that could be made to keep this kind of technology from getting out of control. Yeah. And there was one article I was reading that I thought was pretty straightforward and made a very good case. It was by Evan Selinger and Woodrow Hartzog, published in The New York Times on October 17, 2019. It
(52:45):
was called What Happens When Employers Can Read Your Facial Expressions? That's a great question. So Selinger is a professor of philosophy at the Rochester Institute of Technology, and Hartzog is a professor of law and computer science at Northeastern University. And they are responding to the fact that, of course, many are calling for a ban on this technology. And this is one of
(53:07):
these rare cases left in US politics, at least, where there is actually some bipartisan agreement. Apparently, some right-wing politicians like Jim Jordan have expressed concern. Speaking to NPR in July 2019, he said he thought it was time for a time-out on this technology and that we needed to put safeguards in place before we went forward with developing it. Meanwhile, you've got left-wing
we went forward with developing it. Meanwhile, you've got left
leaders: Alexandria Ocasio-Cortez and Bernie Sanders have called for regulation or bans on facial recognition. I think Sanders announced the ban as part of a criminal justice reform agenda for his presidential campaign. So all over the map, people are throwing up flags and saying, whoa, stop, this is scary. We need to
we stop this is this is scary, We need to
do something about it. And that, at least, is reassuring. Let's just hope that we can maintain that, you know, that there remains bipartisan support, that it doesn't wind up politicized one way or the other. But certainly, the idea of your face not becoming part of a massive database, the idea of not being in a perpetual police lineup,
(54:09):
I feel like that is going to strike a
chord with most demographics in America. Well,
I think it's one of those weird things where there
is some bipartisan support, but there's also just not nearly
enough awareness. So like a few people on different parts
of the political spectrum are all sort of in agreement
about this, like wait a minute, we need to do something,
(54:31):
But it's not a lot of people overall. Right, and
if the only thing you've really heard
has been, say, a pitch from a company that
specializes in this, perhaps with a law enforcement focus,
you just might think, oh, well, that
sounds fine, you know, find missing people and stop criminals. Sure,
I'll sign off on that. Those are good things. These
are good things, but it's not the complete picture, right?
(54:53):
Uh, so the authors here also note that many
local governments have already formally restricted their government agencies, including police,
from using it. Nothing really strong has happened at
the national level, but San Francisco, Oakland, Berkeley, Somerville, Massachusetts,
and some other local areas have put bans
in place. So the authors of this piece, Hartzog and
(55:14):
Selinger here argue that it is not enough just to
limit the ways in which facial recognition is used, or
to, say, restrict government agencies such as police departments from
buying these tools. They say that, unfortunately, the only way
to actually protect ourselves is to enact a complete and
total ban on the technology. Quote, we must ban facial
(55:36):
recognition in both public and private sectors before we grow
so dependent on it that we accept its inevitable harms
as necessary for progress. Perhaps over time, appropriate policies can
be enacted that justify lifting a ban, but we doubt it.
So I think they make a pretty good case,
and we'll get to a little bit more about it
(55:57):
in a second. But you know, you might be wondering,
like if there is actually some bipartisan agreement that facial
recognition could be devastating to our basic liberties, like what's
the hold up? You know, what's the problem? And so
of course, the authors note that, in general, the United
States is very reticent to enact bans on technology,
with one of the few counterexamples being various
(56:18):
types of malware. And I think that's good, because I
think it makes sense to think of facial recognition technology
as a form of cultural malware: it seeps in, makes copies
of itself, you know, you get it as a byproduct
of something you wanted to download. But of course there's
another thing. The authors don't really speculate about motives here,
but obviously one big hurdle is that there's just a
(56:40):
lot of money to be made in this sector, and
even worse, there are significant sunk costs. Like many extremely
talented people and powerful institutions have already devoted significant resources
and time to improving facial recognition software. And we know
that humans do not like abandoning sunk costs. Once you've
(57:00):
already invested in something, you're kind of psychologically stuck
with it. You know, do you want to throw
all that work in the trash now? Right? Right.
You know, companies that have created this kind
of technology are looking for ways to expand, new
markets they can get into, new uses.
And certainly there are uses that in and of
themselves are advantageous, things like making payments, personal security, um,
(57:26):
and just, you know, broadly speaking, the
idea of finding missing persons and catching criminals. Yeah, I
mean, all those things in isolation make
a good case, right. And that's actually their
first main argument. So the authors of this piece outline
three major arguments that are used by the advocates of
facial recognition, and the first one is exactly that. Well,
(57:50):
there might be some harms, but they argue the potential
benefits outweigh the harms. Uh, you know, again, think about
the benefits to law enforcement alone. Think of all the
violent criminals that could be caught. Think of all the
missing persons that could be found. Uh. And on top
of that, think about some of the obvious consumer demand
that there would be for public versions of the software.
I mean, it's the kind of thing that, like, we
(58:11):
would be terrified about the thought of people using it
on us, but might be really excited to use it
on others, you know. It's just like this, uh, this
basic failure to reverse the situation and apply the
same rules you would want applied to yourself to other people.
So under this mindset, instead of being banned, the people
who say, you know, the benefits outweigh the harms, they
(58:32):
would probably say, well, facial recognition should be lightly regulated.
You know, maybe we could require transparency and make sure
consumers are aware when face data is being harvested for
recognition purposes, you know, you can't do it surreptitiously.
And the authors here disagree. They write, quote, notice and
choice has been an abysmal failure. Social media companies, airlines,
(58:54):
and retailers overhyped the short term benefits of facial recognition
while using unreadable privacy policies and vague disclaimers that make
it hard to understand how the technology endangers users' privacy
and freedom. Uh, so, you know, like, does anybody actually
read the end user license agreement? Of course not. No,
(59:14):
I mean, even if we did, would some vague
legal phrasing about ownership of face data cause us to
actually forego participation in technological trends that everybody around us
is adopting? I mean, again, of course not. Even
if the harms vastly outweigh the benefits, the benefits are
immediate and concrete, and the harms are long term and abstract.
(59:38):
Exactly the kinds of cases where we are so bad
at making informed decisions. Like, would you like a
slice of pizza right now? Just be aware that, you
know, it may compromise the integrity of your identity in
some way that's difficult to picture. Now, the next counterargument
they explore is the idea that strong fears
about new technologies are overreactions, and, you know, we've
(01:00:01):
looked at lots of ways that this can absolutely happen.
We've discussed it on this show and also a lot
on Invention. Think about the panic over the eraser. You
remember that? Yes, yes, definitely, the idea that, oh, erasers
exist, all my writings will be erased.
Or remember some of the panics that came with the
advent of photography? Right, yeah, and just the general idea
of future shock, you know, the idea that
(01:00:23):
rapidly advancing technology is overwhelming. And, uh, yeah, that
is a subject unto itself. Yeah. So this argument would
be that things sometimes just seem scary and provoke shock
because they're new, but once we get used to them,
it's great. Uh, the authors disagree. They do not think
this is the case with facial recognition. They argue that
the backlash is not just hyperventilating about something that's unfamiliar.
(01:00:46):
In very concrete ways, facial recognition does have a unique
power to create a world of pervasive automated surveillance that's
disempowering to individuals in almost uncountable ways,
they write, quote, big companies, government agencies, and even your
next door neighbors will seek to deploy it in more places.
(01:01:06):
They'll want to identify and track you. They'll want to
categorize your emotions and identity. They will want to infer
where you might shop, protest, or work, and use that
information to control and manipulate you, to deprive you of opportunities.
It's likely that the technology will be used to police
social norms. People who skip church or jaywalk will be
(01:01:26):
noticed and potentially ostracized. And you'd better start practicing your
most convincing facial expressions; otherwise, during your next job interview,
a computer could code you as a liar or a malcontent. Now,
remember in the last episode when we talked about
these services that are being sold as identifying people's
emotions through facial recognition? There was a major study
(01:01:47):
that really undercut that and said these things are not
very accurate. But that doesn't mean they're not going to
be used. Right. And in terms of policing social norms,
again, go back to the pajamas, because as slightly
hilarious as the pajama case is, it is also frightening, because
it is a firm example of facial recognition technology being
(01:02:07):
used to police social norms. Yes. Uh, and then the
third counterargument they look at is basically the argument
that Bruce Schneier was making when we referenced his article earlier.
It's that facial recognition technology is just one branch of
a broader privacy and civil liberty debate. Um, and we
need to focus on all surveillance, not just this
(01:02:28):
particular technology. So how about, you know, things that identify
you by your gait, the way you move, or
retinal scanning or brain scanning or anything like that? And
I'd say about this one, well, you know, it's true
that other technologies could represent the kind of threat to
privacy and freedom that facial recognition presents, and some of
those are scary and awful too. You know, we're not
(01:02:49):
saying, like, don't ban gait recognition scanning, or,
you know, don't scan my brain while I'm at a restaurant. Yeah, exactly.
I mean, I would say facial recognition is getting special
attention because it's already here, like people are selling these programs. Uh.
And another thing is that, you know, different technologies are
slightly different, and they require different regulatory schemes. You know,
(01:03:12):
the authors here point out that the law singles out automobiles, spyware,
medical devices, and a bunch of other different kinds of
technologies with their own laws and rules. They're not all
covered under one type of law. We've got individual agencies
for like airplanes and for phones and stuff. But then
another thing is that faces are central to our identities.
(01:03:35):
In the first episode, you know, we joked about going
around constantly changing masks, but is this socially feasible? I mean,
be realistic. Like, we need to see each other's faces
in order to see each other. Seeing faces is the
soul of human life. Yeah. One of the aspects
of the whole bummer situation that we've
discussed involving social media and just electronic culture in general
(01:03:58):
is that we don't see each other's faces, not
the living face, not the face that shows expression and humanity.
We just see the manufactured faces. And, uh, we don't
want to live in a world of physically manufactured faces
because we created a technology and allowed it to
get out of hand. Yeah. Uh. And so I think,
often with issues like this, something you run
(01:04:18):
into with a lot of these counterarguments, these
arguments in favor of facial recognition, is that they
fall into a trap of talking
about just what's possible or what's technically true, rather than
what's realistic and what's practical. Like, could you protect people
with opt-in methods where they have to sign a
EULA disclosing that they've surrendered their privacy? I mean, in theory, yes,
(01:04:41):
but in practice, we just know this doesn't work.
You know, we all just click "I agree." And again,
the benefits are immediate and concrete, and the harms are
long term and abstract. Yeah, I want to see what
I'll look like as an old man on my iPhone
right now. I don't care where this software, where this
app, is coming from, or where this facial data
might or might not be going. Yeah, and obviously
(01:05:03):
I'd advise people not to do that. But if you
react to that with the mentality of, well, you signed
the contract, you've got nothing to complain about, that just
doesn't take the experience of human life seriously. Right. And
it's one of these cases of, to
summon the words of William Gibson, technology making
pacts with the devil possible. Things that were only the
(01:05:25):
wild imaginings of people in the past, we bring
to life with technology: making AI demons, making it possible
to sell your face to some faceless entity.
I think that's exactly right. I mean, this is
somehow a case of a horrible fantasy
becoming reality. Um. And, you know, this could be
(01:05:47):
the case with a lot of things. So it's not
just the EULA agreements; it's, you know, about people
arguing about what's technically possible without acknowledging what's realistic.
What are people really going to do with their lives?
Same thing as, you know, you could argue, well, if
you don't like it, you could go around with masks
on or something in order to opt out. I mean,
you could try that. But are people actually, practically, gonna
(01:06:08):
want to live that way? Yeah. I mean, not only
living with a mask, but living among the masks, because,
God, we've talked about this before, just the psychology of
masks and what happens when you wear one, and the
groupthink of mask wearing. And generally, the idea
of encountering a bunch of people wearing masks, or the
same mask, is a frightening proposition, and there are
(01:06:29):
countless historical examples of that. Yeah. So the authors here conclude, quote,
we support a wide-ranging ban on this powerful technology,
but even limited prohibitions on its use in police body cams,
DMV databases, public housing, and schools would be
an important start. And they say the public is ready
for this, and the actions by San Francisco, Somerville, Berkeley,
(01:06:52):
and Oakland show it. Our society does not have to
allow the spread of new technology that endangers our privacy.
And I guess, just speaking for myself, I
am highly convinced by this point of view. I mean, like,
I think if you ultimately, in the future, were really
confident that you wanted to change your mind and move
forward with facial recognition technology, you could lift a ban.
(01:07:14):
But, like, you can't undo it once the technology is
out there, and it's coming really fast. Absolutely, yeah.
I mean, on the other side, yes, you
can think of various situations where you put bans in
place and then something bad happens, something summons up a
great deal of fear in a nation and allows us
to slide back into this situation and say, give up
(01:07:35):
various privacies and rights in the name of feeling a
little less afraid. But that doesn't mean it's not worth
fighting for now, you know, before the fear, because imagine
how much further we would sink into the fear
if we already had all these technologies in place,
eroding and taking away our freedoms. This is a topic
I think is worth spreading the word about. I mean, like,
(01:07:57):
this is something a lot of people just probably aren't
thinking about at all, and it's coming really fast. So, uh,
it's worth raising the awareness. Yeah, yeah, worth
recording three episodes that, I must admit, are not fun episodes.
The middle one was a lot of fun, I thought. Okay,
the middle one was more fun because
we were just talking about the basic biology of facial recognition, and
(01:08:20):
parts of this one were at least entertaining; that
Wired article was a very fun but also, you know,
disturbing read. But this is a troubling topic.
I am more troubled for having researched it
and recorded it. But I think we should be troubled
by things like this, and it
(01:08:40):
gives us the space from which to act, uh,
you know, hopefully just by spreading the word about it
and, you know, getting to a place where
we have some of these protections in place for our
own faces. That's right, don't build the nightmare mask land. Right,
save your face. Do not go gentle into that good
night. Right: rage, rage against the scanning of the face. Yes,
(01:09:04):
I will stress, though: this was the third of a
three-part series, so if you didn't listen to the
other two, go back and listen to them, because it
will help fill in all the pieces for you. Totally.
But also it's worth pointing out that this is such
a bleeding edge issue that even though you'll be listening
to this episode mere days after we recorded it, uh,
there's just gonna be more and more coverage.
There's gonna be, you know, new movements happening, new studies,
(01:09:27):
sadly, new rollouts that are going to meet with controversy.
So, um, just keep that in mind, especially if you
end up coming back to this episode some months from now.
We're in the frenzy period right now. All right, in
the meantime, if you want to check out other episodes
of Stuff to Blow Your Mind, you can find us
wherever you get your podcasts. If you want to just
jump on over to the I Heart listing for the show,
go to stuff to Blow your Mind dot com and
(01:09:48):
you will be redirected. But wherever you get the show,
make sure that you rate, review, and spread the word,
because that's how we keep the show alive. Huge thanks
as always to our excellent audio producer Seth Nicholas Johnson.
If you'd like to get in touch with us with
feedback on this episode or any other, to suggest a topic
for the future, or just to say hello, you can
email us at contact at stuff to Blow your Mind
(01:10:09):
dot com. Stuff to Blow Your Mind is a production
of iHeart Radio's How Stuff Works. For more podcasts from
iHeart Radio, visit the iHeart Radio app, Apple Podcasts,
or wherever you listen to your favorite shows.