April 23, 2025 39 mins

Welcome to our first episode of kill switch, where we’re diving right into the deep end – investigating how police departments are implementing AI technology. Can AI facial recognition be a magical solution to the unreliability of witness identification? Or is it just making things worse? Dexter talks to Douglas MacMillan, a reporter from the Washington Post, who has been tracking the spread of the technology, and where it seems to (repeatedly) break down.

Got something you’re curious about? Hit us up killswitch@kaleidoscope.nyc, or @dexdigi on IG or Bluesky.

Read: Doug’s article, Arrested by AI

Listen: Post Reports podcast episode, Arrested by AI

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:13):
December of twenty twenty, during the pandemic, on a cold winter morning,
a security guard on a train platform on the outskirts
of Saint Louis was working his job and suddenly saw
a man in a security booth who wasn't supposed to
be there. So he approached this man and told him

(00:34):
to leave, and then a verbal altercation ensued and another
man walked up behind him, so he was surrounded by
these two guys who were getting more and more agitated
with him. The next thing he knew, he was assaulted
by these two guys, struck in the head, and when
he was on the ground, they continued to beat him
and hit him in the head and hit him in his chest

(00:56):
and his ribs. The guys ran off, and Michael, the victim,
suffered a concussion that day and he couldn't remember anything.
So what the police were really left with to solve
this case is the same thing that they're left with
in a lot of cases where there's no witnesses.

Speaker 2 (01:15):
A random crime, a victim with no memory, and nobody
seems to have seen anything.

Speaker 1 (01:22):
A video camera was the only witness to this crime.
This kind of fuzzy, blurry surveillance video captured a still
of these perpetrators, and that was basically the only thing
that they had to go on.

Speaker 2 (01:36):
Douglas MacMillan is a reporter for the Washington Post. So Doug,
first off, for real, thanks for being down to talk
with me about all this. Of course, Doug doesn't usually
cover cops or the justice system. His usual beat at
the Post is corporate accountability and technology. But there was
something about this case in Saint Louis that caught his attention.

Speaker 1 (01:56):
There's a lot of crimes, hundreds or thousands of unsolved
cases out there where the only evidence was a camera,
the only evidence was a photo of the perpetrator.
And the potential of this technology is we have this
seemingly magical tool to help solve those crimes.

Speaker 2 (02:18):
The technology he's talking about here is AI facial recognition.
Police around the country are increasingly using this on cases
that they can't find a witness for and this was
one of those cases. The investigation into who had assaulted
Michael Feldman had pretty much gone cold.

Speaker 1 (02:35):
Until about eight months after the incident, we believe, the
police officers decided, oh hey, we have these grainy
surveillance images, let's try to run them through the facial
recognition system and see if that will give us any matches.
And so they did that and they got some matches back.

(02:57):
We believe it was maybe between five and ten
different possible matches. And the one they chose, the one
that they thought looked the most like the perpetrator of
the crime, or one of the two perpetrators of the crime,
was this gentleman named Christopher Gatlin. And then they kind
of were off and running in this case. They had
a suspect. And this was the only thing that police

(03:21):
had to arrest this man, take away his freedom, you know.
It came down to this AI program.

Speaker 2 (03:29):
I mean, because the thing here that of course we're
sort of circling around is that that's not the guy.
Christopher Gatlin is innocent.

Speaker 3 (03:46):
I'm afraid.

Speaker 2 (03:51):
From Kaleidoscope in iHeart Podcasts. This is kill Switch. I'm
Dexter Thomas.

Speaker 3 (03:58):
I'm sorry, I'm sorry. Good bye.

Speaker 1 (04:40):
So they went out investigating Christopher Gatlin and the first
thing they did was print out a picture of his mugshot,
and they took the picture of Chris Gatlin to the
apartment building where Michael Feldman lives and they conducted a
photo lineup with him, and they basically sat Michael Feldman

(05:01):
down and gave him the photos, put them on the
table in front of him, and instructed him to kind
of spend time looking at each one and going through
these photos, take.

Speaker 4 (05:12):
A look through all of them and see if anything.

Speaker 5 (05:15):
Rings a bell.

Speaker 2 (05:17):
Eight months after the assault, the cops visited the victim,
Michael Feldman, at his house, and they brought some pictures with them.
One of those pictures was of Chris Gatlin. And we
know exactly what happened next because it's all on record.

Speaker 1 (05:31):
So we got the body camera video of this
photo lineup, and the officers who conducted this photo lineup
did not follow the proper procedures for basically getting a
fair and impartial identification from this witness. And you can
see throughout the process when Michael Feldman's going through these photos,

(05:53):
these police officers, who are really supposed to be just
standing off to the side and not saying anything, are
kind of coaching him and giving him kind of little
clues throughout the process on which guy they want him
to pick.

Speaker 4 (06:06):
Okay, let's think about before the incident. He went up
and he talked to him a little bit, right? Let's
start to focus on things before you actually got in any
altercation with him, maybe even before you actually initially went.

Speaker 3 (06:20):
To talk to him. Just take a minute kind.

Speaker 4 (06:22):
Of ponder, think.

Speaker 1 (06:24):
And at one point he actually picks up the picture
of Chris Gatlin and puts it aside, and he keeps
on looking through these photos, and at one point he says,
I want to say it's him, pointing to a different guy.

Speaker 3 (06:38):
I want to say... I remember, my memory is...
I want to say it's him.

Speaker 1 (06:47):
When he does that, one of the detectives steps forward
and says, okay. Instead of, you know, accepting that as
an identification, the detective encourages him to continue going and
continue thinking about the interaction. Think about, you know, was
the guy wearing anything? Was he wearing a hat? Did
you recognize anything about his face, his eyes, the clothing

(07:07):
he had?

Speaker 3 (07:10):
I don't remember really.

Speaker 1 (07:11):
I thought he had like a hat on or something, or.

Speaker 4 (07:14):
Stocking, something on. So let's picture these
two guys wearing that, you know, a stocking cap or something.
If you need to use your hands, if you've got
to put hands on the papers.

Speaker 3 (07:23):
That's okay.

Speaker 1 (07:24):
And so you know, he kind of, like, you know,
prods Michael into thinking that he didn't pick the right
person and then going back and kind of changing his mind.
And eventually he does go back and he pulls the
picture of Chris Gatlin out and he points to that
and he says it was him.

Speaker 3 (07:42):
All right.

Speaker 4 (07:43):
I just remember he getting really angry quick.

Speaker 2 (07:46):
Just the mannerism I'm just trying to and the eyes probably.

Speaker 4 (07:53):
How pissed off he got.

Speaker 3 (07:55):
As not to be in that.

Speaker 1 (07:57):
You couldn't stand there.

Speaker 3 (07:59):
I think.

Speaker 5 (08:01):
Okay.

Speaker 2 (08:03):
So police take that identification and they're able to get
a warrant to arrest Chris Gatlin. They have no other
evidence other than that an algorithm picked him out. He
spent seventeen months in jail. So, Doug, I've read about
Chris Gatlin's case, but you've met him in person. What's
he like?

Speaker 1 (08:22):
He's a very sweet, charming guy, father of four kids.
He did not have a criminal record. So the reason
that he was in this system: the facial recognition
database in Saint Louis has over two hundred and fifty
thousand mug shots of people, including people who were pulled

(08:44):
over for traffic stops and arrested for speeding or arrested
for very minor things. Saint Louis has a long history
of over-policing, especially Black, minority, low-income populations, handing
out tickets to people just so that police can meet
their quotas. So Chris, he had had a few different
traffic infractions, and he had one burglary charge from a

(09:08):
few years earlier. It was ultimately dropped by prosecutors. But
that's the reason that he came up, and that's the
reason he was in this system.

Speaker 2 (09:15):
It's incredible that you could be in a system that
would lead you to get arrested based off of a
traffic ticket. I have a completely clean record, with the
one exception that a literal decade ago, I was speeding
to work on the freeway. I was late for work.
I got pulled over. Now, I didn't get arrested, but

(09:35):
had somebody decided to go a little harder on me? Sure,
I suppose they could have. And I'm just imagining myself
being at home one day and police coming to my
door and saying, hey, we're arresting you because we think
you did something that I've never heard of in a
place that I don't.

Speaker 1 (09:54):
Go, because you were in this database.

Speaker 2 (09:56):
Yeah, because I was five miles over the speed limit
one day going to work. Yeah, so maybe this is
a good time to say that all facial recognition technology
isn't the same, or maybe more importantly, that it all
doesn't come from the same place. The program that the
Saint Louis cops use searches the internal databases of the
police department. But there's another very popular tool. It's able

(10:20):
to pull data straight from the Internet. It's called Clearview AI.

Speaker 1 (10:25):
Clearview scrapes together images from Facebook, from LinkedIn, from Venmo accounts,
from news articles, from all of the public web sources
where it can find images with people's faces.

Speaker 2 (10:39):
Clearview is wild, and they've got billions of photos in there.

Speaker 1 (10:44):
Yeah, they say they have billions of images. And there's
a question about whether it's legal for them to scrape
these images and put them in a database, because they
never really got permission from anybody to get these images.

Speaker 2 (10:57):
Clearview AI's website boasts that they've scraped over fifty billion images.
And again, this is just from the general Internet. So
when the police run a search, you could be in there,
even if you've never had a run-in with law enforcement.
Do we even know how many police departments are using this?

Speaker 1 (11:16):
No, it's very hard to tell, because in most places
they're not required to disclose that they're using these tools.
So the best proxy for that number is what the
companies who make this software have announced. Clearview AI has
said that it has thousands. I think they have said over three thousand

(11:38):
police customers. I did a pretty exhaustive public records search
for police departments that are using these tools. We only
came up with a list of about seventy that we
could verify are using it in some way, and
of those, we sought public records on individual cases, and
we could only get individual cases on something like thirty

(12:02):
or forty of them. Getting to the bottom of, you know,
which police departments are using this, how they're using it,
is very difficult because for the most part, there's no
requirement for them to make public how and when they're
using the tools, and that is allowing police to mostly
use these tools in secret.

Speaker 2 (12:20):
But sometimes it's not the machines making the mistakes, it's
the cops. More on that after the break. It's not
too hard to understand why a police department would be

(12:42):
interested in facial recognition tools. It seems pretty logical. I mean,
you're having a hard time cracking a case and
you either don't have any eyewitnesses, or you can't rely
on the witnesses you have, or maybe, like with Michael Feldman,
the victim suffered an injury that affected their memory of
the event, or maybe too much time has passed for
somebody to really remember the details. Eyewitness accounts are full

(13:04):
of problems for law enforcement because human memory is unreliable
and it can be influenced by people's biases. So there's
this potential magical solution. You input a face into
the system, it searches a database and spits out a match.
All you gotta do is go find that person and boom,
case closed.

Speaker 1 (13:23):
So we know that there are at least eight Americans
who have been wrongly arrested due to police misuse of
facial recognition.

Speaker 2 (13:34):
And just to be clear here, when he says wrongly arrested,
that's not just his opinion. What he means is that
the police admitted afterwards that they got the wrong person.

Speaker 1 (13:43):
Interestingly, one of those cases was this man named Jason Vernau,
who is based in Miami. In that case, the computer
got the right person, but the police had fed the
computer an image of the wrong person. So they fed
it an image of who they thought was the perpetrator,
somebody in a bank committing fraud. But it turns
out they were mistaken, and the surveillance image that they

(14:06):
put in the system was just the wrong person to
begin with.

Speaker 2 (14:09):
So that was the one time they got it right.

Speaker 1 (14:11):
Well, yeah, so the computer actually picked the right person,
but the police were relying on this image they got
from the bank and they just pulled an image from
the wrong time.

Speaker 2 (14:22):
So the bank pulled an image from their security cameras
and sent it to the police, and this image was
Jason Vernau, but it was from a completely different time
of the day. The facial recognition software had correctly identified Jason,
but the police failed to make some obvious checks.

Speaker 1 (14:40):
What this case shows was that the police put the
image in their system and it popped up Jason Vernau
and they went out and arrested the guy, rather than
taking a second and saying, okay, we have a suspect.
Let's see if we can prove that Jason Vernau
was tied to this crime. And in the case
of a financial crime, there is a lot of potential evidence there.

(15:02):
There's a check that he supposedly cashed, a fraudulent check. Well,
let's see if the signature on this check matches his.
There are witnesses, there's a bank teller who took the check,
let's see if the bank teller can confirm that was him.
They didn't do any of that. They apparently didn't do any real investigative steps.
They just took the word of the machine and went
out and arrested him. It speaks to kind of this phenomenon

(15:24):
that seems to be happening with police, where they're
kind of imbuing this software
with this magical ability to lead them to the right suspect.
Some researchers call this automation bias: when a computer
is telling you an answer, you're more likely to believe
that that is the correct answer, even at the expense

(15:47):
of your kind of typical due diligence, your typical, you know,
rational brain and just the normal steps that you should
think through and follow before you go out and arrest
somebody, take away somebody's freedom.

Speaker 2 (16:00):
Automation bias is just one kind of bias here, though.
I mean, there was something unique about this particular person though, right,
compared to the other cases. Yeah, he was white, and
that's the one that the computer got right. Yeah.

Speaker 3 (16:14):
Yeah.

Speaker 2 (16:15):
Facial recognition has been shown to often get certain
people wrong, right?

Speaker 1 (16:24):
Yeah. There's two things going on here. One, facial recognition
has struggled with darker complexions, and part of this is
due to how these programs are trained. So when the
first facial recognition algorithms were trained, they were trained on
lighter skin tones. There's a federal research lab, it's called

(16:44):
NIST, that does testing into how well facial recognition
performs in laboratory settings, and they found in twenty nineteen
that certain demographic groups, including African Americans, in certain facial
recognition algorithms could be up to one hundred times more
likely to be mismatched than lighter skin tones. That was

(17:05):
about six years ago, and the industry says, and Clearview
and other companies say, that their algorithms have improved
dramatically over that time, and there's some evidence that they
have gotten better. However, we ultimately don't know how accurate
and how reliable this software is in the way that
law enforcement uses it, because it's never been tested in

(17:27):
the way that law enforcement uses it. The way that
that federal lab tests things, it's looking at basically perfect lighting,
a perfectly framed profile photo in perfect conditions. How does
that algorithm do in those settings? Unlike how police use
these tools, which is, we're going to use this grainy
surveillance image shot at a weird angle from above, usually

(17:50):
in very poor lighting, and usually the face is
partly obscured, or it could be. In the case in
Saint Louis, the guy was actually wearing a COVID mask
that was covering part of his face, and he was wearing
a hood that covered another part of his face. So
this is the typical setting where police are using these tools.

Speaker 2 (18:06):
There's a case where a cartoon came up as a
match at one point, right?

Speaker 1 (18:13):
Yeah. So because Clearview just has these, I guess, web
crawlers that are just scraping the Internet, the things
that come up in the Clearview results sometimes
are bizarre. So in this one case in Ohio, two
of the search results, that I think were in the
top ten of the search results, were Michael Jordan, just
a picture of Michael Jordan. And, by the way,

(18:35):
I reached out to Michael Jordan's rep when I published
this in a story, and she didn't have any comment
on whether he might have been implicated in a crime in Ohio.
And then the other one was, yeah, just an
illustrated cartoon image of a Black man. So it
just makes you, kind of makes you scratch
your head as to, you know, what is this tool

(18:56):
that police are relying on, if it's feeding them, you know,
garbage as results.

Speaker 2 (19:01):
And giving them Michael Jordan committed this crime in the
top ten.

Speaker 1 (19:04):
Right, which right, Yeah.

Speaker 2 (19:06):
Again, a big allegedly here, but I feel pretty sure
they'd have a tough time connecting him to the scene
of whatever that particular crime was.

Speaker 1 (19:15):
Yeah. And if this is such an advanced algorithm, why
can't their computer figure out that's Michael Jordan and just
take him out? And why can't a computer figure out
that that's a cartoon Black man and not a real person?
I just can't. Like, a lot of the people who
put forward facial recognition as this science,
they have sort of cast it in kind of the

(19:36):
cloak of, this is a scientific tool that the police are
now using. A lot of those arguments kind of break
down when you see stuff like this, when you see, oh, well,
your scientific, highly advanced tool, you know, brought back a
picture of, you know, a cartoon Black man. How did
the science arrive at that?

Speaker 2 (19:56):
Facial recognition as a technology is probably a whole other episode.
But if you've ever used Face ID to unlock your iPhone,
you've already used a version of facial recognition on yourself.
In some ways, the technology isn't all that different from
how it started in the nineteen sixties, when researchers were
trying to figure out how to identify facial features like
the bridge of the nose, the edge of the nose,

(20:17):
or the eyes, and measure the distance between them. They'd
store all those measurements as data for a single face.
Back in the sixties, this stuff was pretty rudimentary, but
AI's made it more accurate, because you can train the
systems on more faces, which means theoretically it's more likely
to tell your face from someone else's. But you've probably

(20:38):
had your Face ID fail on you before, and that's
under good conditions. It's just trying to match your face
with, well, your face. If you add a few million
other faces into the mix, the potential for mistakes multiplies
by a lot. But even with those issues, police are
increasingly leaning on this technology. It seems like police are

(21:01):
presenting the fact that facial recognition software has returned a match,
whether it's true or false, as evidence. But that seems,
it seems backwards, right?

Speaker 1 (21:13):
Yeah. Well, not only that, but on the one hand,
they actually acknowledge that facial recognition is not enough to
make a case. They say this in their police rules
and their public statements over and over and over. If
you hear police talk about facial recognition, they say to
the public, this is only an investigative tool. We are

(21:34):
only going to use this to find investigative leads that
we then go out and corroborate. And in some places
that's actually the law. There's six states where police are
actually required to corroborate any leads that they get from facial recognition.
What we found in looking at cases all around the
country was that the police were not doing that. Sometimes
they do, but often they will just take it at

(21:54):
face value that this software hit is enough, and then
they will use that to go out and arrest the person.
I encountered this really surprising thing, which is, I
talked to a number of prosecutors and police who told
me that, well, yeah, we did corroboration. As soon as
we got the name from the software, we went and
we looked at that person's other photos and it visually

(22:16):
looked like the same person. And they said that that
is corroboration, visual corroboration of a match.

Speaker 2 (22:22):
That sounds like you're corroborating evidence on your own, like
you're becoming a witness at that point.

Speaker 1 (22:28):
As a police officer. This thing comes up over and
over, which is that many people look alike, and humans are
very bad at distinguishing between two people who look similar.

Speaker 2 (22:42):
You would think that this is where technology could help,
but AI isn't much better at telling people apart. The
first known case of a wrongful arrest made by AI
was Robert Williams in Detroit in twenty twenty. Doug was
able to get the police interrogation video. Williams's case is
really interesting because we have the interrogation video of this,

(23:05):
and I was just watching this and it's incredible. So,
I know you've seen this video. So this is where,
you know, these two detectives are
sitting in this interrogation room. Rob Williams is sitting across
the table from them, and he's clearly confused. And they
bring out these papers, and I'll play it from here.

Speaker 5 (23:29):
December twenty sixth, around one pm? Where were you.

Speaker 3 (23:37):
Home? Home?

Speaker 5 (23:39):
Can you do me a favor? Is that you?

Speaker 3 (23:43):
You know? No?

Speaker 5 (23:45):
Not even close? No, I'm pissed.

Speaker 3 (23:53):
Keep going.

Speaker 5 (23:55):
Is that you? No, that's not you at all? Not
there either. You can't, y'all can't tell that. I'm one hundred,

(24:15):
so I'm one of the pictures. We actually got facial recognition.
I heard that it was probably facial recognition.

Speaker 3 (24:23):
That is not me.

Speaker 2 (24:24):
That's me on my d. Right, you're, you're smiling, and
I'm smiling too, as bleak as this is. But there's
something almost funny in this exchange.

Speaker 1 (24:37):
Well, because they present the pictures as if it's
evidence in their favor, and then as soon as he
sees them and reacts to them, suddenly it becomes evidence
in his favor, because he's like, are you looking at
the same picture I'm looking at? Because it's clearly not me.
And it's funny. You can kind of hear in the
officer's voice, well, well, we have facial recognition, as

(25:00):
if, you know, as if that somehow, you know, bolsters
his case that this is accurate, as if
that's going to change his mind that the picture that
he's looking at is actually him when he knows it's
not him. As if the phrase facial recognition is itself
kind of evidence of the police being correct.

Speaker 2 (25:19):
You know, he holds a picture up to his face, like, look,
of course this isn't me. What are you talking about?
Forget the software, look at me. The thing that hits
you about this interaction is that when the cop finally
looks at the picture, you can hear the room go
quiet for a second, and nobody's arguing with Robert Williams,
and it kind of sounds like he's agreeing, like it's

(25:41):
the first time he's actually looking at it, like he
had so much confidence in the machine that he didn't
bother to look with his own eyes.

Speaker 1 (25:49):
They by and large are not telling even the people
that they identify and arrest using these tools that they
used them, and that's leading to people either finding out
kind of in offhand ways in interrogation that, oh, well,
you know, the computer picked you. Well, what do you
mean, the computer? A lot of times, they're probably
not finding out at all, which is concerning, because one

(26:12):
of the pillars of our courts and our justice system
is that you need to be able to face your accuser.
And so if your accuser is this algorithm, this computer program,
but you're not even being told that it was used,
let alone given any of the details about how it works.
So all of these things are being kind of kept

(26:34):
vague or sometimes completely kept hidden from the people that
these tools are being used to investigate and ultimately arrest.

Speaker 2 (26:41):
So what does this mean for the future of policing,
and how should we be using these technologies, if at all?
That's after the break. People are being investigated and arrested

(27:07):
and not being told that police are using these facial
recognition tools to find them. You don't need a law
degree to feel that something about that is just kind
of off, and you wouldn't be wrong. Here in the US,
there's a specific set of rules that might apply here,
the Brady rules. I want to talk about that. So

(27:29):
can we talk about how Brady rules can potentially
figure into an arrest that is made with AI facial
recognition as a part of it, or as the sole
part of it?

Speaker 1 (27:41):
Yeah. So basically, when you prosecute somebody, the prosecution is
required by the courts and required by the Constitution to
share any evidence that speaks to the guilt or the
innocence of the person that they're prosecuting, so they're required to
share that with the defense. There's a question now the

(28:03):
courts are grappling with, which is, somebody's identification by
an AI software, should that be brought into the courtroom?
And should the defendant be given a chance
to know everything about that software? And right now this
is playing out in courtrooms across the country.

Speaker 2 (28:21):
Basically, in a manner of speaking, putting
the AI on the witness stand and saying, how reliable
are you, really? Yeah.

Speaker 1 (28:31):
A very good indicator of how that's going to go
is the companies themselves have disclaimers saying this does not
hold up in court. Clearview AI is the one that
I'm very familiar with. They have language in all of
their contracts with police departments saying this is not admissible
evidence in court. And also, basically, please don't ask us

(28:53):
to come into court, because nobody from Clearview AI is
going to come and sit in court and defend this software.
The very companies who make it and advertise it and
market it as this great, amazing technology tool, they would
not even stand behind it.

Speaker 2 (29:05):
In your research, in your reporting, you've seen what look
like, and what defendants are saying are, wrongful arrests based on AI.
The police who made these arrests, or who made
those decisions, have there been any repercussions, any discipline, any
guidance for them that you've seen?

Speaker 1 (29:27):
I mean, the answer is no. As far as I
can tell, the individual officers have not publicly faced any punishment,
although, you know, you never know what happens behind closed doors.
Typically police unions forbid police from publicly talking about punishment
of individuals. Now, we've seen settlements paid out of three

(29:47):
hundred thousand dollars to a few of these people who
were wrongfully arrested. Is this going to change the
behavior of these individuals? There's only six states that have
laws mandating specific disclosures and mandating specific things about how
these tools cannot be used. So, you know, in the vast
majority of the country, police are kind of still just
figuring it out. It's really early days.

Speaker 2 (30:10):
So just to back up here, we've been talking about
the problems with this technology, but what about the upsides?
And it's interesting, because if you ask people how facial recognition
could be used to catch criminals, there's not really any
high-profile cases we can point to. And before you
say January sixth, well, that's complicated.

Speaker 1 (30:31):
A lot of people credit facial recognition with helping to
identify these people and to bring them to justice.
And yes, facial recognition did play a role. However, on
that day, there was a lot of evidence being collected
through social media posts that these people were making on
their own, videos that these people were taking and posting.
I think the federal investigators even used high-tech

(30:54):
methods to grab the cellular location data of everybody who
was in the Capitol that day, and I think they
were able to identify a lot of people that way.
So facial recognition in that case was one of many
tools that were used, which is sort of how it's
designed to be used, you know, not as kind
of the sole investigative lead, but as one of several. So,

(31:14):
you know, other than that, there aren't that many well-known
cases out there that we can point to of,
you know, this was the thing that brought down, you know,
a murder case that we've been trying to solve for
many years. But I bet we will start to hear
about some of that, and I would love to kind
of have more of those case studies we can pick apart.
I think the public deserves to know about both. The

(31:34):
public should also be aware of the good and the
benefit, because as we kind of make laws or think
about regulating how to rein this in, it's going to
be important to understand that balance and to get it right.
I think that there's a danger in rushing to a
blanket ban on this technology.

Speaker 2 (31:56):
I want to play something for you really quick. This
is a really, I think, powerful moment from your podcast,
and I was hoping I could have you speak
to this a little bit here.

Speaker 5 (32:07):
They lean on the technology because I think we are
taught that computers don't make mistakes.

Speaker 4 (32:13):
Humans do.

Speaker 2 (32:15):
Now I'm sitting there like, how can you do this?

Speaker 3 (32:18):
You can't do this?

Speaker 1 (32:19):
Like, no way, how can you just put these charges
on me?

Speaker 3 (32:22):
And I'm telling you that that's not me.

Speaker 2 (32:25):
I know I wasn't in this. So how do I
beat a machine?

Speaker 3 (32:29):
So?

Speaker 2 (32:30):
I mean, I guess my question here is, these,
I'm just gonna call them AI arrests, how do
these arrests affect the people who are being accused?

Speaker 1 (32:41):
That was a clip of our episode of Post Reports,
and we heard these patterns of experience which were very notable.
So, a few things. Wrongful arrests are not a new thing.
The technology did not bring about wrongful arrests. But
usually there's some other incidental relationship
somebody has to a crime before they get into

(33:03):
a wrongful arrest. In these cases, the AI is plucking
you out of thin air, in some cases. And in
one case, this man, Quran Reid, he had never actually
even been to the state of Louisiana, and he
was a resident of Atlanta, so he was literally plucked from
the other side of the country and brought into this
crime that he had literally no proximity to whatsoever. So

(33:24):
what does that do to the human brain when
you are just suddenly, poof, inside of a criminal investigation
that you have no connection to whatsoever? It's like, baseline
is a word that came up over and over. A
lot of people just kind of got stuck on that idea,
even when they sat in jail for one or two
or three weeks, or months in the case of Chris Gatlin,

(33:44):
who was in jail for seventeen months. Some of them
had this feeling of, and I think you just heard
it in the clip, if the computer is telling you
that I did it, then how am I going to
convince you otherwise? How am I going to beat a computer?
Because, you know, it goes back to this thing that
we were talking about before, this automation bias. You know,
he had never heard of that term, but he instinctively realized

(34:09):
that he was going against a power that was much
greater than him, and in some ways probably much greater
than if a witness had picked you for a crime.
It's like, a computer picked you for a crime
and put your name on the line and put your
name into this investigation.

Speaker 2 (34:24):
So it's our belief in the computer, that's what you're
up against.

Speaker 1 (34:28):
You know, many of the people that we talked to
in the cases that are public, many of them feel
like they kind of were the lucky ones. And maybe
they got out because, one of them had a mole
on his face and his lawyer discovered that the guy
in the surveillance image didn't have a mole. Or there
was one woman, Porcha Woodruff, who was eight months pregnant

(34:49):
at the time she was arrested, and, you know, there
was nothing in the interviews with the victims that said
there was a pregnant woman involved. So that's ultimately one of
the reasons that she got out. But are there many
other people who didn't get lucky and might be
behind bars still because facial recognition, you know, wrongfully
put them there?

Speaker 2 (35:06):
I mean, when you put it like that, you start
to make me think that maybe we're not hearing the horror stories.
Those are the success stories.

Speaker 1 (35:14):
This was pretty horrible for these people, and not just them,
in some of the cases. You know, their children,
their young children, watched them get arrested on their front lawns,
and that's trauma.

Speaker 3 (35:25):
Yeah.

Speaker 1 (35:26):
One of the men that we've talked
about before, Robert Williams, who was arrested in Detroit. You know,
his kids watched him get arrested on the front lawn,
and he says, to this day, they still talk about
that incident. One of his daughters, and this is kind
of heartbreaking, every once in a
while comes up to him and says, Daddy, I think
I've solved it. I found the real man who went

(35:47):
and stole those watches. And she pointed to, like, a
cartoon character, as like that was the one, that was
the real guy who stole the watches, and not Daddy.

Speaker 2 (35:54):
Oh my god.

Speaker 1 (35:55):
So these, I mean, these experiences are now baked into
their lives and their childhoods. And that's, you know,
I'm a parent of small kids, and that's a very
sad thing. And yes, they were lucky, probably luckier than
other people whose stories we don't know about, but also
these are horrible experiences.

Speaker 2 (36:14):
I'm not sure what y'all think about this. I'm still
trying to figure that out myself, and I know a
lot of what we've heard sounds pretty terrifying. So for
one of my last questions, I asked Doug what he
thought the next few years were going to look like
with this technology, and he answered in a way that
I didn't really expect.

Speaker 1 (36:33):
We're still at a point of fascination with this technology.
And in places around the country, maybe not in the
big cities, where, I mean, you do see kind of
a skepticism of tech tools and concern about privacy and surveillance,
but in many of the places in this country, you know,
in smaller towns around the country, police are adopting these

(36:56):
things and are excited, and there's a fascination, and they're
being sold to the public as this kind of magical tool,
like in Evansville, Indiana, in Florence, Kentucky, in Saint Johns County, Florida,
these smaller-town places where, you know, the local population
may not know that the police have started to, like,
rely on these tools quite a bit. So a simple

(37:19):
goal for my reporting has just been to kind
of encourage people to ask questions. Go to your
city council meeting. You'll actually learn a lot. You
don't have to sit on the sidelines of this discussion,
because you're part of it. You are a person that
is in front of these surveillance cameras too. So now
there's a risk of you, of everybody in this country,
getting pulled into an investigation due to, you know, a

(37:40):
misuse of this technology. So you should, everybody should, think
about themselves as being part of this discussion about whether
and how we use this technology going forward.

Speaker 2 (38:00):
Thank you so much for listening to our first episode
of the new kill switch, and a very special thanks
to our friends over at the Washington Post podcast, Post Reports,
who helped us with this episode. You can check out
more of Doug's reporting over there. And stay with us, we
have so much more ahead. This show is all about
what's happening right now between humans, between machines, how we're

(38:22):
being shaped by our own creations, and a whole lot
beyond that. So let us know what you think, and
even if there's something you want us to cover, you
can hit us up at kill switch at kaleidoscope dot NYC,
or you can hit me at dexdigi on the
Gram or Bluesky if that's more your thing. kill
switch is hosted by me, Dexter Thomas. It's produced by

(38:42):
Shena Ozaki, Daryl Potts, and Kate Osborne. Our theme song
is by Kyle Murdoch, who also mixed the show. From Kaleidoscope,
our executive producers are Oz Woloshyn, Mangesh Hattikudur, and
Kate Osborne. From iHeart, our executive producers are Katrina Norvell
and Nikki Ettore. That's it for me, catch you on the

(39:03):
next one.
