
October 27, 2022 · 26 mins

A Native American man gets pulled over for driving a nice car, a Black man is arrested in front of his family for a crime he didn’t commit – innocent people are at risk because of racial profiling. But to stop profiling, you have to first identify it, and that’s not as easy as it seems. Liberty and Scott go deep into data in this episode, investigating how data is used against marginalized communities and how it should be used to protect and serve them. They go to the experts to find out which methods are failing, what solutions can mitigate the dangers of facial recognition technology and smart policing, and how we know we’ve succeeded in ending profiling.


Liberty and Scott speak with Craig Watkins, Martin Luther King Jr. Visiting Professor at MIT, and Brandon del Pozo, former NYPD precinct commander and former chief of police in Burlington, Vermont.

 

Data Nation is a production of the MIT Institute for Data, Systems, and Society and Voxtopica.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):


(00:04):
Welcome to Data Nation. I'm Munther Dahleh, and I'm the director of the MIT Institute for Data, Systems, and Society. Today on Data Nation, Liberty and Scott are examining racial profiling in policing.
[MUSIC PLAYING]

In 2021 in Rapid City, South Dakota,

(00:27):
a police officer named Jeffrey Otto was on duty when he reported a car for suspicious behavior. It was a Mercedes. And he told another officer that he wanted to, quote, "keep an eye on this car because it's a young Native male driving this really nice car." End quote. Otto eventually saw the driver step out of the car.

(00:49):
And when he realized it wasn't actually an Indigenous man, he claimed the car didn't need to be pulled over anymore. And thankfully, after the event, the Rapid City Police Department removed Otto from the force. While Indigenous advocacy groups saw Otto's termination as a step in the right direction, they also argued that this was not an isolated incident.

(01:13):
This wasn't the first time. And in a quote to CNN, they said, quote, "the officer's alleged comments represent a culture of discrimination towards Native Americans in the city's police department." End quote. This is just one example of one city in the United States where communities are feeling burdened and unsafe from the consequences of racial profiling.

(01:35):
So how do we solve this? Well, first, you have to identify who's profiling and how much they are. The problem is, figuring this out isn't that easy. And you certainly can't figure it out with the status quo methods. The data is telling us the wrong thing. One of the current tests used to identify patterns of racial profiling is called benchmarking.

(01:56):
Benchmarking compares the percentage of stops for people of a specific race with the percentage of that minority in that geographical area. And an example of one of these is the 1999 report on the New York City police's policy of stop-and-frisk. At the time, officers were patrolling private residential buildings and stopping individuals

(02:19):
that they believed were trespassing. And in 1999, 25.6% of the city's population was Black. But Black individuals comprised 50.6% of all the persons the police stopped. And the New York Attorney General used benchmarking to determine if the practice was being used unconstitutionally.

(02:41):
And later, in 2013, a federal judge ruled that stop-and-frisk had indeed been used in an unconstitutional manner. So basically, it seems like benchmarking revealed important data in this case. But does benchmarking work every time? Is it always accurate? Not really. The problem with benchmarking is that it relies on census data, which can give a really misleading view

(03:06):
because census data doesn't account for non-residents. If a major interstate runs through a city or if it's an area that brings in a lot of tourists or visitors, how are all these non-residents captured in that benchmark?
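To make the benchmarking arithmetic concrete, here is a minimal sketch in Python using the 1999 New York City figures quoted above. It is an illustration of the method as described in the episode, not a tool used by any agency mentioned here, and the resident-population denominator is exactly the assumption that non-resident drivers and visitors can undermine.

```python
def benchmark_disparity(stop_share: float, population_share: float) -> float:
    """Ratio of a group's share of police stops to its share of the
    local census population. 1.0 means stops mirror residency; values
    well above 1.0 are the pattern benchmarking is meant to flag."""
    return stop_share / population_share

# Figures cited in the episode for New York City in 1999: Black residents
# were 25.6% of the population but 50.6% of the people stopped.
ratio = benchmark_disparity(stop_share=0.506, population_share=0.256)
print(f"Stops vs. population share: {ratio:.2f}x")  # roughly 1.98x

# The caveat from the episode: the denominator is census (resident) data,
# so commuters, tourists, and through-traffic never appear in it, which can
# distort the ratio for reasons unrelated to bias.
```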
So what we really want to figure out is, what are the reliable methods to identify patterns in profiling,

(03:26):
and how can knowing these patterns help communities and police departments stop profiling from happening? So we're going to bring these questions to Chief Brandon del Pozo. Chief del Pozo served 19 years in the NYPD, where he commanded the 6th and 50th precincts, as well as units in internal affairs.

What I like to do is start out with a definition. How do we define the status quo in terms

(03:49):
of determining if police are racially profiling? Like, what is it? And what's your definition, and how do we look at it for an audience that doesn't necessarily understand what it is or is looking for a specific definition?

Broadly, we're talking about race informing suspicion and suspicion giving police a reason to act, right? So broadly construed, racial profiling

(04:10):
is the inappropriate, the unjust, in many cases unlawful, use of race as a factor informing criminal suspicion against someone. And then that empowering the police under statutes and law to then act, whether it's to conduct a stop, to conduct a frisk or a search, or make an arrest. So in regulating racial profiling,

(04:31):
it's a matter of policy and also understanding the data, what to include and exclude. We have to settle on, or at least understand or have, a working definition of to what extent race as a data point can and cannot inform criminal suspicion.

How do you quantify that, or how do you identify that when you're two parts-- training officers,

(04:52):
interacting with them, or determining whether or not that was a factor?

I came up with a continuum where, when the race is involved in the instance of the crime-- hey, there's a group of Klansmen setting crosses on fire at Black churches, or hey, there's a pattern robbery. There was a pattern robbery in a city in New Jersey where the folks were coming in and doing smash and grabs

(05:16):
of Colombian jewelry stores. And they were Colombian suspects. And the victims knew it, because they're like, we're Colombian. The people were screaming at us in Spanish. It was a Colombian accent. This was an inside-the-community job. And they were going and doing these jewelry store robberies. If a cop is driving by a jewelry store and there's four people peering into the jewelry store,

(05:36):
it makes a difference for suspicion whether they're Colombian or not, right? Whether they look Hispanic or not. So at one level, you have the instance where race informed suspicion in the instance, as a fairly straightforward data point. And then on the other side, you have cases where race informs suspicion in a very generalized, unsupportable way that makes cultural and, frankly,

(05:58):
biased assumptions about a race of people. So those are the two extremes. And you want cops to understand that their work has to be in the first set of definitions and has to steer clear of the second set of definitions. And that bias of many kinds is leading you towards that second set.

And I think that sets the framework for what we are looking for and what we are trying to determine

(06:20):
of how we stop this. So in the police right now-- you're a chief. What do you do to stop that?

You don't want to tell cops that they can never use the physical description of a person to know who to stop and who not to stop, right? Because knowing who to stop is also knowing who not to stop. And the police need to be very judicious about seizing people,

(06:42):
telling them they're not free to leave, and investigating them and frisking them. And if a physical description of somebody being dark skinned, light skinned, Black, Hispanic, Asian helps narrow down the suspects so that innocent people aren't caught in that web of investigation, that's good, right? But then we have this other group of generalizations that we've seen all the way from the New Jersey Turnpike

(07:03):
in the '90s and TSA and immigration all the way up through now, where you're just saying this class of people, Black people, quote unquote, Hispanic people, quote unquote, Arabs-- the suspicion is by virtue of being in the class, not by the instance at hand. And it's how fruitful a police officer's searches are at the roadside, right?

(07:24):
So if a police officer stops a car, does the investigation, decides to conduct a search, that they have their adequate level of suspicion-- in theory, if race didn't inform his calculus in a biased way, all stops would be equally fruitful across races. If race wasn't artificially and unfoundedly informing suspicion as a matter of bias,

(07:45):
let's say for argument's sake, the stops would be fruitful 3/4 of the time: you'd get the drugs, the guns, the contraband. You'd get the person wanted on a warrant, right? If you're conducting searches at the roadside and you're finding that your searches are much more likely to be fruitful for white drivers than for Black drivers, you've got to step back and ask yourself at that point,

(08:06):
what is it about Black drivers that's giving the police officer what amounts to being unfounded suspicion a greater percentage of the time? If you follow what I'm saying. And when you see that disparity in what you can call the hit rate, and that disparity is by race, then you're getting towards a metric that, at least in traffic enforcement, you can use to detect and attack bias.
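What del Pozo calls the hit rate is what researchers often call an outcome test: if suspicion were being applied evenly, roadside searches should be roughly equally fruitful across groups. A minimal sketch of that comparison, with invented counts and an arbitrary flagging threshold (not data or thresholds from any department discussed in the episode):

```python
# Outcome-test sketch: if suspicion were applied evenly, roadside searches
# should be roughly equally "fruitful" across groups. Counts are invented.
searches = {"white drivers": 400, "Black drivers": 400}
fruitful = {"white drivers": 300, "Black drivers": 180}  # contraband, warrant, etc. found

hit_rate = {group: fruitful[group] / searches[group] for group in searches}
for group, rate in hit_rate.items():
    print(f"{group}: {rate:.1%} of searches were fruitful")

# A large gap in hit rates by perceived race is the early-warning signal
# described in the episode. The 10-point threshold here is arbitrary; a real
# system would also test whether the gap is statistically meaningful.
gap = max(hit_rate.values()) - min(hit_rate.values())
if gap > 0.10:
    print(f"Hit-rate gap of {gap:.0%}: review what is counting as suspicion")
```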

(08:30):
So in practice, it makes sense that we're using the wrong metric. We're trying to use the census data to figure out if there's racial profiling. And really, we should be using this hit rate. Is there any issue in the practicality of actually deploying the concept of hit rate in real time in a police department to see if somebody is racially biased?

So one of the challenges is that, what counts as a hit,

(08:52):
right? So I was finding in my old police department in Burlington that until marijuana was decriminalized, so much of the marijuana offenses were the basis for stops and suspicion. And so you had community activists saying, I think with some accuracy-- listen, if you're getting serious, if the hit rates between Blacks and whites are the same, but most of the hit rates for Blacks

(09:15):
are marijuana and the hit rates for whites are more serious crimes, are you defining down the hit rate in a trivial way? It's interesting. There's debates about why don't licenses have race on them, like Vermont licenses don't have race. And people would say, well, we need to know the race because we need to collect that data. The counter argument is, what the cop perceives the race to be

(09:37):
is what matters, because the perception is what yields the actions, right? So a cop may perceive a Pacific Islander to be Hispanic. I think that's fine, because if they're recording what they perceive their race to be, then they're going to record what they've based their actions on. But to answer your question, you need an administrative data collection system that captures all of these variables from the moment of the stop, the reason

(09:59):
for the stop, all the way through what was found, whether a warning was given, the length of detention. And there's a lot of disparate data collection in American policing.
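The administrative data collection del Pozo describes amounts to a per-stop record that follows each encounter from the reason for the stop through its outcome. A hedged sketch of what such a record might look like, with field names invented here purely for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StopRecord:
    """One stop, captured end to end, so hit rates and disparities can
    later be computed from the same table. Field names are illustrative."""
    timestamp: str                   # when the stop began
    location: str                    # precinct, beat, or coordinates
    reason_for_stop: str             # e.g. "equipment violation"
    perceived_race: str              # what the officer perceived, since that drives the action
    search_conducted: bool
    contraband_found: Optional[str]  # None if the search found nothing
    outcome: str                     # "warning", "citation", "arrest", or "no action"
    detention_minutes: int           # length of the detention

# Example record; every value is invented for illustration.
stop = StopRecord(
    timestamp="2022-06-01T22:14",
    location="Precinct 6",
    reason_for_stop="expired registration",
    perceived_race="Black",
    search_conducted=True,
    contraband_found=None,
    outcome="warning",
    detention_minutes=23,
)
print(stop.reason_for_stop, stop.outcome)
```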
In the big police departments, we even see issues with implementing smart policing. You look at facial recognition, which we know has a huge racial bias to it. Should we be using smart policing? Is this a good thing?

(10:20):
Do the benefits outweigh the harms? And how do we implement it, even in the big police departments, in a way that it's going to be just and lawful for our citizens?

Let's say that there is a pattern rapist that's on his or her third sexual assault, and the police got some surveillance footage. And they know it's a-- whatever the race-- a white male, a Pacific Islander, an Asian.

(10:42):
But they know that they're looking for the person. Old school policing is creating a dragnet, where you're going and you're in that neighborhood, and you're looking for people that match the description. And if it's past a certain time at night and you're walking 50 feet behind a woman, they're going to stop you. So it's a complex issue. And I'm trying to work folks through the dynamics, because you don't want facial recognition to supply suspicion

(11:05):
in and of itself to justify an arrest. You get terrible outcomes. And as you said, they are biased. But if you ban facial recognition, you get the reversion, in cases of acute public safety problems, to casting a wide net that will draw innocent people into it. And so another thing to think about is, do you want to use facial recognition for petty crimes, for solving thefts, for solving misdemeanors,

(11:26):
or do you want to reserve it for serious patterned felonies where there's a person on the loose that has to be quickly identified?

I will tell you, I'm one of those people that writes and stands on my soapbox and says, I think facial recognition should be banned. I think it should be banned until we can come up with these types of methods of what type of crime should we actually be using facial recognition for

(11:48):
and these very specific boundaries around it. But it seems that, with a lot of smart policing, it's like people go, I'm so excited. We can now implement facial recognition in the police. They throw it in there as soon as the technology is available.

That's a great point. So there's two things going on, at least. One is everything you said, I completely agree with. And they put too much stock and have too much faith

(12:11):
in the technology. And they use the technology to replace other more careful and deliberate and less error-prone processes. And they also use it as an excuse for not having to be accountable to the community and rely on the community to be a participant in public safety, right? Most shootings, most robberies, are done in a community where people kind of know what's going on.

(12:33):
I've said this to a group of police chiefs in Atlanta, and it went over like a lead balloon. But I said, if you are using technology as a replacement for getting the trust of the community and for community partnership and public safety, you're using technology wrong. So to say, I don't need someone to be a good witness and give a good description. I don't need somebody to agree to testify that they witnessed something.

(12:54):
I don't need somebody to call me up and leave a tip and say, I think Joe did that robbery, or Joe did that shooting because they don't trust the police-- to say, that doesn't matter. I have facial recognition. I can do this myself. It's a terrible miscarriage of American policing.

As the policing industry moves to establish new practices to identify racial profiling and patterns,

(13:16):
they still face many new challenges. Chief del Pozo brought up the issue of facial recognition. Right now, millions of surveillance cameras are being used in private homes and public spaces, and local law enforcement wants to utilize these smart security devices. Google Nest Doorbell and Arlo Essential Wired Video Doorbell are devices that include built-in facial recognition,

(13:37):
and they can provide data to police. In fact, Amazon's Ring is partnered with almost 2,000 law enforcement agencies to allow officers to ask Ring users to share their video recordings without use of a warrant. While it's helpful to law enforcement agencies, the sharing of data raises concerns about privacy rights in the digital age. And it raises ethical concerns about the ability

(13:58):
of law enforcement to utilize this data properly. For police to get your DNA and fingerprints, you need to be arrested. But for facial recognition to be used, you just need to be in public. Another big problem when it comes to facial recognition technology is that the algorithm just doesn't always get it right. Actually, it was faulty facial recognition that led to a completely innocent African-American

(14:21):
Detroit man being arrested for a crime that he did not commit. Robert Williams was pulling up to his house after work when a Detroit Police car pulled in behind him, and they blocked his SUV in case he tried to escape. The police then proceeded to arrest Williams on his front lawn in front of his wife and his two

(14:42):
little girls. No one would tell him what crime he had committed. And after 18 hours in police custody, Williams was finally connected with a defense attorney, who explained that someone had stolen watches from a store in Detroit. The store owner had sent the surveillance footage to the Detroit Police Department,

(15:02):
and the blurry image was then sent to the Michigan State Police. They ran facial recognition on this blurry photo. It matched with Robert Williams' old driver's license photo. And so he was arrested for stealing watches at a store he had never even been inside. Following this, lawyers at the ACLU

(15:24):
filed a lawsuit against the police department and won. But Williams argues that winning the case didn't undo the trauma inflicted on his family by a failure in facial recognition and, really, in policing. And no one can ever erase the image of their father being handcuffed from his daughters' minds.

(15:46):
And this family will forever remember the day that, really, an algorithm took their dad away. This particular example raised a lot of red flags in how facial recognition and other emerging technologies are used in policing. It seems like things can go wrong. And when they go wrong, the consequences are severe. It leaves us with the question, how can we move forward with this technology, and can it be used

(16:08):
in a way that produces positive outcomes for the communities? We decided to talk with Craig Watkins of MIT. Craig is the Martin Luther King, Jr. Visiting Professor at MIT and the founding director of the Institute for Media Innovation. Craig is leading a team that is addressing the issue of artificial intelligence and systemic racism.

(16:28):
So Craig, we were just discussing the case in Detroit where Robert Williams was arrested for a crime he did not commit. And it was all because of faulty facial recognition. So is this really just a one-off incident, or is there potential for this same incident to happen over and over and become really problematic?

(16:51):
Prior to this, technology activists, critics of these technologies, have long argued that facial recognition is seriously problematic insofar as it's less predictable in terms of accuracy when it comes to people with darker skin tones. It's less accurate when it comes to women versus men.

(17:11):
That is, recognizing female faces versus male faces. And this, of course, has to do with training sets, the data around which these technologies have been developed. And so this has led to some serious problems, not only in theory, but in the real material world, in terms of identifying people falsely, accusing them of crimes that they did not commit,

(17:31):
and putting them and their families through the horror of having to go through that experience.
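One way to make this concrete is to audit a face-matching system's error rates separately for each demographic group rather than reporting a single overall accuracy. A minimal sketch, assuming you already have labeled verification trials tagged with a demographic group; the data below is invented for illustration:

```python
from collections import defaultdict

# Each verification trial: (demographic_group, same_person_truth, model_said_match).
# A real audit would use thousands of trials per group; these are invented.
trials = [
    ("lighter-skinned men", False, False),
    ("lighter-skinned men", True, True),
    ("darker-skinned women", False, True),   # a false match
    ("darker-skinned women", True, False),   # a missed match
]

stats = defaultdict(lambda: {"false_matches": 0, "nonmatch_trials": 0})
for group, same_person, predicted_match in trials:
    if not same_person:                      # only non-matching pairs can yield false matches
        stats[group]["nonmatch_trials"] += 1
        if predicted_match:
            stats[group]["false_matches"] += 1

for group, s in stats.items():
    if s["nonmatch_trials"]:
        fmr = s["false_matches"] / s["nonmatch_trials"]
        print(f"{group}: false match rate {fmr:.0%} over {s['nonmatch_trials']} non-match trials")

# Reporting one aggregate accuracy number hides exactly the disparity described
# here: error rates that are higher for darker skin tones and for women.
```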
Do you think technology is a net positive in policing these days? And can it be a net positive in policing going forward?

Yeah, that's an interesting question. So I think clearly, the promise of technology is real, insofar as the promise being that it can make any form of labor,

(17:54):
policing included, more efficient, more data-informed. And so those are all things that have the potential to be a net positive in terms of being able to accelerate access to information-- importantly, not only accelerate access to information, but access to insights that can be generated from that information.

(18:15):
And that can be good for, again, any form of human endeavor, policing included. Of course, the question is, to what extent do we create procedures, do we create policies and practices that allow us to realize that net positive, to realize the potential positive impacts

(18:36):
of these systems? And we can't assume that, just by virtue of them existing and by virtue of us adopting and deploying them, they will generate these net benefits. And unless we're very intentional in terms of how we adopt, how we deploy these systems, it can lead to impacts that don't lead to a net positive.

So you brought up an interesting term, net benefit.

(18:58):
Net benefit means there's some good, there's some bad, but there's more good than bad. When we talk about justice in policing, we want to eliminate as much bad as possible, right? I mean, you don't want a false positive, because false positives could end someone up in jail or on death row. How do you integrate this technology and these data science techniques in a way that the public will understand the net benefits?

(19:19):
In my research, in conversations that I've had with community stakeholders, for example, there are some communities, communities of color, communities made up primarily of working class or poor individuals. There is a history, and an understandable one, of a high degree of mistrust when it comes to policing and police-related work.

(19:41):
And so for them, it's going to require some additional effort. It's going to require a strategic effort in terms of convincing them that these systems can lead to a net benefit. So the question is, net benefit for whom? And I think some populations might say, well, certainly these technologies are a net benefit maybe

(20:03):
for some segments of society. They may be a net benefit for those who are in charge of managing these systems and deploying police resources. But for the communities who bear the brunt of these systems, who are disproportionately profiled and surveilled as a result of these systems, there's just no possible way that they could see these technologies as a net benefit in any way, shape, form, or fashion.

(20:25):
So we've heard the term "predictive policing." Can you expound on what that really means and what it looks like in the real world?

I mean, basically, it's this idea of being able to-- as a result of collecting massive amounts of data, processing that data, identifying patterns in that data-- that police might be able to predict where a crime might take place.

(20:49):
A particular type of crime, in a particular location, at a particular time, by a certain type of person. These are all of the kinds of things that are now being promised with the technology. But it's this idea that you can marshal data, that you can organize data in such a way and develop algorithms that allow you to identify patterns

(21:10):
that give you a greater degree of precision in terms of trying to identify where you think a certain type of crime might happen. And what that suggests is that if you can predict what crime is going to happen, then you allocate resources that perhaps prevent that crime from happening. A greater police presence in certain areas at certain times

(21:31):
of the day, those kinds of things.
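In its simplest place-based form, the prediction step Watkins describes is little more than counting past incidents by location and time and ranking the busiest cells. A minimal sketch with invented data (not any real vendor's algorithm) shows how mechanical that step is, and why the choice of response matters more than the math:

```python
from collections import Counter

# Historical incident reports: (grid_cell, hour_of_day). Invented for illustration.
incidents = [
    ("cell_14", 22), ("cell_14", 23), ("cell_14", 22),
    ("cell_07", 9), ("cell_02", 22), ("cell_14", 21),
]

# The "prediction" is just ranking cell/hour combinations by past counts.
hotspots = Counter(incidents).most_common(3)
for (cell, hour), count in hotspots:
    print(f"{cell} around {hour}:00 -- {count} past incidents")

# The caution in the episode: the inputs are past police records, so the ranking
# partly reflects where enforcement already happened, and the open question is
# whether the response to a "hotspot" is more stops and patrols or non-punitive
# interventions in those same places.
```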
So I got my start modeling and predicting what people are going to do, such as, is this person going to eat a Big Mac or are they going to eat at Taco Bell? I can be 80% sure they're going to eat a Big Mac, which means 20% of the time, I'm wrong. So in this example, I think the math is good and useful, but useful in a low stakes game. So when you talk about predictive policing,

(21:53):
is the math and science good enough for a high stakes game when it comes to somebody's liberty and freedom?

So let's say you subscribe to the model of predictive policing. I think currently, the way in which that model is being practiced, what I would argue is problematic about it is, if you can coordinate data in such a way that it can give you that kind of insight,

(22:14):
that could be useful. But then the question becomes, what actions do you take as a result of those predictive analytics that are being generated? And who's deciding what those actions are, right? Who's deciding? Are they purely punitive? So then, are you just going in and surveilling and criminalizing and punishing certain segments of the population?

(22:35):
Or an alternative is that if we are predicting certain patterns and we see certain trends as it relates to certain kinds of crimes that we're concerned with, what can we do to perhaps create conditions and situations where this no longer becomes a prevalent problem? So there are other ways to respond to that data and those analytics, rather than simply with

(22:57):
these punitive measures. More police on the ground, more aggressive policing, stop and frisk, those kinds of things. And that requires a very different kind of mindset, a very different approach to policing.

How do we know that we've succeeded in ending these biased practices? How or when do we know that we've done a good job and hit

(23:17):
success?

Yeah, that's a great question. And there are many possible answers. One that I will give is when the communities that historically have felt the most pain from policing-related behaviors and processes, when they begin to trust the system, when they begin to say, OK, these technologies are creating safer,

(23:42):
more secure communities, safer and more secure lived experiences for us, then I think that could be one indicator or one marker of success, when you get buy-in from them. My vision is, hopefully within the next five years or so-- maybe this will happen, maybe it won't. But in the next five years, that any police department,

(24:02):
any decision that they're making as it relates to what kind of new technology they want to adopt, what kind of new technology they want to integrate into their practices, that before doing that, they would engage the community in a meaningful and substantive way. And I think when we get there, that will be a marker of success.

(24:22):
That'll be a marker of where we've turned, I think, a really important corner, because that speaks to a greater degree of accountability, a greater degree of partnership and engagement between police and the communities that they tend to work in.

We've heard in this episode, there are several dilemmas that have to be

(24:44):
addressed when it comes to eliminating racial profiling. From detecting profiling patterns to navigating new policing technology, the solutions aren't exactly simple. When it comes to detecting bad actors, it's clear that we need simple and understandable metrics that serve as an early warning system and flag

(25:04):
individual officers who are actively profiling. And as we work to remove these bad actors, we really need to keep an eye on the new technology and algorithms that are being introduced to policing. New methods like smart and predictive policing need to be added very carefully so that communities

(25:24):
feel they can work with police and actually be served by them, not be victims of police when they've committed no crime. I guess the good thing is that even though these solutions may be complex, there are solutions to move forward. And like Craig said, we know we've hit success when affected communities begin

(25:45):
to trust the system again. That is the real marker of success.

Thanks so much for listening to this episode of Data Nation. This podcast is brought to you by MIT's Institute for Data, Systems, and Society. And if you want to learn more about what IDSS does, please follow us at MITIDSS on Twitter,

(26:07):
or visit our website at idss.mit.edu.
[MUSIC PLAYING]