
October 24, 2023 48 mins

This week’s guest is Rebecca Balebako,  Founder and Principal Consultant at Balebako Privacy Engineer, where she enables data-driven organizations to build the privacy features that their customers love. In our conversation, we discuss all things privacy red teaming, including: how to disambiguate adversarial privacy tests from other software development tests; the importance of privacy-by-infrastructure; why privacy maturity influences the benefits received from investing in privacy red teaming; and why any database that identifies vulnerable populations should consider adversarial privacy as a form of protection.

We also discuss the 23andMe security incident that took place in October 2023 and affected over 1 million Ashkenazi Jews (a genealogical ethnic group). Rebecca brings to light how Privacy Red Teaming and privacy threat modeling may have prevented this incident. As we wrap up the episode, Rebecca gives her advice to Engineering Managers looking to set up a Privacy Red Team and shares key resources.

Topics Covered:

  • How Rebecca switched from software development to a focus on privacy & adversarial privacy testing
  • What motivated Debra to shift left from her legal training to privacy engineering
  • What 'adversarial privacy tests' are; why they're important; and how they differ from other software development tests
  • Defining 'Privacy Red Teams' (a type of adversarial privacy test) & what differentiates them from 'Security Red Teams'
  • Why Privacy Red Teams are best for orgs with mature privacy programs
  • The 3 steps for conducting a Privacy Red Team attack
  • How a Red Team differs from other privacy tests like conducting a vulnerability analysis or managing a bug bounty program
  • How 23andMe's recent data leak, affecting 1 million Ashkenazi Jews, may have been avoided via Privacy Red Team testing
  • How BigTech companies are staffing up their Privacy Red Teams
  • Frugal ways for small and mid-sized organizations to approach adversarial privacy testing
  • The future of Privacy Red Teaming and whether we should upskill security engineers or train privacy engineers on adversarial testing
  • Advice for Engineering Managers who seek to set up a Privacy Red Team for the first time
  • Rebecca's Red Teaming resources for the audience





Privado.ai
Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.

Shifting Privacy Left Media
Where privacy engineers gather, share, & learn

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Copyright © 2022 - 2024 Principled LLC. All rights reserved.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Rebecca Balebako (00:01):
You really need to make sure that if your privacy red team finds something, that you have total leadership buy-in and they're going to make sure it gets fixed. I think that the politics, the funding of it are so crucial to a successful privacy red team.

Debra J Farber (00:21):
Hello, I am Debra J Farber. Welcome to The Shifting Privacy Left podcast, where we talk about embedding privacy by design and default into the engineering function to prevent privacy harms to humans and to prevent dystopia. Each week, we'll bring you unique discussions with global privacy technologists and innovators working at the

(00:41):
bleeding edge of privacy research and emerging technologies, standards, business models, and ecosystems. Today, I'm delighted to welcome my next guest, Rebecca Balebako,

(01:03):
founder of Balebako Privacy Engineer, where she enables data-driven organizations to build the privacy features that their customers love. She has over 15 years of experience in privacy engineering, research, and education, with a PhD in engineering and public policy from Carnegie Mellon University, a master's in software engineering from Harvard

(01:25):
University as well, and her privacy research papers have been cited over 2,000 times. Rebecca previously worked for Google on privacy testing and also spent time at RAND Corporation doing nonpartisan analysis of privacy regulations. Rebecca has taught privacy engineering as adjunct faculty at Carnegie Mellon University, where she shared her knowledge

(01:48):
and passion for privacy with the next generation of engineers. Today, we're going to be talking about all things privacy red teaming.

Rebecca Balebako (01:59):
Thank you so much.
I'm excited to be here.

Debra J Farber (02:02):
I'm excited to have you here.
Red teaming is definitely something that's near and dear to my heart as a privacy professional because my other half, he's a pentester, red teamer, works in bug bounty, and offensive security stuff. So, we're going to talk about privacy red teaming today, but there's tangents with security, so it's a topic that I'm excited

(02:22):
about.

Rebecca Balebako (02:23):
Yeah, me too.

Debra J Farber (02:25):
So why don't we kick it off where you tell us a little bit about yourself; and as an engineer, how did you get interested in privacy, and what led you to found Balebako Privacy Engineer?

Rebecca Balebako (02:37):
I was a software engineer for about a decade and I really got to the point where just coding wasn't cutting it for me. I wanted to work on something that was more at the intersection of policy or humanities and really delved into what society wants and needs. It was at this point I discovered some of the work that

(02:58):
Lorrie Cranor - Professor Lorrie Cranor - at Carnegie Mellon was doing on Usable Privacy; and, from there, the PhD program with her as my advisor. So, if you aren't familiar with her, there's a previous podcast, Debra, that you've done with her, so listeners can go back and listen to Lorrie Cranor's podcast. She's amazing, and ever since then, I've been doing privacy

(03:21):
engineering work, really trying to find that intersection of engineering, regulation, technology, and society; and really felt that by creating my own company, I could have the most impact and work with a broad range of companies. So that's why I am where I am right now.

Debra J Farber (03:39):
That's awesome.
I love the blend of skills; and, like myself, just one area is not enough. You want to connect the dots across the entire market, the regulation, what's driving things forward in the industry.

Rebecca Balebako (03:54):
That's right, because you have a background in
law, correct?

[Debra (03:57):
Correct, yeah] But, you are now doing a podcast that's really about the technological aspects. How do we shift it left? How do we get it into the engineering aspects?
What got you to do that shift?

Debra J Farber (04:09):
Oh gosh.
There's so many reasons.
I could do a whole podcast episode on that. I think, for me, it's that I wanted to do more tactical work after law school and there weren't any - in 2005, there were very few, if any, privacy law focuses at law firms. It was more sectorally handled, and going into operationalizing privacy was really a great opportunity because businesses

(04:32):
needed people to be hands-on and, like, create processes and procedures and all that for privacy in their organizations, by law. So, they were just all too glad to have someone who was, like, interested in taking that on at the time, because this was so new.
As the industry got more complex - it wasn't only about

(04:52):
cookies and small aspects of privacy - then it was looking at how personal data flows through organizations in a safe and privacy-preserving way. Then, just through that process, I learned more about the software development lifecycle to round out my understanding of how systems get developed and product gets shipped; and so, I

(05:14):
just got more and more fascinated with that.
At one point - it's totally off topic from red teaming here - I actually thought, "Oh, I really like this... could I go back to school for electrical engineering?" Because I was working in the telecommunications space for a little bit right before law school and I found it fascinating. I found out that I would have to go back and get a bachelor's

(05:34):
degree again and spend another five years just on another bachelor's, and I just felt like it was too long after college and just before law school. It just felt like too long to get another bachelor's. So, I didn't go that direction. But, my whole career ended up being a shift left into more technical work. I felt like laws weren't driving things fast enough.

(05:55):
It wasn't actually curtailingthe behavior of companies fast
enough for me and for my brain.
I could see where things aregoing and it took way longer
than I wanted it to for themarket to play out and go that
direction.
I found that working on thetech stuff, you could actually
be working on cutting edge, butalso help with the guard rails

(06:16):
and make sure we're bringing itto market in an ethical way,
rather than working oncompliance - or governance,
risk, compliance, legal.
It's changing, but the way it'sbeen has been very siloed, and
I thought that what would reallydrive things forward is privacy
technology, privacy enhancingtechnologies, and strategies for
DevOps; and, you know it wasobvious to me after just years

(06:39):
of working in this space thatthat's really where we could
make some immediate changes -that we don't have to wait for
many years like legislation,which is important but is too
slow for me.

Rebecca Balebako (06:51):
Yeah, I mean you really speak to me there. I'm a big fan of privacy-by-infrastructure. Let's build it into the technology. Let's get it in the tech stack. Let's not just rely on compliance, but let's build it in, and then it can be replicated. I totally agree with you; there are a lot of benefits to shifting privacy left.

Debra J Farber (07:11):
Yeah, and then I'm just a really curious person. So, I think that these deep dives into different areas of privacy technology on the show, for me, it's a treat to be able to ask all the questions I have. I'm just glad other people find those questions interesting. But, let's get back to you. We're here today to talk about privacy red teaming, but first I want to get to the broader topic of adversarial privacy

(07:34):
tests because I think that's an area that you focus on in your work. Can you give us an overview of what 'adversarial privacy tests' are, why they're important, and how we can disambiguate them from some other tests for software development?

Rebecca Balebako (07:53):
Yeah, absolutely.
Thank you.
Well, so 'privacy red teaming' is sort of the hot term right now, but it's just one type of adversarial privacy test. I think we're going to get more into the details of when a privacy red team doesn't encompass all the types of adversarial privacy tests, but basically they are tests where

(08:15):
you're deliberately modeling amotivated adversary that is
trying to get personal data fromyour organization.
So really, the key things herethat might differentiate it from
other types of privacy testsare the adversary.
It's not just a mistake in thesystem or it not being designed
right.
If there's a motivatedadversary, there's personal

(08:37):
data, and it's a test.
The reason I really want toemphasize test is because it's a
way to test the system in asafe way.
You are trying to attack yourorganization's privacy, but
you're doing it in a way thatwill not actually cause harm to
any users or any people in yourdata set.
Yeah, so adversary.
You're thinking through what issomeone going to try and do

(09:00):
that would be bad to the peoplewho are in my data set and how
are they going to do it withthat data, and then you're
actually trying to run that testand find out if it's possible.
That's like a broad definitionof adversarial privacy tests and
I think if we keep 1)adversary, 2) personal data, and
3) test all in mind, it's goingto help us differentiate it
from a lot of other types oftesting and privacy versus
security and so on.

Debra J Farber (09:22):
Yeah, that makes a lot of sense.
I'm wondering, is anadversarial privacy test based
on the output of a threat modelthat you're creating, like the
threat actors and all that, oris that part of the adversarial
privacy testing process -defining who the threat actors
are and threat modeling?

Rebecca Balebako (09:40):
I've seen it work both ways, where I've seen
organizations who have thethreat model already and then
they realize, "Oh, we probablyneed to do some specific
adversarial privacy testing.
" But it can also happen wherepeople haven't really thought
through their adversaries yet,specifically their privacy
adversaries, and so the threatmodeling becomes a part of, and

(10:03):
process in, doing the privacyred teaming or the adversarial
privacy test.
It's part and parcel, but whichone you realize you need to do
first, kind of depends.

Debra J Farber (10:13):
Okay, that makes sense.
I would imagine that, sincethis is still a relatively new
concept in organizations, thatbest practices in setting this
up are still being determined.

Rebecca Balebako (10:22):
Oh, yeah.
Thanks for saying that becauseI think it's really important to
say that.
There are not standards and thevarious organizations that are
doing privacy red teaminghaven't really gotten together
to define and document all ofthis.
I mean, we're trying.
We're trying to move forwardand create these clear processes

(10:43):
, but it's not an old field witha lot of standards and
processes already.

Debra J Farber (10:53):
Which also makes it an opportunity.

[Rebecca (10:53):
Exactly].
We talked about adversarial privacy tests. How would you define a 'privacy red team'?

Rebecca Balebako (10:58):
So, a privacy red team is one particular type
of adversarial privacy test; and with the privacy red team test,

(11:06):
you're going to have 1) a very specific scope; 2) a specific adversary; and 3) a goal in mind.
A privacy red team will try totest your incident management,
your defenses, and run throughan entire scenario to hit a
target.
A red team attack shouldpartially test your data breach
playbook.
It should partially testwhether your incident management

(11:30):
sees this potential attackgoing on.
And also it's going to bedifferent than a vulnerability
scan.
A vulnerability scan might trya whole bunch of different
things whether or not it getsthem to a specific goal; whereas
a privacy red team, much like asecurity red team, you have a
goal in mind and you want to dowhatever it takes to get there.

(11:52):
"Can I re identify the peoplein this data set?
Can I figure out who the mayorof Boston is in this data set?
You have this specific goal andso you try to get there.
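To make that kind of goal concrete, here is a minimal, hypothetical sketch of a linkage-style re-identification test in Python. The tables, column names, and quasi-identifiers are invented for illustration; a real exercise would, as discussed later in the episode, run against a synthetic or otherwise safe copy of the data and stay within an agreed scope.

```python
# Minimal sketch of a linkage-style re-identification test.
# All data and column names are hypothetical; run such tests only against
# synthetic or sanctioned data, inside an agreed red-team scope.
import pandas as pd

# "De-identified" records: direct identifiers removed, quasi-identifiers kept.
deidentified = pd.DataFrame({
    "zip": ["02134", "02134", "94103"],
    "birth_year": [1971, 1985, 1990],
    "sex": ["M", "F", "F"],
    "diagnosis": ["hypertension", "asthma", "diabetes"],
})

# Public auxiliary data an adversary could plausibly obtain (e.g., a voter roll).
public_register = pd.DataFrame({
    "name": ["Pat Doe", "Sam Roe"],
    "zip": ["02134", "94103"],
    "birth_year": [1971, 1990],
    "sex": ["M", "F"],
})

# Join on quasi-identifiers; a record is re-identified when it matches exactly one person.
linked = deidentified.merge(public_register, on=["zip", "birth_year", "sex"])
match_counts = linked.groupby(["zip", "birth_year", "sex"])["name"].transform("nunique")
reidentified = linked[match_counts == 1]

print(f"Re-identified {len(reidentified)} of {len(deidentified)} 'anonymous' records")
print(reidentified[["name", "diagnosis"]])
```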

Debra J Farber (12:01):
That makes a lot of sense.
You just mentioned how privacyred teams are similar to
security.
Red teams borrow the name fromsecurity.
How are they different?

Rebecca Balebako (12:12):
There is a fair amount of overlap between privacy red teams and security red teams. I think one of the main differences is... there are a couple of differences, but it's basically that the motivation of the adversary might be different in a privacy

(12:33):
threat model than in a securitythreat model.
The adversary is specificallygoing to be looking at personal
data and they're going to takeadvantage of features that are
working as intended.
We've seen this with some ofthe recent data breaches.
Basically, features that aredesigned to share data, a
privacy red team might takeadvantage of that; whereas the

(12:54):
security red team might notconsider that a vulnerability
because it's working as intended.
Of course, that really dependson the red team and their goals.
I say this with full respect forsecurity experts because there
are so many systemvulnerabilities and patches and
so on going on that I think it'sreally hard for security

(13:15):
experts to also keep in mind thesliver of vulnerabilities that
are specific to privacy.
Or, at least, not all securityexperts have that training and
similarly, not all privacyexperts (I certainly don't) have
that level of skill torecognize the latest security
vulnerabilities and Windowspatches that are needed.

(13:36):
There's a slightly differentskill set and also this
difference between themotivation and aiming for
personal data and features thatare working as intended.

Debra J Farber (13:46):
I want to underscore something that you
just said, and that's privacyred teaming is about personal
data specifically.
That brings up a bunch ofthings that I know you've talked
about on your blog.
You're really talking aboutprotecting a person, the
information that is connected toa person, as opposed to
protecting a system or anetwork; and so, those different

(14:07):
goals are just so... it seems nuanced, but they're actually vastly different things that you want to protect. So, you make a lot of sense in how security teams would be looking at things differently. I just know from conversations with security folks, especially hackers, sometimes, if they're not privacy-knowledgeable, they kind of conflate privacy with confidentiality anyway, which is

(14:30):
an aim of security: "Oh, you have a breach, so therefore you have a privacy breach of data" - and that's what they think privacy is, security for protecting data from breaches.

Rebecca Balebako (14:42):
To dive into that a little more, I think a
privacy red team isn'tnecessarily going to try and
attack your company's financials.
It's going to try to attack thecompany's data that they have
about people.
Right?
It's a different thing.
It's not going to try andattack the organization's

(15:03):
resources or proprietaryinformation.
It's going to be looking at thedata about people.

Debra J Farber (15:09):
Exactly! So, why should organizations invest in
privacy red teams?
I know it now sounds maybe alittle obvious, but what are
some of the benefits that can berealized?

Rebecca Balebako (15:18):
Privacy red teams give you a really unique
perspective of the entire threadof an attack.
It can chain together a wholeseries of features working as
intended, features not workingas intended, to come up with
something novel - informationthat you wouldn't necessarily

(15:38):
get in another way.
It's also going to be veryrealistic, furthermore, it's
going to be ethical, or itshould be ethical.
I do want to say, though, thereare lots of benefits to privacy
red teams, but I don't thinkall organizations should invest
in privacy red teams.

Debra J Farber (15:56):
Okay, why is that?

Rebecca Balebako (15:58):
I think privacy red teams are useful for
organizations that already havea pretty mature privacy
organization, and so there aresome privacy maturity models you
can look up online.
If your system is still in the'ad hoc' or you don't really
have your complete privacysystem defined, then privacy red

(16:22):
teaming is too easy and tooexpensive.
There's other low hanging fruityou can do first like get your
privacy organization to besystematic, to be clear, to
build in those protectionsfirst; and then, you should come
back and test everything thatyou've put in place.
I think privacy red teams giveyou this incredibly realistic

(16:46):
understanding of a chain ofevents, but you do have to have
a pretty mature privacyorganization in order to make
that test worthwhile and useful.

Debra J Farber (16:55):
That makes a lot of sense.
I think later on we'll talk about smaller-sized companies and maybe what they can do. So, I don't want to lose that thread, but it makes sense that if you were immature in privacy and started a red team, what would your scope be, and how would you even be able to take... if you don't even have a process to intake the results of the

(17:16):
test and then fix them, then what's the point in doing it? I understand what you're saying. You need a certain level of maturity.

Rebecca Balebako (17:23):
One of the other benefits of privacy red
team is that you're testing yourincident response.
If you don't have an incidentresponse team, there's no blue
team for the red team to testagainst, which, honestly, I
think that leads us into thenatural question of what is a
red team?

Debra J Farber (17:42):
and what is the blue team?
Where does the term come from?

Rebecca Balebako (17:46):
We are borrowing it from security and I
largely use privacy red teamsbased on the security world.
But even before that, it was amilitary term where a military
would have a red team.
So, people within that militarypretend to be the enemy and
pretend to attack their soldiers.
Red team would be the pretendbad guys and the blue team would

(18:10):
be the good guys, and it givesyou a more realistic way of
testing how does your blue teamrespond when it's being attacked
.
That's largely where the termred team comes from.
This idea of having an incidentresponse team as your blue team
is a really important part ofthe concept and the realistic

(18:30):
attack, the adversary; it's allvery important part based on
this historical meaning and useof the term.

Debra J Farber (18:38):
I read that it's 'red team' because it was in
relation to - during the ColdWar era - in relation to Russia.
Have you heard that?

Rebecca Balebako (18:48):
I have heard that.

Debra J Farber (18:50):
Is that true or is it just folklore?

Rebecca Balebako (18:53):
I don't have the sources on that, so it could
be true.
It sounds true, but I don'tknow.

Debra J Farber (19:00):
Okay, well, just putting that thought out there
to others - that's aninteresting factoid, it may or
may not be true.
So, what exactly do privacy redteamers do?
Obviously, we talked about thebuckets of things, the outcomes
we want from them.
But, what are the types ofapproaches they might take, or
some examples of what theyactually might be coding up or

(19:21):
doing?

Rebecca Balebako (19:23):
There are three things.
There's three steps to a privacy red team attack, and it's 1) prepare, 2) attack, and 3) communicate; the attack part in the middle gets the most hype. It sounds the most exciting and sexy, like, "Oh, you're going to code up a way to get into the system, or you're going

(19:43):
to re-identify the data that's already been supposedly de-identified." Those are the types of attacks you can do, but actually that's the smallest part of a privacy red team exercise. First, you have to prepare, and so, that's defining the scope. Which adversary are you going to model?

(20:04):
What are their motivations? What can they actually do? And, you cannot leave that scope. And then, you have to think through, "How can we do this attack ethically?", because it could be real users. Are you actually going to create, like, a different database with a synthetic set of users to run the attack against

(20:25):
? Like, what are you going to do to make sure that this attack provides the most benefit while causing the least harm? That's all in the preparation, and that takes time. Then you run your attack, and it could be, like, pretending to be an insider and re-identifying a database. It could be a bunch of different things.

(20:45):
But then, you have to communicate what the attack actually found, and you have to do this in a very clear way so that anyone reading it understands why they should care; this can be hard. The communication has to be at both the very detailed level of, like, "Here are the steps we took that created this chain where

(21:08):
we were successful or not successful," and here is why you should care. This is why it's actually bad - and that's sort of the high level, putting it all together so that the leader of the organization understands why they should fix all these things, like what really is going on here.

Debra J Farber (21:27):
What I'm hearing you say is that, besides having
some technical skills, redteamers really need to be
excellent communicators.

Rebecca Balebako (21:34):
Absolutely, or someone on the team should be a
very good communicator, butactually, at the same time, red
team is team.
There is a team in 'red team,'and so one of the things while
you're doing the attack isyou're likely to meet your team
every day and tell them what youworked on that day, what worked
and what didn't work, and planthe next day's work.

(21:57):
So, you do need to be able to summarize to your team and communicate what you've been doing. It's not really work that speaks for itself. So, as opposed to software engineering, where you write your code, it passes the test, people can look at it and they're like, "Oh yeah, you did great work." With red teaming, you really have to be able to communicate it.

Debra J Farber (22:19):
Yeah, because the decisions will be made, it
sounds like, based on thatcommunication.

Rebecca Balebako (22:24):
Yeah, how convincing you are.

Debra J Farber (22:27):
Right.
Well then, that brings me up tothe next question, which I'd
love for you to unpack this forus, how leveraging a red team is
different from other privacytests like conducting a
vulnerability analysis, forinstance, or managing a bug
bounty program, where I knowit's also important to have good

(22:48):
communication skills.

Rebecca Balebako (22:50):
Yeah, absolutely.
I think the reason we'refocusing on red teaming is
because it's kind of a hot termright now and people are
wondering what it is and somecompanies are hiring for it; but
it doesn't necessarily meanthat a privacy red team is the
right thing for yourorganization.
A vulnerability analysis isalso a really great way to do

(23:11):
privacy testing.
What a vulnerability analysis will do is perhaps look at an entire product or feature and scan through all the potential things that could be wrong from a privacy perspective. Imagine there's, like, a whole bunch of doors, and in a vulnerability analysis you're going to go try to open each one of them, and then at the

(23:34):
end you're going to have a list of all the doors that could be opened. Whereas in a privacy red team attack, as soon as you can open a door and get in, you're going to use it to continue with the attack to get to your goal. So, you might not necessarily do the same scan of trying to open all the doors. You might notice, "Hey, that door looked like it would

(23:55):
probably be easy to open," but we didn't actually test it. It's a little bit different in terms of scope.

Debra J Farber (24:04):
I might be asking you this question earlier than anticipated, but based on your response there, that brings up bug bounty programs and whether or not you think bug bounties are going to be a thing in the future. Previously, I've been excited because I thought that this could be a real opportunity. Successes in security show that you can crowdsource for finding

(24:25):
bugs in software and only pay on performance, and so it's been really great for security teams to gain hacking resources outside of their organizations and only pay as bugs are found. But, I just heard you say that you might have to keep going further in scope to see if the door is open.

(24:46):
You want to still go further. That to me sounds like you would need to have internal people who have permission to do that, and that you might want to keep that within your organization and not have external hackers try to find privacy issues or penetrate that deeply.

Rebecca Balebako (25:05):
I think it's going to really depend on your
organization and their goals.
I think there are some hurdles in the privacy community. Some problems that we... no, not problems, opportunities for us to solve before we're going to see successful privacy bug bounties. I think largely it's because...

(25:26):
privacy bug bounties, as you said, they seem great. You can just crowdsource it. You only have to pay when someone successfully finds a vulnerability. You don't have to have a full red team coming into your company and penetrating it. It does have a lot of problems, though: bug bounties are

(25:48):
unexpectedly expensive for privacy, because you need to evaluate any bugs that come in, because each bug is documented risk. It's telling you about a potential problem. If you do not have a quick way to assess how bad that vulnerability is, you could spend a lot of resources going

(26:12):
back and forth and trying to figure out whether this should be fixed or not. So I'm just going to give a... I am going to make up an example of a company or a feature. Let's say you are a photo sharing app and you're like, "You know what, we delete photos after seven days, so we protect your privacy, don't worry, it's just, like, temporary photos."

(26:33):
And then you get a report that's like, "You know what? My photo is still here, so you didn't delete it in seven days." And then the company has to go look at it and they're like, "Well, actually, you saved it to your favorites, and so when you do that, we assume that you want it retained for longer." And the user's like, "Well, I know I can delete it myself if I

(26:55):
save it." Then you start this back and forth of, like, was it a communication thing? Was it a feature thing? It can be really hard for something like that. Like, the company built the feature assuming that, like, if you save it to your favorites, then you don't want it automatically deleted; the customer assumed that it was

(27:16):
going to be automatically deleted. It's not really clear whether the company should sort of, like, jump up and stop all its other product work and fix that privacy issue, or how to fix it. So, the security community has a security vulnerability scoring system and people argue about it.

(27:37):
They don't all love it, but at least they have it and the community has more or less agreed on it. We don't really have a privacy bug vulnerability score, and so that's an opportunity for the community to get together and develop some standards around that. Because once we have that, then companies can, if

(27:59):
they were to have a privacy bug bounty, more quickly use these standards that are agreed upon, assess how bad a given vulnerability is, and then make a decision. Otherwise, they're just sitting on risk and it takes a lot of work to say whether they're going to work on it or not.
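Since, as Rebecca notes, no agreed privacy vulnerability score exists, here is one purely illustrative sketch of what a simple in-house triage rubric might look like in Python. The factors, scales, and weights are invented for the example; they are not an industry standard and not anything Rebecca or her clients use.

```python
# Purely illustrative privacy-bug triage rubric; the factors and weights are
# invented for this example and are not an agreed industry standard (none exists yet).
from dataclasses import dataclass

@dataclass
class PrivacyBugReport:
    identifiability: int      # 0 = aggregate only .. 3 = directly identifies a person
    data_sensitivity: int     # 0 = public info .. 3 = health, genetics, ethnicity, etc.
    population_at_risk: int   # 0 = single user .. 3 = vulnerable groups at scale
    ease_of_exploit: int      # 0 = needs insider access .. 3 = trivial for any outsider

def severity_score(report: PrivacyBugReport) -> float:
    """Return a 0-10 score so triage can compare incoming reports quickly."""
    weights = {
        "identifiability": 0.35,
        "data_sensitivity": 0.30,
        "population_at_risk": 0.20,
        "ease_of_exploit": 0.15,
    }
    raw = (
        weights["identifiability"] * report.identifiability
        + weights["data_sensitivity"] * report.data_sensitivity
        + weights["population_at_risk"] * report.population_at_risk
        + weights["ease_of_exploit"] * report.ease_of_exploit
    )
    return round(raw / 3 * 10, 1)  # normalize the 0-3 factor scales onto 0-10

# Example: a report that a "deleted" photo is still retrievable by its owner.
report = PrivacyBugReport(identifiability=1, data_sensitivity=2,
                          population_at_risk=1, ease_of_exploit=0)
print(severity_score(report))  # prints 3.8 -> worth fixing, but not a drop-everything issue
```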

Debra J Farber (28:19):
You don't even know the size of it, because how
do you assess the size of the risk? So, that makes it tough. What work needs to happen in the industry or within companies in order for bug bounties to be realistic, then? Is it just having standards? That would be... obviously that'd be huge. I don't want to say 'just'; obviously that's a huge... But would that be the hurdle, or are there some others

(28:41):
?

Rebecca Balebako (28:42):
There's some other hurdles because privacy
has grown a lot from legal.

[Debra (28:48):
It has.
It's true.
] And it sometimes has the - how do I put this? You're the lawyer - the risk-averse nature to documenting potential problems; whereas the security community, the engineering community, is much more like: write down the problem, document it, and then we can decide whether to fix it.

(29:09):
I think when we're sort ofsaying, "don't write down the
problems, I don't even want toknow the problems, if that's the
culture that exists, I'm nottrying to blame employers, right
, but it's just a differentculture.
It's a fine culture, but itdoesn't really work for a
privacy bug bounty.

Debra J Farber (29:27):
It's a risk-based culture that doesn't
scale well to engineering.

Rebecca Balebako (29:33):
It's just a different culture and if that's
where your organization is, it'sgoing to be really hard to
overcome those antibodies tomake a privacy bug bounty
effective.

Debra J Farber (29:43):
That's fascinating.
You've given me a lot to thinkabout, but I think that makes a
lot of sense.
Okay, let's get away from bugbounties for a little bit and
get back to red teaming.
What are some examples ofbreaches that might have been
prevented by adversarial privacytesting, either in terms of red
teaming, or vulnerabilityscans, or privacy- related,

(30:04):
obviously.

Rebecca Balebako (30:05):
I think anytime you see something in the
news that seems like a privacy leak, but the company is actually saying, "Oh, we haven't detected a security data breach," then it's probably something where a privacy red team would
have identified it.

Debra J Farber (30:21):
I have an example, I think.
Have you heard about the recent 23andMe security incident? It was a credential stuffing attack where they found a list of 1 million Ashkenazi Jews, and I think also a list of Chinese users, I believe, and all of their connected trees and relatives if you've

(30:41):
shared with other people. They were able to, in this credential stuffing attack, get a list of all those users, their email addresses or whatnot. But then, 23andMe was like, "Oh well, an incident was detected but, whatever, it wasn't us, you know, we didn't suffer a breach," and it was very focused on the legal definition of a breach.

(31:02):
I am an Ashkenazi Jew, so for me, with what's going on in the Middle East right now, this is actually kind of scary to know that, even though I wasn't affected in the attack itself, people I was connected to on 23andMe were; and therefore, I've been alerted by 23andMe that my info has been exposed. But, they're like, "It's not our breach, we didn't do it, you

(31:25):
might be affected." This was their communication. So, a privacy red team might have been able to surface a risk
like that?

Rebecca Balebako (31:33):
Before I talk about the technical aspects of
that, I just want to say that particular incident makes me so sad. I mean, as you said, there's anti-Semitism, which is horrible. There are the events going on. I mean, it's October 2023, for anyone who listens to this podcast later. Anyone who even saw the headlines that, like, "Oh, lists

(31:55):
of different ethnicities are available, of Jewish people are available on the dark web" - that's already scary. People feel scared. [Debra: Yeah, Hitler would love a list like that. You know?] Yeah, and the emotions are there, and 23andMe has kind of missed the boat on acknowledging that there's real fear here.

(32:16):
So, I just want to acknowledge sort of that emotional aspect of it, and now I'm going to talk a little bit about sort of the technical aspects. Between you and me, let's call it a breach, even if they don't.

Debra J Farber (32:30):
Exactly.
It's a breach of trust.
It's a breach that I have with 23andMe, after all the representations they've made over the years of how they keep genetic data separately from... that it can't be breached because they keep it completely separately from your identity - and then they blame the user: "Oh, change to strong passwords and don't reuse them on the web."

(32:55):
" Right, well, I did everything.
Right, and I'm still being told.
Like you know, I've beenaffected.
And they wouldn't even mentionthat it was my Ashkenazi Jewish
information that was compromised.
They kept that out of thecommunication, too.
It was only through newsarticles that I was able to put
those things together, and sothe comms was.
.
.
I'm like really pissed at thembecause they've eroded trust

(33:18):
with me over the years now. Right? So, a breach is a breach.

Rebecca Balebako (33:27):
I think one of the reasons a privacy red team may have caught this, where potentially a security team may not, is it's this combination of the credential stuffing as well as using a feature as intended. So, it has a 'find people with similar DNA to me' feature. That's how people are finding out about half brothers and sisters

(33:51):
that they didn't know about. There are all sorts of interesting privacy implications of that feature, but that feature is working as intended. So, it's the combination with the credential stuffing. Right? So, I think anytime you have an organization that doesn't match their authorization and authentication strength to the

(34:13):
sensitivity of the data, which is what you have here, right - like, it's just a username and password and it's super sensitive information about vulnerable populations. It's super scary to see that it's on the dark web. There was a mismatch in that design at the company, and I

(34:33):
think they didn't think about it because, "Hey, the feature is working as intended," or, as you said, they're blaming the user, like, "Oh, your password wasn't strong enough."

Debra J Farber (34:41):
To that point, they made two-factor authentication optional, and I guess, in many ways, there might have been a discussion that went, "Oh well, let's give the user the option. This way they're not forced to go through a gating process that takes away from the design and makes it harder to use." Right, you want usable products, so don't give users too many

(35:04):
hurdles from a log-in perspective; like, let's give them the option if they want more security on their account, right. But you could see here how that - like, I'm just kind of almost a bystander here and I'm affected. So there was some more threat modeling that needed to be done for privacy, and if they had made that mandatory, perhaps that would have been a better control to have in place instead of

(35:24):
leaving it to the user to assess the risk for themselves and determine, given how sensitive the data is, whether or not they want to use 2FA. I know I just kind of threw this example at you.

Rebecca Balebako (35:35):
It's a real example.
It's a good one.
I mean good in terms of likehighlights.
But, it is so sad.

Debra J Farber (35:45):
It makes me angry, obviously, because I'm, like, affected, too; but it makes me angry in that they didn't think that that feature... it seems to me that anything that's not the genetic data itself - like, the actual genetic data they classified as super sensitive or whatever and have done all of the infrastructure implementation on that - but they just kind of, like, left the door wide open on certain other areas, like the

(36:09):
Jewish people, for instance, being both an ethnicity and a religion, and just not thinking about it in those terms, where now our ethnicity has been added to a list somewhere on the dark web with all our data. So, you know, that's scary.

Rebecca Balebako (36:21):
Debra, actually, I had to look it up,
when I heard about this breach,what Ashkenazi is.
If you want to explain tolisteners a little bit.

Debra J Farber (36:35):
Oh yeah! Thank you for that, because I am making assumptions that people know what I am talking about. Yeah, so the Jewish people... I think there's three types, maybe there's four, but there's three that I know of, ethnically - like, in our mitochondrial DNA, going back to Jewish mitochondrial women - that are their own ethnic groups, that have actual DNA lineage that is different from other people. You could track the lineage of the different Jewish people

(36:56):
based on this.
Ashkenazi Jews are the onesduring the diaspora that kind of
came into Europe and havelighter skin as a result of many
years in Europe and maybeintermarriage and whatnot.
There's other groups that aremore Middle Eastern and have
always stayed there or have gonethrough Spain and darker
features, and so that'sSephardic and Mizrahi and a lot

(37:19):
of the Mizrahi Jews I believelive in Israel - I actually
don't really know many becausemany still live in the Middle
East.
Then, I think there might be afourth group that's smaller,
that I don't know of.
I think Ashkenazi is one of thelargest groups.
So, most of the Jews fromEastern Europe are sprawled
around to all those countrieswere Ashkenazi.
So am I.
So, I know plenty of PersianJews.
They're typically Sephardic andjust slightly different

(37:43):
religious customs as well thathave followed the fact that they
have lived all across the world, and so regionally there's some
cultural differences to thesegroups, but in terms of lineage
you could trace each of thesegroups back to, like I said,
long lineage of Jewish peopleover hundreds of years,

(38:04):
thousands of years.

Rebecca Balebako (38:05):
Yeah, thank you.
Thank you for explaining that, because I think, if we maybe even take it up a level: anytime your database reveals groups of vulnerable populations - and I will include Ashkenazi Jews in those vulnerable populations, because there is so much

(38:25):
anti-Semitism, but it could also be, like, identifying Black churches in the U.S., or whatever. Anytime your database lets you figure out and identify these people who are historically more likely to face violence or disproportionate harm, you have to start thinking about adversarial privacy.

(38:45):
You have to think aboutmotivated adversaries who are
gonna try to cause harm topeople and how can you protect
them.

Debra J Farber (38:52):
And then thinking of privacy harms.
That's the other thing.
You're not necessarily thinkingabout the confidentiality,
integrity, and availability ofsystems here.
Right?
Instead, you're thinking aboutsurveillance, will people feel
we've invaded their privacybecause we know too much about
them or because all of thedifferent Solove Privacy Harms
should be thought about.

Rebecca Balebako (39:12):
Absolutely.

Debra J Farber (39:13):
So, it's just a different outcome that you're
focused on.
There's a growing trend that Isee in the privacy engineering
space, where big tech companiesseem to be the first ones that
are building and deployingprivacy red teams, but they're
also some of the most, let's say, notorious privacy violators in
terms of fines they've had andmaybe some of their past
approaches.
So, we're talking Meta, TikTok,Apple and Google all have

(39:38):
privacy red teams, at leaststate that they're building in
that space.
So, does this mean that privacyred teams are ineffective if
it's privacy violators that areusing them, or is it more that
they are starting a great trendand they have the resources and
this is making their practicesmore effective?

Rebecca Balebako (39:58):
I think we have to be very careful about
the direction of causation here.
It's probably because of thosefines that many of these
companies have more matureprivacy organizations.
As I mentioned earlier, ifyou're still at the ad hoc
stages of your privacyorganization, privacy red team
probably isn't right for you.
These companies that have facedregulatory scrutiny, they've

(40:22):
been required to upgrade theirsystems.
They're a more mature privacyorganization and that's when an
organization should startthinking about privacy red teams
.
It's not that privacy red teamsdon't work, but it's just that
you need to be pretty mature,and a lot of companies become

(40:44):
mature because of those finesand because of those regulations
.

Debra J Farber (40:48):
Awesome.
Well, not awesome, but that wasa helpful response.
I was gonna ask you if therewas a frugal way to set up a
privacy red team in a smaller,mid-sized org; but, I guess for
this purpose, let's also say notonly is it small and mid-sized,
but that it's got a matureprivacy team.
Maybe it's just not anenterprise.
How would a company go aboutthis if they don't have a lot of

(41:09):
money, but they do have amature privacy team that's
smaller and mid-sized?
How do you see that beingimplemented or what are some
best practices to think about?

Rebecca Balebako (41:19):
There's a lot of other privacy testing that
can come before privacy redteaming and add a lot of value.
If you have all that in placeand you know you want to do
privacy red teaming because youknow there's adversaries out
there and they're motivated toattack your data, one thing you
can do is train some of yoursecurity folks in privacy.

(41:40):
Then, if they can push some chunk of their time into privacy and if they can get that privacy mindset, that's a more affordable way to start something like adversarial privacy testing. I think there are other types of privacy tests I would recommend first... so, for example, I know Privado is the sponsor of this podcast

(42:02):
and they have the staticanalysis of your code, and they
can give you results.
That's just like the kind ofconstant monitoring and privacy
testing that's not adversarial,but might be a really good first
step.
It might actually be cheaperthan implementing a privacy red
team.

[Debra (42:16):
That makes sense too].
If you know you need a privacyred team, but you don't have a
lot of cash, then maybe thinkabout training some security
folks.
Or maybe, if you have someprivacy engineers, think about
training them for adversarialprivacy.

Debra J Farber (42:30):
That actually brings up a really good point. Do you see privacy red teaming growing more so from upskilling security engineers or security red teamers, or from taking technical privacy folks, privacy engineers, and turning them into red teamers? Or, do you think it's a combination of both?

Rebecca Balebako (42:49):
I think it's gonna be a combination of both
and the teams should have a mixof skills.
Also, I think there are waymore red teaming and pen testing
external consultants than thereare companies that have that
all in-house.
You don't need to grow itwithin your org; you can hire it

(43:09):
externally, and then you can just have it done once a year, twice a year, you know, as opposed to, like, continually staffing a privacy red team, if that makes sense. I do see it potentially growing more for sort of contract, part-time, external consultants - obviously who sign an NDA and

(43:30):
obviously who work only in thescope, before I see a lot of
in-house teams.

Debra J Farber (43:36):
That is interesting.
Thank you.
And then, for EngineeringManagers who are seeking to set
up a privacy red team for thefirst time, what advice do you
have for them to get started?

Rebecca Balebako (43:47):
When I talked about what is the actual work in
a privacy red team, I mentionedthat the first step is
preparing.
Even just to set up a red team,the first step is preparing,
and you really need to haveleadership engaged.
You really need to make surethat if your privacy red team
finds something, that you havetotal leadership buy-in and

(44:10):
they're gonna make sure it getsfixed.
I think that the politics andthe funding of it are so crucial
to a successful privacy redteam.
Then, when it comes to hiring,I would say, look for a mix of
privacy and security folks and Ialso would include usable
privacy or people who have someexperience thinking about the

(44:34):
usability perspective, the userside of things.
If a user gets a notice or ifthey see a certain setting, are
they gonna understand it or useit in a certain way?
And that can be part of privacyred teaming.

Debra J Farber (44:48):
Oh, fascinating.
What resources do you recommendfor those who want to learn
more about privacy red teaming?

Rebecca Balebako (44:54):
Well, thanks for asking.
I put together a list of redteam articles, guides, and links
to courses on adversarialprivacy testing on my website.
There's a special page just forlisteners of this podcast.

Debra J Farber (45:10):
I will definitely put that in the show
notes, but I'll also call it out here. It's www.privacyengineer.ch/shiftleft.

Rebecca Balebako (45:23):
Exactly so - 'ch' is the domain for
Switzerland.
I'm based in Switzerland, so that's why it's privacyengineer.ch, and 'shiftleft' is a reference to the podcast and a way to thank all the listeners for sticking around with us.

Debra J Farber (45:37):
Thank you so much for doing that.
I think it's such a greatresource and I hope people make
use of the resources there.
Any last words before we closefor today?

Rebecca Balebako (45:45):
It's been a real delight.
Thank you so much for having me.
I love your podcast.
There's so many interestingpeople to listen to, so thank
you.

Debra J Farber (45:52):
Well, thank you, and thank you for adding to the
many interesting people tolisten to, because I think
you're one of them.
I'd be delighted to have youback in the future to talk about
the developments in this space,and I'll be watching your work.
You're definitely one of theleaders in my LinkedIn network,
at least in privacy red teamingand privacy adversarial testing,

(46:12):
and so now, I hope peopleengage you from listening to
this conversation, since youhave your own consulting firm.
If you have any questions aboutprivacy red teaming, reach out
to Rebecca.
So, Rebecca, thank you so muchfor joining us today on The
Shifting Privacy Left Podcast totalk about red teaming and
adversarial privacy testing.

(46:32):
Until next Tuesday, everyone, when we'll be back with engaging content and another great guest. Thanks for joining us this week on Shifting Privacy Left. Make sure to visit our website, shiftingprivacyleft.com, where you can subscribe to updates so you'll never miss a show. While you're at it, if you found this episode valuable, go

(46:53):
ahead and share it with a friend; and if you're an engineer who cares passionately about privacy, check out

Privado (46:59):
the developer-friendly privacy platform and sponsor of the show. To learn more, go to privado.ai. Be sure to tune in next Tuesday for a new episode.
Bye for now.