
April 9, 2024 • 40 mins

In this episode, I sat down with Aaron Weller, the Leader of HP's Privacy Engineering Center of Excellence (CoE), which is focused on providing technical solutions for privacy engineering across HP's global operations. Throughout our conversation, we discuss: what motivated HP's leadership to stand up a CoE for Privacy Engineering; Aaron's approach to staffing the CoE; how a CoE can shift privacy left in a large, matrixed organization like HP's; and how to leverage the CoE to proactively manage privacy risk.

Aaron emphasizes the importance of understanding an organization's strategy when creating a CoE and shares his methods for gathering data to inform the center's roadmap and team building. He also highlights the great impact that a Center of Excellence can offer and gives advice for implementing one in your organization. We touch on the main challenges in privacy engineering today and the value of designing user-friendly privacy experiences. In addition, Aaron provides his perspective on selecting the right combination of Privacy Enhancing Technologies (PETs) for anonymity, how to go about implementing PETs, and the role that AI governance plays in his work. 

Topics Covered: 

  • Aaron’s deep privacy and consulting background and how he ended up leading HP's Privacy Engineering Center of Excellence 
  • The definition of a "Center of Excellence" (CoE) and how a Privacy Engineering CoE can drive value for an organization and shift privacy left
  • What motivates a company like HP to launch a CoE for Privacy Engineering and what its reporting line should be
  • Aaron's approach to creating a Privacy Engineering CoE roadmap; his strategy for staffing this CoE; and the skills & abilities that he sought
  • How HP's Privacy Engineering CoE works with the business to advise on, and select, the right PETs for each business use case
  • Why it's essential to know the privacy guarantees that your organization wants to assert before selecting the right PETs to get you there
  • Lessons Learned from setting up a Privacy Engineering CoE and how to get executive sponsorship
  • The amount of time that Privacy teams have had to work on AI issues over the past year, and advice on preventing burnout
  • Aaron's hypothesis about the value of getting an early handle on governance over the adoption of innovative technologies
  • The importance of being open to continuous learning in the field of privacy engineering 




Privado.ai
Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.

TRU Staffing Partners
Top privacy talent - when you need it, where you need it.

Shifting Privacy Left Media
Where privacy engineers gather, share, & learn

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Aaron Weller (00:00):
It's the ability to understand what the company is doing strategically so you can kind of get ahead of it, is really important. And, also being close enough to the engineering teams that you're not seen as being somebody on the outside who's imposing things, but really somebody who's helping to solve the problems that they are facing.

Debra J Farber (00:18):
Hello, I am Debra J Farber. Welcome to The Shifting Privacy Left Podcast, where we talk about embedding privacy by design and default into the engineering function to prevent privacy harms to humans, and to prevent dystopia. Each week, we'll bring you unique discussions with global privacy technologists and innovators working at the

(00:38):
bleeding edge of privacy research and emerging technologies, standards, business models, and ecosystems. Welcome everyone to The Shifting Privacy Left Podcast. I'm your host and resident privacy guru, Debra J Farber.

(01:01):
Today, I'm delighted to welcome my next guest, Aaron Weller. He is the leader of HP's Privacy Engineering Center of Excellence, where he provides technical leadership and solutions for privacy engineering, enablement, and experience for HP's global operations. Aaron has 20 years of global consulting experience in security and privacy, including the founding of his own consulting firm, Ethos Privacy. He's served in executive roles like CISO and CPO, and he also

(01:26):
sits on IAPP's Privacy Engineering Advisory Board. I'm really excited to have Aaron on as a guest today, as I know he has so much wisdom to share with you all. Welcome, Aaron.

Aaron Weller (01:37):
Hey, Debra, glad to be here on this sunny Friday.

Debra J Farber (01:41):
Yeah, it's pretty sunny here, too. We're both in the Pacific Northwest. I'm here in Southern Washington and, yes, there is some sun; so, it feels like spring actually is finally here. The main topic of today is really to talk about the Privacy Engineering Center of Excellence. So, before we dive into that, you've got such deep consulting

(02:05):
experience in both privacy and security. Maybe you could tell us a little bit about your background and how you ended up where you are at HP today running the Center of Excellence.

Aaron Weller (02:16):
Sure. I tend to think of privacy as my third career. I started off with audit and assurance, realized fairly early on that that was not really where my passion was, and transitioned into information security around, I think, 1999, which seems like a long time ago, and I guess it was. But, back when a lot of the information security controls -

(02:37):
a lot of it was manual, scripting a lot of things ourselves - I ran teams of ethical hackers, which was interesting. I did some forensic work as well. I lived in London; I lived in Australia; and then, I moved out to Silicon Valley. When I was working in Silicon Valley in a CISO role (or an interim CISO role), one of the companies that I was working with had experienced a privacy incident, and I realized fairly

(03:01):
quickly that I needed to know more about privacy. We were relying on kind of outside counsel support. I realized I needed to know more about it myself and took my CIPP, I think in 2007 / 2008; and, I've really kind of transitioned into privacy full time since then. I ended up at HP because HP was actually a client of mine when I was running Ethos; and, when I ended up selling the company, I

(03:24):
reached out to a few people, including my friend Zoe - who's the former Chief Privacy Officer at HP - and said, "Hey, I'm looking for something interesting to do next." And that's kind of how I ended up here. The role that they'd been looking for someone to fill, and what I was looking to do next, matched up really well; and, I joined HP in December '22.

Debra J Farber (03:42):
That's awesome. Coincidentally, a lot of those roles are actually in Vancouver, Washington, which is where I live, so I just love it. Every time I pass by the HP Center, I always think of you. What exactly is a Center of Excellence, and how does a Center of Excellence in privacy engineering drive value for an

(04:04):
organization and enable it to shift privacy left?

Aaron Weller (04:08):
So, the Center of Excellence idea is certainly not unique to privacy. It's been around for a few years. One of the earliest examples - I'm not sure if it was the first one - was actually at NASA, and what they were looking to do was to have a team of people who were abstracted from the day-to-day and could really think about building for the future. HP has a number of different

(04:29):
Centers of Excellence in different domains. What my team does is really look at what are the controls, the techniques, the guidance, the training, the things we can put in place that will raise the bar across the whole of the organization. And, we do have Privacy Engineers that work specifically within certain teams, but they are working on those day-to-day

(04:51):
privacy engineering things. I think a lot of the folks who are in a career in privacy engineering, they're working in that stuff. Right? They're solving real problems on a day-to-day basis; whereas, I have (and I do think of it as a luxury), my team is a little bit removed from that. We can then say, "Where do we want privacy engineering to be at HP in a year, in two years, in three years?"

(05:12):
And then, how do we kind of build those capabilities to get us there in a way that enables everybody else to come along with that as well? So, half of the people on my team are PhDs who've got really deep backgrounds in privacy, in technology, in cryptography, interface design - a lot of these things where it's hard for any individual team to have someone with that expertise

(05:35):
unless they are kind of centrally located and can then serve a number of different groups across the organization.

Debra J Farber (05:41):
Well, that makes sense. So, definitely a place where you could have a lot of key experts that can act almost like consultants to other business lines.

Aaron Weller (05:49):
Exactly. That's a good way of looking at it. Yeah, we definitely get brought these interesting problems from across the business and then look for a way to solve not just that problem but really the class of problems that it belongs to.

Debra J Farber (06:01):
That's fascinating. So, what motivates leaders generally to create a Privacy Engineering Center of Excellence? If you feel like you can, it'd be great to just frame it in terms of what motivated HP to create such an organization.

Aaron Weller (06:15):
Yeah, so the discussions around creating... so, I was the first person into the team, so the discussions were happening before I was hired; but, some of the drivers were really going back to what I was saying: how do they get ahead of some of the emerging challenges in the field of privacy engineering and not have people who are just going to get dragged into the operational day-to-day pieces?

(06:36):
So, I think that was part of the motivation. The Privacy team had been around for quite a while, with good capabilities in many areas of privacy, but privacy engineering was not being run at what we call "pan-HP," so kind of that global level. So there was a gap there that was identified, and the team, and the role that I took on, was designed to help address that

(06:59):
gap.

Debra J Farber (07:00):
That makes a lot of sense. So then, what kind of reporting line works well for a Privacy Engineering Center of Excellence? Is it under a Chief Privacy Officer's purview? Is it Head of Engineering? Something else?

Aaron Weller (07:14):
You know, there's good arguments - and I've had this discussion with a few of my peers - there's good arguments, I think, in different directions; and what I look at it as being, it's the ability to understand what the company is doing strategically so you can kind of get ahead of it, is really important. And, also being close enough to the engineering teams that you're not seen as being somebody on the outside who's

(07:36):
imposing things, but really somebody who's helping to solve the problems that they are facing. It could be seen as tension, depending on kind of... there's a lot of research that shows that where a team actually reports into gives a lot of flavor of kind of how that team will operate. But, I do think that there are... I mean, I have close relationships with the Chief Privacy Officer; the VP who runs data governance kind of in

(07:59):
the Chief Data Officer's team, I work very closely with them; the folks who run our AI Pipelines; and then, the Engineering teams and the broader Privacy teams across the rest of the organization. So, I'm very much a matrixed person and on a matrixed team, because I think you need to have all of those relationships.

(08:19):
So, the reporting line, to an extent, it's important, but I think it's more important to have this broad network and just be tied into what's going on, so that when I get these questions, I've already been thinking about a lot of these things rather than trying to have to do things off the cuff.

Debra J Farber (08:34):
That makes a lot of sense. I really like the tie-in with the strategy of the organization, like corporate-level strategy, so that you're aligned. I mean, so much of the time, privacy is looked at as a compliance exercise and the rest of the business isn't even paying attention. It's more of like just go fix the problem, fix the bug - the privacy bug, right - without really understanding how wicked

(08:56):
a problem privacy can be. So, that's pretty cool that, it sounds to me, like a key component of the Center of Excellence is really aligning with the corporate strategy.

[Aaron (09:04): Yeah, yeah, for sure.]

Okay, so you're the first employee on the Privacy Engineering Center of Excellence team. How did you go about creating a roadmap for the Center of Excellence and then your strategy around building out your team?

Aaron Weller (09:18):
So, the first thing I did was I went on a listening tour, and I talked to probably 40 or 50 people across

(09:24):
the organization: executives, people who are working with privacy, people in teams like the marketing team who have a great reliance on being able to use data for those purposes; and, I went through and I asked them a set of questions around "How do you interact with privacy today? What are some of the main challenges you're facing? What do you think that me and my team would be able to do to

(09:47):
support and help you achieve your goals?" That alignment again. And what came out of that exercise was: here is a list of issues; these are the ones that seem to keep coming up over and over again that I think my team could address. And from that I built out a roadmap and also a hiring plan, to say, "Based on what we're seeing, these are the kinds of things

(10:08):
that I think are important." It's interesting, because when I was hired, when the job description was first put out, there was a lot of focus on engineering specifically; and, what I realized through that listening tour was that there was also a need for more education and enablement of these teams. Like, how does Privacy Engineering help?

(10:28):
How can we actually transform some of these data sets in a way that we can get more value out of the data you've already collected, which I think is very different from kind of that compliance lens that you were just talking about. Then, one of the other areas that I saw was really in kind of what I'm looking at as 'Privacy Experience Design.' How do we design great experiences that are good for

(10:49):
users? You know, to be transparent and all those other things, and they're also telling part of our privacy story. So, as I kind of had those conversations, I said, "Look, I think we need to broaden the mandate of what this CoE should be doing, because without some of these other elements, I worry that we'll be building a lot of good backend processes, but

(11:09):
we'll never really have that connection back to 'How do people even know that these things exist, and will it actually move the needle from an external perspective as well?'"

Debra J Farber (11:19):
That makes a lot of sense. So then, what kind of skills and abilities were you looking for when building out your team? I imagine it will be different for every organization, but I would think for a Privacy Engineering Center of Excellence, there might be some guidance based on what you're doing, or how you staffed your team, that others could probably learn a

(11:40):
lot from and deploy something similar.

Aaron Weller (11:42):
Sure, yeah, I mean, my first hire was actually a Privacy Architect. I wanted somebody who could really help to address some of these strategic problems, and then building in kind of engineers under that kind of pillar of the CoE to help them work on actually solving those problems directly. So, I built out that pillar kind of with an Architect and

(12:03):
then hired a Privacy Engineer into that role. And, I think you and I have chatted previously about how I am not a fan of the phrase "Privacy Engineer" when it comes to hiring, because it means so many different things to different people. What I really wanted, and what I ended up hiring, was a PETs engineer - so, somebody who can really go and build privacy enhancing technologies.

(12:25):
It took me a while to find somebody. We made a great hire, and we've really focused on the PETs side of what we need to do. I also hired somebody to help run that enablement side of things, too. So, looking at how do we really get the right resources into the hands of thousands of engineers across the whole company, looking at it both from a resourcing perspective,

(12:48):
communication channels, building and engaging with an audience across the organization. So, almost like - we don't quite run podcasts internally. But, what are those various channels? Where do people congregate to ask questions, and where may they be asking privacy questions that we just don't have visibility into unless we're in those communities working alongside them?

Debra J Farber (13:09):
That makes a lot of sense, and if you do ever decide to run a podcast internally, please do reach out. So, staying on the topic of Privacy Enhancing Technologies, it sounds like you definitely have a person that helps create them and implement them. I was going to ask, does your team determine when and under what conditions different business lines can use a

(13:29):
particular PET, or is it more like it's guidance created for just educating on the benefits of PETs and when you might want to use them; and then, the business line makes the determination?

Aaron Weller (13:43):
We have broken it down really into what I'm calling kind of "type one" and "type two" problems. Type one problems are where we actually want to achieve anonymity against a legal standard. So, particularly if we're looking at kind of a legal standard in Europe around GDPR, or other emerging standards around "this data set is anonymous, therefore it's no

(14:05):
longer personal data," right? So that's kind of a type one problem, where we have to work closely with legal folks and work out what can we do to this data set to achieve a level that we can say confidently this data set is anonymous. The other one is more around - so, a type two problem is more around a spectrum of de-identification. So, we can reduce risk in this data set.

(14:31):
We're not going to go as far as actually being able to say that it's truly anonymous. So, it could be using something like pseudonymization techniques, maybe aggregation; although, if you aggregate enough, we could consider it anonymous as well. So, we're looking at kind of building out effectively a decision tree of what are the kinds of business problems that we're looking at. Like, "We want to use this data and we don't have consent for it." Okay, so in that case we'd need to anonymize it before you

(14:54):
could use it. Or, we're looking at this data where we just don't... we don't think we need to... we need to strip out the personal information because that's not really important to the outcome. So, looking at some of those redaction techniques. Or, we really need to manipulate the personal information, but we don't then want that to appear in the output. So, there are a lot of different things that we're looking at,

(15:14):
which are "What are those business problems? How do we break them down into classes?" And then, "How do we look at all of the PETs that are available?" - and the availability may be that it exists in a research paper, or it may be that there are commercial off-the-shelf solutions for it. Then, how do we work out which are the ones where we can really get the most ROI for the organization?

(15:36):
Either they're broadly applicable, or there's a specific use case where being able to manipulate this data using a PET is going to open up and unlock things that previously weren't able to be done with that data. That's kind of the way we look at it. It's interesting, because it means that we're always looking at it through that lens of, "Okay, what's the latest problem or latest solution or, I guess, new initiative that the

(15:59):
organization is looking to do, and how can we enhance that using a PET?" - as opposed to just looking again from that compliance side, right? Because we couldn't possibly anonymize all the data in the organization. That doesn't make any sense. So, it's really trying to match up those use cases with what the problem is, or what the benefit could be that we would achieve from implementing that PET or a combination of PETs,

(16:21):
because we've found that often the combination is the one that actually gets the job done.
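
To make the "type one / type two" split above concrete, here is a minimal sketch of the kind of decision tree Aaron describes, written in Python. The categories, field names, and PET recommendations are illustrative assumptions for this write-up, not HP's actual tooling or policy.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """Illustrative description of a business request for data (hypothetical fields)."""
    needs_legal_anonymity: bool   # "type one": must meet a legal standard (e.g., GDPR anonymization)
    identifiers_needed: bool      # does the analysis depend on person-level identifiers?
    output_may_contain_pii: bool  # could personal data appear in the output?

def recommend_pets(uc: UseCase) -> list[str]:
    """Map a use case to candidate PETs, echoing the type one / type two split.

    Type one: achieve anonymity against a legal standard (work with legal).
    Type two: reduce risk along a de-identification spectrum.
    """
    if uc.needs_legal_anonymity:
        # Type one: hard anonymity target; often a combination of PETs.
        return ["aggregation", "differential privacy", "legal review of residual risk"]
    # Type two: de-identification spectrum.
    if not uc.identifiers_needed:
        return ["redaction of direct identifiers"]
    if uc.output_may_contain_pii:
        return ["pseudonymization", "output filtering"]
    return ["pseudonymization"]

# Example: data we want to reuse without consent -> treat as type one.
print(recommend_pets(UseCase(True, False, False)))
```

Encoding the tree this way captures the point Aaron makes: solve the class of problems once, rather than re-deciding for every individual request.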

Debra J Farber (16:25):
Yeah, that makes a lot of sense. You know, I like what you said there too, because it follows the mantra I've always been saying around PETs: that you really kind of want to work backwards from whatever the specific outcome is that you want to guarantee. So, what are the privacy guarantees that you want to say - whether it's flowing from the privacy notice, stating that you're guaranteeing a certain level of privacy?

(16:48):
If you say you anonymize things, you want to actually be able to ensure that; or, whatever legal certification that you're aiming for, you then can work backwards on what privacy enhancing technology can get us there - as opposed to going, "Oh, I really like this particular PET, let's deploy it here," and it maybe doesn't make sense for the privacy guarantee that it produces or does not produce.

Aaron Weller (17:10):
Yeah, and I love that you brought up privacy guarantees, because that's something else that we've been noodling on as well. I see a spectrum from kind of what I would see as "hard" guarantees, which are really kind of mathematical proofs. Right? Like, differential privacy can give you a mathematical guarantee to a certain level, depending on the values you choose for epsilon and the rest of it - the implementation details.

(17:31):
But, you've also got kind of what I think of as "softer" guarantees, things like "I will not sell your data," where you've then got to do a lot of work to go from "Okay, what does that really mean? How can I prove it, and what are the steps along the way to be able to say, if we made this statement, that we can actually provide assurance that it's true?"

(17:51):
So yeah, we're working on some of that stuff, where a lot of PETs don't really come with built-in privacy guarantees. So, that's some of the work that my cryptographers are working on: how do we almost advance the state of the art with some of this and say, "What are the privacy guarantees we can provide from something like multi-party computation, and can we do it in a way that would be at least broadly comparable to

(18:15):
other techniques?" Because without that comparability, it's really hard to know what are you really getting from this technique from a privacy risk reduction perspective.
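
For readers unfamiliar with the "hard guarantee" Aaron mentions: differential privacy bounds how much any single person's record can shift a query's output distribution, tuned by the epsilon parameter he refers to. Below is a minimal sketch of the classic Laplace mechanism for a counting query - an illustration of the concept, not a description of HP's implementation.

```python
import random

def dp_count(records: list[bool], epsilon: float) -> float:
    """Differentially private count of True records.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon yields an epsilon-DP release. The difference of two
    i.i.d. Exponential(epsilon) draws is exactly Laplace(0, 1/epsilon).
    """
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return sum(records) + noise

# The same query under a loose and a tight privacy budget:
# smaller epsilon -> more noise -> stronger mathematical guarantee.
data = [True] * 100 + [False] * 50
print(dp_count(data, epsilon=1.0))  # answer near 100, modest noise
print(dp_count(data, epsilon=0.1))  # much noisier, stronger guarantee
```

This is exactly the trade-off Aaron describes: the guarantee is provable, but only as strong as the epsilon and implementation details you commit to.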

Debra J Farber (18:24):
Yeah, that makes a lot of sense. One of the things that privacy folks have not had for so many years is KPIs and metrics that really move the needle. Like... we used to have breaches and the number of them and all of that, but that's kind of moved into the security world; and so, here, by providing what your team is working on - and if we can get more of a collaboration on that and get that more at a

(18:46):
community level, maybe standardize it - I think that would go a long way in an organization. You know, organizations that haven't done this work but want to be able to benefit from all the knowledge out there about how to go about deploying PETs.

Aaron Weller (19:04):
I do see that as part of our mandate, to what you were just saying about being able to share some of this knowledge as well. I'm looking for those opportunities where we can help to share some of the work that we've done and say that this is something that I think others could use as well and could benefit from. Nobody is an island. We certainly look to others and are inspired by what other people are doing.

(19:25):
I heard someone say something the other day that really stuck with me. They said, "The way to innovate fast is to make the borders of your organization porous." You're letting ideas in, but you're also, importantly, letting ideas out as well, so that you can really be seen to be somebody that others want to collaborate with, rather than just taking everybody else's ideas. So, I do think that's a really important part, and that's one

(19:47):
of the reasons - you mentioned earlier that I'm on the IAPP Privacy Engineering Advisory Board for the next couple of years. That's one of the things I really want to do: to help influence how we use some of the resources that the IAPP and the involved organizations have available, to advance that state of the art in a way that everyone can benefit who maybe

(20:07):
doesn't have the resources to set up a team similar to mine.

Debra J Farber (20:11):
Yeah, that would be great, especially for other privacy leaders. IAPP is a great org. I do wonder what level of influence they have in the engineering space, but I applaud the effort, and I think that you've got a great Advisory Board - a great panel of other awesome privacy engineers to work with. So, I look forward to seeing the output of you guys putting

(20:32):
your heads together. What advice would you have for leaders in other organizations about setting up a similar style Center of Excellence for Privacy Engineering? Basically, what are some of your lessons learned?

Aaron Weller (20:47):
Oh, that's a good question. I think that, like I mentioned, HP has a number of different Centers of Excellence. I think that's kind of part of the culture: being able to have people who are focused on driving the state of the art. I joked with my team before that we are not a Center of Mediocrity; we are a Center of Excellence.

(21:08):
So, there are those certain expectations that I have around, "We should be doing stuff that's at least aspirational in some ways, but also that we can get done." So I think making sure that your organization is going to have a culture that's going to accept that Center of Excellence idea would be important. I found that the listening tour that I was mentioning earlier,

(21:29):
that I did when I started, was critical to building - and starting to build - those relationships across the organization, but also to not coming in as the 20-plus year consultant from earlier in my career. I think, probably a few people, when they looked at me, were like, "You're going to come in and not understand what we need and then just tell us what to do." I think helping to really shape that narrative by saying,

(21:51):
"I'm just here to listen and I'm here to really understand what the problems are, and then come back to you with: these are the ones that I think I can reasonably address, and these are the ones that I think would be best for the organization, even if they're not best for you as an individual stakeholder. Hopefully we can align that these are the best problems for my team to be addressing."

(22:11):
So, I think that was helpful. And then, really, it's being very picky with the hiring as well. I mentioned that it's really hard to hire a Privacy Engineer because that definition is so broad, and the resumes were so varied, that I think it was really knowing what I wanted and being able to say, "This is a very

(22:34):
particular set of skills that I'm looking for." And in that case, I think it took me six months to make that hire; but, it was something where I was fighting that tension of "there's work that needs to be done now" with really holding on until I had the right person in the seat. So that's challenging when you're trying to balance those two objectives.

(22:54):
But, I think, yeah, really, if you're looking for a team that's going to be really at the cutting edge of helping to drive the organizational direction in this space, it takes a certain mindset and ability to be comfortable with ambiguity, you know, and some of these other things where we are literally building the plane as we're flying it. Not everybody wants to do that.

(23:15):
So, I think that was a key part of the interview process as well. You've really got to push home the idea that this is going to be exciting, but sometimes it'll be Monday and your entire week gets turned upside down. If that's going to upset you, this is probably not the job for you.

Debra J Farber (23:31):
Yeah, that's fascinating, and that's all within a company that has definitely invested in the concept of a Center of Excellence. It's already in its DNA, in the culture. What about for companies that... or maybe a Business Leader or a Privacy Engineering Leader that wants to ask leadership whether they would invest in a Center of Excellence?

(23:51):
I mean, I know you didn't necessarily have to do that. What advice would you have for them to make the business case?

Aaron Weller (23:58):
I think it's really the couple of things that the CoE can deliver that would be very hard for somebody who's in more of an operational privacy engineering role to do - it's really having the time to invest. AI is a great example this year, where we have built a Privacy / AI process, and triage, and separate assessment processes,

(24:20):
because we have the bandwidth, where I could prioritize people to say, "We didn't plan to get this work done this year, but it's clearly a business priority, and we can go and focus on it because we're that little bit removed from some of the day-to-day operational stuff." I think that's some of the value in having... it's almost like...

(24:40):
I was watching Braveheart the other day, where they keep part of the army in reserve, because when things change on the battlefield, it's great to be able to say, "Okay, now that's changed, we can go and adapt, because we didn't deploy everyone all at once." That's kind of how I feel the CoE can really be useful: it's to

(24:56):
say, "We don't have to spend - either we go hire a consultant and we need to train them on the business, or we're going to spend three to six months going and hiring somebody." We already have people here who know the organization, know some of the problems and the way that it works; and we can apply them to this problem. That's not to imply my team is sitting around waiting for new problems.

(25:16):
Right? We have a huge backlog of stuff to do; but, it does mean that we can prioritize when things come up.

Debra J Farber (25:22):
It makes a lot of sense. I really like that analogy. All right, let's talk a little bit about AI governance. I know you brought it up. I know you said your team manages... or not manages, but is in charge of at least looking at some AI pipelines from a risk perspective and maybe recommending some controls there. There's been a lot of hype around AI, right? Especially in the last year, with LLMs coming to market all

(25:45):
over the place, and many privacy teams have been asked to advise on AI governance risks as well, which was not necessarily part of those teams' privacy mandate, and thus might be eating into the amount of time spent on privacy engineering and technology. I was going to ask what's the percentage of time, but really

(26:05):
just an estimate: how much time might your team have spent on activities outside of privacy engineering, like AI governance? And then, how can leaders help prevent burnout on their teams from adding these additional projects and requirements? I mean, I guess that's basically my question.

Aaron Weller (26:23):
Yeah, I mean, I guess when you say that we've spent time on activities outside of privacy engineering - what I've really looked to do, and I think we've been fairly successful with this, is ask what are the things that we can do that apply to AI that also, then, we can turn around and apply to other use cases as well. A good example of that is some of the work we're doing at the

(26:44):
moment around synthetic data generation and use, which is great for testing AI models, but also we can use it for other parts of the organization as well. So, it's trying to find the places where we're not doing something that's just AI-specific, unless we have to; but really looking at how do we integrate that with the rest of the existing roadmap.

(27:04):
So, where would we apply PETs to AI? Where would we apply some of these other techniques to AI as well? But, yeah, I mean, we have had to spend some time building, you know, AI-specific triage processes for privacy and some of those things that we built from scratch. Now, we have them up and running and they're becoming more efficient. You know, we can really focus back onto some of the other

(27:26):
challenges, too.
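
As a concrete illustration of the synthetic data idea Aaron raises, here is a deliberately naive sketch that fits per-column distributions from real rows and samples new ones. The column names and data are invented for illustration; production synthetic-data pipelines model joint structure and measure re-identification risk, which this toy version does not.

```python
import random
import statistics

def fit_and_sample(rows: list[dict], n: int) -> list[dict]:
    """Generate n synthetic rows by sampling each column independently.

    Numeric columns are modeled as a normal distribution; other columns
    are resampled by observed frequency. Independent sampling breaks
    cross-column correlations, which is both a weakness (less useful
    data) and part of the privacy point (rows match no real person).
    """
    synthetic = []
    columns = rows[0].keys()
    for _ in range(n):
        row = {}
        for col in columns:
            values = [r[col] for r in rows]
            if isinstance(values[0], (int, float)):
                row[col] = random.gauss(statistics.mean(values),
                                        statistics.stdev(values))
            else:
                row[col] = random.choice(values)
        synthetic.append(row)
    return synthetic

# Hypothetical example: fake "device telemetry" records for testing.
real = [{"region": "US", "sessions": 12}, {"region": "DE", "sessions": 7},
        {"region": "US", "sessions": 9}]
print(fit_and_sample(real, 2))
```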

Debra J Farber (27:30):
While some of the great aspects of AI will continue to move forward, hopefully some of the hype that was a little unwarranted will continue to die down.

Aaron Weller (27:40):
I mean, I've seen a lot of really intriguing AI use cases, and some things where you look at it and you say, "That is demonstrably better than the way we were doing things previously." I've also seen some where I'm like, "You know, we could have done the same thing a different way." So I think there is that kind of, at the moment, everybody's all-in on AI; and, I remember a few years ago what I

(28:04):
called "peak blockchain," where I was literally sitting on a beach in Sayulita, Mexico, and a woman was talking to a venture capitalist at the next table, trying to sell her blockchain idea to them. And, I'm like, "This is peak blockchain," right? When it's just... it's so pervasive that you just cannot get away from it. And I feel that a little bit about AI today.

Debra J Farber (28:26):
Yeah, that makes sense. I feel like it's definitely the new blockchain, or metaverse even. I feel like nobody's talking about that anymore, compared to AI. I also recently saw a LinkedIn post of yours where you were discussing the value of getting an early handle on governance over the adoption of innovative technologies. And you say, "Think AI today, but before that, blockchain,

(28:50):
cloud, mobile, et cetera," and you made a hypothesis about effective governance. Would you share that hypothesis with the audience?

Aaron Weller (28:59):
This actually came out of, if you're familiar with it, the Gartner Hype Cycle, where you've kind of got the "peak of inflated expectations" (as I think you're referring to), and then the "trough of disillusionment," and then it kind of levels out to "it's just a technology we use," right? And you can think of the cloud now - it's just a technology we use. Right? There are additional risks and controls, but we've got

(29:21):
processes and standards and all the rest of it. It's not something - and I do remember several years ago people would be like, "Are we ever going to move to the cloud, or is it just too risky?" And now you don't hear that question being asked. So, my hypothesis, kind of similar to the Hype Cycle, was - and I've seen

(29:41):
this in many organizations over the years - when there's a new technology, there is this FOMO, right, the fear of missing out. So, people really try and lean in hard to these new technologies. And, that's not to say that there is no value in blockchain. There are definitely things where - particularly around supply chain management and being able to validate that things are the way that you believe them to be - there are some great use

(30:02):
cases. But, I think it overextended itself into use cases that really didn't line up with the core value of the technology. So I think that there is kind of this tendency to over-index, and many governance teams, in my experience over the years, are maybe slow to react. Right? There's a lot of stuff going on, and AI in particular is moving really fast. How do you react quickly enough, and then avoid kind of that

(30:27):
over-correction where you're like, "This is just out of control"? And we saw this again - I'm using a cloud analogy for a couple of reasons. One, to show that this is not a new thing, but also because we've kind of got through the cycle. When we were looking at firewalls back in the day, every different service would run over a different port. We would set up the firewall rules based on where we knew the

(30:47):
traffic was going. With the adoption of cloud, everything ran over port 80 or port 443. You've suddenly lost that control you had before, and we had a whole new range of solutions around understanding which cloud providers were even existing in your environment, or SaaS solutions, because they all ran over the same port. So, we had to kind of have a different technique, and I think then there was the tendency to say, "Well, no new cloud," right

(31:10):
- "we've got to impose these processes, we've got to make sure it's all..." - which was probably an overcorrection. My hypothesis is, I think that with effective governance, if you can get in there quickly, you can reduce the level of risk that's accepted before companies really understand the new technology. So, you're accepting risk because you want to get it done, but at the same time you may not really understand or be able

(31:34):
to quantify what that risk is. I think then, when companies realize that, sometimes they're like, "Ah!" and panic, and they overcorrect: the compliance organization's like, "We've had way too much risk. We need to then restrict this list, get it back down to an equilibrium," and they overcorrect; and then eventually you'll settle on "this wasn't as bad as we thought it was."

(31:54):
We understand the risks better now. We have more standards and frameworks and things we can rely on, so you reach this equilibrium. I believe that if you have a good governance team - I'm particularly going back to my Braveheart analogy - if you've got people that can jump on these kinds of things quickly and not have to wait for a break in their day job to do it, I think you can both reduce that level of risk that's accepted

(32:15):
and then reduce this overcorrection that I think organizations tend to do when they realize that they were slow to govern in the first place. Therefore, they've got to kind of go back and work out, you know, how do we get back to whatever the long-term usage of AI is going to be, or cloud, or any of these other things?

Debra J Farber (32:32):
I like your hypothesis, and I wish that the Gartners of the world focused more on privacy as a separate industry and not just a subset of security, which it never is. Privacy is not a subset of security, but that's kind of how it's treated at Gartner and Forrester and all the analyst firms. Maybe they're listening, and they can hear about your hypothesis like this, and then maybe that inspires them to

(32:53):
expand into privacy more. We're getting close to the end of the conversation, and I'd like to ask you: what advice do you have for those listeners interested in getting into privacy engineering?

Aaron Weller (33:06):
I think there's a lot of... and I've said this throughout my career - so, two things. One is that the job that you do in 10 years' time may not even exist today. Privacy engineering as a discipline, a separate domain, didn't really exist outside of a few companies 10 years ago. I think, with all of these different areas, people are like, "How do I get into this new area?" Well, the easiest way to get in is to have an adjacent skill

(33:30):
set. I would much rather hire an engineer that is a good engineer and I can teach them privacy - somebody who's got kind of the right attitude and some of the good background - than try and find kind of a unicorn who has everything. So I would say, if you're looking at getting into privacy engineering and you have a good engineering background, you can get enough privacy, I think, to be able to be effective in those

(33:54):
roles. But often that may be a horizontal transfer within an existing organization. One of the best security guys that I ever had work for me was given to me from the IT Help Desk. He knew everybody in the organization; knew how the processes worked; had an interest in security; and was receptive to being trained up in

(34:16):
some of the details. So, I think that getting into privacy engineering, it's not like you have to go to Carnegie Mellon or one of the other places where you can actually do a full-on course. I think it's really understanding: why do you want to get into privacy engineering? What is the piece about that that really intrigues or interests you? How will that be the next step in your career, and what are the

(34:37):
skills that you already have to be able to then make that move? You look at a bunch of the stuff with AI right now, where so many people are claiming to be an expert, and you go and look and say, "Well, how can they claim to be an expert?" A lot of the people that I know who've been very successful, they're lifelong learners; and, if you get an opportunity to talk to someone, they can talk intelligently

(34:59):
about different kinds of privacy enhancing technology or different kinds of things that would be relevant to the role. So, I think, absolutely, there's a lot of stuff that people can do to know enough to have that foundation. But yeah, I've always been successful in recruiting from adjacent domains, where maybe there isn't the hype around it, and there are people who've got that good baseline knowledge

(35:22):
that can be really effective in a privacy engineering role, as long as they're open to that continued learning.

Debra J Farber (35:28):
I think that's great advice. Thanks for that. I mean, you just dropped so much wisdom. Do you have any other words of wisdom that you'd like to leave the audience with today, or any upcoming conversations that you want to plug, or any frameworks and working groups that you think people should know about? Let us know.

Aaron Weller (35:47):
Yeah, I mean, I think there is an overabundance, probably, of information out there about the world of privacy engineering, and I am still discovering new things all the time. The latest one that I think I found a couple of weeks ago was some work that OWASP, who are famous for their Top 10 security vulnerabilities, has done, and they now have a Top 10 AI

(36:07):
vulnerabilities and a Top 10 machine learning vulnerabilities. They've broken down controls frameworks around the various stages of an AI ingestion pipeline. So, I read all of these things, and I think what I've been successful at doing is then synthesizing a lot of these things together to say, "I can take a little bit of X, a little bit of Y, and work out something that's going to be effective

(36:29):
within the organization that I work in." So I think my words of wisdom are: there is more information than you could possibly consume out on the internet. There are so many working groups that are doing good work around this space; and I think, yeah, we've mentioned LinkedIn earlier, but so many people share papers and things they're working on and all of that stuff on LinkedIn.

(36:50):
I had someone reach out to me this morning and say, "Hey, I just published this - what do you think about it?" Use those resources because, to my point about the way you innovate faster being that porosity of the border, I've always believed that anything that I do, somebody else could help me improve. I think, if you have that approach to it and you say, "I want to read ISO and I want to read NIST and I want to read all

(37:13):
of this other stuff" - I may not necessarily agree with all of one perspective, but by reading all of it, I then can produce something or synthesize something that's going to be exactly in line with kind of what I want, without having to go and build it from scratch. So I think my words of wisdom are: if you're building something from scratch, you are either right on the bleeding edge, or

(37:35):
you are not looking hard enough for something that you can leverage.

Debra J Farber (37:38):
Those are excellent words of wisdom. Thank you for that, Aaron. Thank you so much for joining us today on The Shifting Privacy Left podcast. Until next Tuesday, everyone, when we'll be back with engaging content and another great guest or guests. Thanks for joining us this week on Shifting Privacy Left. Make sure to visit our website,

(37:59):
shiftingprivacyleft.com, where you can subscribe to updates so you'll never miss a show. While you're at it, if you've found this episode valuable, go ahead and share it with a friend. And, if you're an engineer who cares passionately about privacy, check out Privado

(38:13):
- the developer-friendly privacy platform and sponsor of this show. To learn more, go to privado.ai. Be sure to tune in next Tuesday for a new episode. Bye for now.