
July 23, 2024 38 mins

In this episode, I'm joined by Amalia Barthel, founder of Designing Privacy, a consultancy that helps businesses integrate privacy into business operations; and Eric Lybeck, a seasoned independent privacy engineering consultant with over two decades of experience in cybersecurity and privacy. Eric recently served as Director of Privacy Engineering at Privacy Code. Today, we discuss: the importance of more training for privacy engineers on AI system enablement; why it's not enough for privacy professionals to solely focus on AI governance; and how their new hands-on course, "Privacy Engineering in AI Systems Certificate program," can fill this need.

Throughout our conversation, we explore the differences between AI system enablement and AI governance and why Amalia and Eric were inspired to develop this certification program. They share examples of what is covered in the course and outline the key takeaways and practical toolkits that enrollees will get - including case studies, frameworks, and weekly live sessions throughout the program.

Topics Covered

  • How AI system enablement differs from AI governance and why we should focus on AI as part of privacy engineering 
  • Why Eric and Amalia designed an AI systems certificate course that bridges the gaps between privacy engineers and privacy attorneys
  • The unique ideas and practices presented in this course and what attendees will take away 
  • Frameworks, cases, and mental models that Eric and Amalia will cover in their course
  • How Eric & Amalia structured the Privacy Engineering in AI Systems Certificate program's coursework 
  • The importance of upskilling for privacy engineers and attorneys


Resources Mentioned:

  • Designing Privacy's "Privacy Engineering in AI Systems Certificate" program: www.designingprivacy.ca (see the Course tab)
  • The EU Academy: academy.europa.eu
  • Michelle Dennedy's book, "The Privacy Engineer's Manifesto"

Guest Info

  • Amalia Barthel, Founder, Designing Privacy
  • Eric Lybeck, Independent Privacy Engineering Consultant



TRU Staffing Partners
Top privacy talent - when you need it, where you need it.

Shifting Privacy Left Media
Where privacy engineers gather, share, & learn

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Copyright © 2022 - 2024 Principled LLC. All rights reserved.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Amalia Barthel (00:00):
We know intuitively that actually, privacy engineering really is about being in-the-know about how much good or harm you could be doing with the data and the processing of the data. And I think when organizations embark on these new projects, like AI, they actually have no idea whether their outcome is

(00:20):
going to be good or bad for society, for other people, and not just for their company. And I think this is going to be a huge eye-opener for them. They're going to go in with their eyes wide open, not shut.

Debra J Farber (00:33):
Hello, I am Debra J Farber. Welcome to The Shifting Privacy Left Podcast, where we talk about embedding privacy by design and default into the engineering function to prevent privacy harms to humans and to prevent dystopia. Each week, we'll bring you unique discussions with global privacy technologists and innovators working at the

(00:53):
bleeding edge of privacy research and emerging technologies, standards, business models, and ecosystems. Welcome everyone to The Shifting Privacy Left Podcast. I'm your host and resident privacy guru, Debra J Farber. Today, I'm delighted to welcome my next two guests: Amalia Barthel from Designing Privacy and Eric Lybeck, Independent

(01:16):
Consultant and Privacy Engineer. With over 15 years of experience in building privacy management and compliance programs with Chief Privacy Officers, Compliance Officers, General Counsel, and CISOs across many industry verticals, Amalia founded Designing Privacy, a privacy consultancy that helps

(01:36):
build privacy into clients' business operations through privacy engineering, risk management, and easy-to-implement tools. She's also a Lecturer and Academic Programs Advisor for the University of Toronto SCS, so that's the School of Continuing Studies. Eric has two decades of combined cybersecurity and

(01:58):
privacy experience, developing solutions that help organizations implement responsible AI, protect their data, and comply with regulatory requirements. Eric was most recently the Director of Privacy Engineering at Privacy Code. He's now an independent privacy engineering consultant currently on assignment with a major automobile manufacturer

(02:21):
and is also working with Michelle Dennedy to update and co-author her seminal book, The Privacy Engineer's Manifesto. Today, we're going to be discussing the need for more training on AI system enablement and why it's not enough for privacy professionals to just focus on AI governance. We'll learn more about Amalia and Eric's new hands-on course

(02:42):
and certificate, which they call Privacy Engineering in AI Systems Certificate (PEAS).
Welcome Amalia and Eric!

Eric Lybeck (02:49):
Thank you.

Amalia Barthel (02:51):
Thank you.
That was such a great intro.

Debra J Farber (02:53):
Yeah, well, I'm just reading your bios. You have such great backgrounds and are doing some really exciting stuff, so thank you for being here today.

Eric Lybeck (03:00):
Yeah, you're most welcome. We're delighted to have the opportunity to talk about what we're doing.

Debra J Farber (03:06):
Awesome. Well, Eric, why don't we start with you? You've worked as a privacy engineer for the last 10 years, so why privacy engineering and AI? Why is that a topic we should focus on? I feel like this is an obvious question, but how is it different?

Eric Lybeck (03:19):
Right. Well, I mean, I think most of us have already been doing this. We've already been working with systems. We've already been doing engineering in these systems with privacy, and now we're just talking about AI, because we have so many more capabilities now. So, the automated decision-making that 10 years ago we didn't really touch that much - now we have these great new AI technologies that are allowing us to do so much more

(03:43):
decision-making. So, this is really about AI system enablement - making sure that we're engineering the right features, the right systems; we're considering the different privacy threats when we're working on those systems; and making sure that we're all upskilling so we understand how AI may be impacting these systems differently.

Debra J Farber (04:04):
How is AI system enablement different from AI
governance?

Eric Lybeck (04:08):
I think we know how to do AI governance. It's similar to how we've done privacy governance: set the policy, stand up an organization to do governance, handle the structure, handle the people. But, instead, we need to be talking about what to do with AI in specific systems. So, if we're using AI, we need to understand what some of the threats are to AI-enabled systems.

(04:28):
We need to understand how to use personal information correctly, what the risks are of using personal information that might be processed through machine learning or through a large language model. Whereas AI governance is more that high-level strategy, what we're doing in this course is getting into practical examples of how we can be working and engineering better AI-enabled

(04:51):
systems.

Debra J Farber (04:52):
That's awesome. Thank you for that. Amalia, what prompted you both to design this new course, this Privacy Engineering in AI Systems course?

Amalia Barthel (05:00):
So, that's a really great question and, just for our listeners, I want to tell them the story about how Eric and I decided to go together on this. We've known each other for a long time. We actually met in our previous lives, as we all have previous lives. At PwC we worked - me in Canada and Eric in the U.S. -

(05:21):
in the privacy departments. We started talking about privacy, how we can collaborate. So, that's when I first got to know Eric, and we touched base throughout, like over a decade, and then I found out that he is one of the right hands of Michelle (Dennedy).

(05:42):
Michelle has many right hands at Privacy Code, and I was just fascinated. I was trying to implement a privacy engineering discipline for a couple of my private clients, and when I saw the Privacy Code software, I thought, "Oh my God, this is exactly what I need to do." So I got talking to Eric and it just was a meeting of the

(06:03):
minds. But I have also authored, as you caught in my intro, in my bio - I've actually designed, and am delivering, a certificate program for privacy at the University of Toronto School of Continuing Studies. That was done in 2016 and it was done on the same premise. I found a gap in the market. At the time, the gap was that there were a lot of privacy

(06:29):
professionals that understood privacy at a theoretical level. But a lot of them came to me and they would say, "Can you mentor me? I know about privacy, but I don't know how to do privacy." So I found that there was a gap then, in 2016, in the market with the operationalizing of privacy and, more so, bridging that gap

(06:50):
between the business people, the legal people, and then the IT people, because you have to tell them how to create those features, the functionality, in such a way that is privacy-protective and respectful. I saw the exact same problem now and I talked to Eric. And Eric actually put up a post, kind of understanding, engaging

(07:10):
the market: how are people going to receive this if we were going to do something, of course, about AI, that goes deeper - deeper than governing risk, governance strategy of AI in general? But how do you do AI? You know? How do you implement it in operations? We've had some great feedback from our network and we thank

(07:31):
them. And somebody said, "You know, I think you should orient this course towards lawyers and engineers." So that was the first thing - our aha moment - was "we need these people in the same room." Now, Michelle has been saying that in her book for 25 years, Eric, or so?

Eric Lybeck (07:49):
It's the 10th anniversary this year, actually.

Amalia Barthel (07:52):
Oh, okay, all right, so sorry.

Debra J Farber (07:55):
She probably has been saying that for 25 years.

Eric Lybeck (07:57):
Exactly, exactly.
I'm sure she has.

Amalia Barthel (08:19):
So, that is why Privacy Engineering in AI Systems: because we just felt that we have this gap and we need to bring together the two worlds of the people involved with either privacy, or privacy and technology, so that they can talk and understand each other.

Debra J Farber (08:19):
I was going to ask, ideally, who should take this course, but I think you already kind of answered. It's meant for both legal professionals and technical professionals. I do know that those looking for technical coursework often are looking for hands-on labs, or lawyers or consultants are looking for maybe tools like frameworks they can use and unpacking those. How do you guys bring together the concepts that are at the

(08:42):
right level for both technical folks and maybe potentially legal? I don't want to just say technical. I feel like I'm a technologist, but I'm not necessarily going to, like, configure some servers or write code. So, when I mean tech, I mean applied technologists versus maybe someone who's interested in technology and can talk about it but isn't necessarily going to go into a lab and start coding something.

Eric Lybeck (09:13):
We're working with, planning for, any sort of skill level. Certainly, if you're a legal professional, you have some experience understanding case studies, understanding use cases from your business. That's how we're going to be teaching the course. So we'll have case studies that have specific use cases of some AI-enabled system, and there may be some technical aspects to it. So if we have a very technical component diagram or something

(09:33):
like that, we'll thoroughly explain those diagrams. So we'll do this in a way that any professional will understand what we're teaching. We've done benchmarking. We've seen courses out there that are maybe more specific on AI risk management or are very technical about artificial intelligence technology itself.

(09:53):
Those courses would require college calculus or linear algebra. That's not us. I mean, we're going to be focused on real practical case studies, real practical examples, as well as working with students to develop a capstone project that is really real-world for them, so they can really apply what they learned through the course to their jobs.

Debra J Farber (10:14):
I think that's pretty exciting, because one of the things I guess I didn't draw out of you earlier on is that this course isn't pre-recorded where you just pay a price and then do it at your own pace. This is actually like a weekly course. We'll go into the different modules and what you'll be covering and the approach a little later on, but this way people will be able to bring their own experiences and talk

(10:35):
amongst themselves and share what they've seen and ask you questions, and so it's a live course.

Eric Lybeck (10:41):
Absolutely. We'll have live sessions because we know we're going to learn from our students as much, maybe, as our students learn from us, and so those conversations and those classroom discussions will just be very essential for the learning in this course, because we'll all understand these case studies, these use cases, much better through those classroom conversations.

Amalia Barthel (11:02):
Yeah, and one of the unique, maybe, ideas that we're bringing into this course is that, even though we have more technologists on one side and legal people on the other side, we're actually going to put them together in a virtual room and we're going to ask them to explain things to each other, and I think that is going to benefit them tremendously - both of them - because what we're finding is that we're reading

(11:25):
from the same page, but we understand completely different things.

Debra J Farber (11:30):
Absolutely, in fact. To go back to the little parable about Michelle Dennedy - she's been talking about this probably for 25 years. I just want to draw out that the actual challenge that she's always talking about, especially in that book, is that lawyers like to architect their language a little more generally to capture as many risks as possible, right? So in a privacy policy, you might see something like "we use reasonable security mechanisms or approaches," and engineers

(11:54):
need something that is tangible, that they can code to and determine whether or not it's built correctly, right? And you can't code to reasonableness, right? And so I think having these discussions, as you're talking about, will really get folks to flex their muscle and exercise how they discuss these topics, so that the other side - not other

(12:15):
side, but the other specialty - can understand, and then they could realize, "Oh, I need to be more specific," or "Maybe I need to be more high-level and systemic about how I'm framing something," right? So I think this is really exciting. Amalia, what do you hope attendees take away from this course? What will it enable them to do as they each go back to their

(12:37):
respective organizations?

Amalia Barthel (12:39):
I know I'm jumping the gun a little bit by saying that, but we're going to talk about the frameworks. But in our prep work for each class, we are feeding the fundamentals of trustworthy AI and privacy engineering one step at a time. So it's like a ladder: we build knowledge with every

(13:01):
single module, and when they actually start working on the use cases, they get to dip into various frameworks. I'm not going to name them because we talk about them a bit later, but what they're going to bring with them is an entire toolkit. They're going to know about these many resources that they can always mix and match.

(13:23):
They will understand, again - it will be a systematized approach - as to how do I approach one request that comes from group X: the business has this need, they want to use artificial intelligence, and they will not take no for an answer. A, how am I going to be an enabler? And B, how am I going to protect the organization from itself?

(13:44):
And that's what they're going to take away: a toolkit that enables them to do that.

Debra J Farber (13:51):
Thank you so much. That's awesome, Eric. What about you? What do you think attendees might walk away with and bring back to their organizations?

Eric Lybeck (13:59):
We did a series of webinars and we talked about one of the frameworks as being a good tool to be used for developing policy. So we'll be talking about developing policy, and we'll be talking about developing the program that goes in place around governing AI systems and performing these tasks, and we'll bring in other examples of a lighter touch, you know, ways

(14:20):
of doing assessments, and we'll bring in concepts of privacy by design, but AI systems by design - so working through the systems, through a systems lifecycle. And so that's some of the things we'll be doing through these case studies. And one of the examples we've worked on - we've worked on a number of different case studies as we've prepared for this - is we had one example

(14:40):
where it was a product, and the product enables police departments to save time by automating the report writing. And so it could actually connect to the video camera footage and go through all of that and automate the report writing, and so we can bring a case study like that and talk about it. What are some of the privacy risks? What are some of the ethical considerations?

(15:01):
What are these potential risks or threats by using AI in this particular use case? And talk through that. And I think by talking through those different types of use cases that are not just about one specific industry - they can be public safety, or they could be automobile, they could be different types of industries - it'll really provide a nice rich foundation for you to work in

(15:25):
this area and to have just better results when you're sitting at that table as the privacy professional, so you'll be able to contribute so much more to the projects that you get involved with.

Debra J Farber (15:38):
That would be really helpful, and I also want to point out, it sounds like CIPP/US, the M, the T.

Eric Lybeck (16:04):
You know, it provides a certain amount of core background knowledge. It's the theory knowledge of privacy, but you still need that real-world example, that real-world experience, to really be an effective privacy professional. And it's the same thing with AI. And so that's what we're working on in our course: helping

(16:34):
students develop that real-world skill by going through real-world examples, real-world case studies, helping students go through their own project through the course so they apply what they learn to perhaps a real concern that they have in their organization. And so it really helps to apply the information in a much more specific way than just having the knowledge of what machine learning is or what privacy is. It's really going into more depth with it.

Debra J Farber (16:53):
That makes a lot of sense. In some ways, to me it feels more like a bootcamp. It's getting you ready to actually, practically work on AI projects within your organization, so that's pretty cool, right?

Eric Lybeck (17:03):
I like the word bootcamp, but I don't think Amalia and I are very much drill instructors. We're much more about having those conversations and bringing these two different groups together - the legal professionals and the technical professionals - and I think the groups will have a lot of fun interacting with each other and coming to the class with the different perspectives that they have.

Debra J Farber (17:24):
So we spoke about it a little earlier and we talked about AI risk frameworks, but I'm curious what approaches and mental models - basically what risk frameworks - are out there that you end up covering, and not only educating on, but then taking those frameworks and walking through the use cases of how you would evaluate and map to those frameworks. What are some of them?

Amalia Barthel (17:45):
In our free webinars that we offered to anyone who was interested, we talked about three of them. Of course, the darling of AI governance, NIST AI. But NIST AI is a risk management framework, so a lot of people are maybe a little bit confused. It's not just about AI governance, because AI is a

(18:05):
technology. You have to remember the days of bring-your-own-device. We had to govern how we introduced that technology; the cloud - we had to have a position: how is our organization going to work with this particular technology offering? But NIST also goes into risk governance and risk management. So I'm going to go through the other frameworks that we talked

(18:26):
about. So we talked about the NIST AI framework; the US Government Accountability Office AI framework, also known as GAO, G-A-O; and the Generative AI Risk Assessment, created by Vischer, a fantastic lawyer from Switzerland who has created an Excel spreadsheet tool that is fantastic. At the use case level,

(18:48):
it talks about applying generative AI, but it could be used as a privacy impact assessment for AI. It's very broadly augmented to add other considerations such as intellectual property, copyright law, fraud - other laws that may intersect with an AI-type use.

(19:11):
And these were the three frameworks that we discussed in the webinar, as we had not a lot of time, but there are additional ones that are coming, almost being issued every day. Some of the ones we noted were Germany's joint guidance on AI and data protection. That was a very, very good framework. Of course, the Colorado AI Act - that is also incredibly

(19:35):
informative because it's very risk-based and very interestingly formulated. There is the UK Privacy Commissioner's (ICO) AI Risk Toolkit, which I have personally used and I think it's a fantastic tool. There's the CNIL guidance, the French regulator's guidance for AI. The World Economic Forum's Adopting AI Responsibly.

(19:56):
The Future of Privacy Forum has issued an AI policy checklist, which we also found was very, very informative. So there's a number, and recently the EU just issued a framework called the Human Rights, Democracy and the Rule of Law Assurance Framework for AI Systems.

(20:19):
It's 335 pages. The acronym is impossible to remember, but it's the Human Rights, Democracy and Rule of Law Assurance Framework, and we're going to talk about that too. What we are asking our students to do is to learn how to navigate these frameworks. Nobody's going to remember all these frameworks, but they're

(20:42):
going to find areas where they feel that it fits better into their use case or their organization, and they have the ability to reach out into all of these different resources and use them to their advantage.

Debra J Farber (20:59):
That is pretty great. There's so many fire hoses of information out there and there's just so much, you know. It's kind of like wading through a haystack trying to find needles that make sense, and so it's great to have you lead people through what is relevant right now, what's coming down the pike, what's good for certain use cases, what might be better for others.

(21:19):
I think that's great that you'll be able to walk people through that. Eric, what about you? Are there any others?

Eric Lybeck (21:25):
Yeah, you know, one of the things that I worked on when I was at Privacy Code was a privacy engineering process, and it's also being included in the revision of The Privacy Engineer's Manifesto, and we took a look at different sources. So, like the Privacy by Design ISO standard, the 31700 standard - for example, there were aspects of that standard that were

(21:46):
different from some other privacy by design work that we'd looked at. We looked at what the Institute of Operational Privacy by Design had done, and we combined these together into this privacy engineering process, and so we'll be using something similar to that as well. So there's all these different AI frameworks, right, and you can't - well, you could - you could apply all of them, but

(22:09):
you'd never get anything done. You have to come up with some sort of process that can allow you to triage, can allow you to understand where you need to spend your time. And maybe your organization does use the NIST AI Risk Management Framework - the entirety of the framework - on some system or some major business transformation, but it's very comprehensive. Just that one framework is very comprehensive, and you couldn't

(22:32):
apply it to every single AI-enabled system. So what we'll be working on is how do you come up with that toolkit that you can apply at the beginning, in design, working with that product manager, to understand what the potential privacy risks and other risks would be from this AI-enabled system, and how do you scale that? How do you bring in more attention to some of the threats

(22:53):
and some of the risks, how do you scale that up, and what are some of the things that drive that? So we'll also be looking at: are there some systems, or is there some way of automating that? Can you take these frameworks and put them into a large language model, into the prompt, to help you come up with some

(23:15):
of those implementation requirements? So we'll be exploring some of that during the course as well.
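
For listeners curious what that kind of automation could look like, here is a minimal sketch of the idea Eric describes: pasting an excerpt of a framework into an LLM prompt and asking it to draft implementation requirements for a specific use case. It is illustrative only - the course doesn't prescribe a particular tool, and the framework excerpt, use-case description, model name, and prompt wording below are placeholder assumptions, not anything taken from the course materials.

```python
# Sketch: drafting privacy/AI implementation requirements from a framework excerpt.
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY in the
# environment; any chat-completion-capable model or provider could be swapped in.
from openai import OpenAI

client = OpenAI()

# Hypothetical excerpt of framework questions (stand-in for NIST AI RMF, GAO, etc.).
framework_excerpt = """
1. Is personal information used to train, fine-tune, or prompt the model?
2. Can individuals contest or seek review of automated decisions?
3. How is model output monitored for bias, drift, and unintended disclosure?
"""

# Hypothetical use case, in the spirit of the police report-writing example.
use_case = (
    "A product that drafts police reports automatically from body-camera "
    "video and audio, for officer review before filing."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a privacy engineer. Given framework questions and a "
                    "use case, draft specific, testable implementation requirements."},
        {"role": "user",
         "content": f"Framework questions:\n{framework_excerpt}\n\nUse case:\n{use_case}"},
    ],
)

# The draft still needs human review; this only produces a starting point.
print(response.choices[0].message.content)
```

Any output from a sketch like this is a first pass for the triage process Eric describes, not a substitute for applying the frameworks themselves.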

Debra J Farber (23:20):
Oh, that's pretty exciting, because then you're actually using the technology to make it easier to use the technology.

Eric Lybeck (23:26):
Exactly, exactly.

Debra J Farber (23:28):
Why don't you tell us a little bit about how
you structured the course?
How long is it?
What's the format?
What topics do you cover?

Amalia Barthel (23:35):
Yeah, we have thought, originally, also based on my experience with the University of Toronto - there are certain requirements for a certification: a certain number of hours of study, of assignment work, projects and, of course, an exam that proves that the student has ingested

(23:58):
and understands the knowledge they've gone through at a significant level. So we have envisioned this course to be 12 weeks, which includes assignments, of course, class discussions every week, and also includes the capstone project, which is at the end, and then we give students maybe a couple of weeks to do recap and

(24:20):
take the certification exam. In terms of how many hours is the classroom: what we've envisioned is that they would have some material that we provide up front for them to study as they come into the class, and then, with the knowledge that they've obtained by reviewing what we suggested in the materials, we're going to go into a case study, and the discussion is going to be once a

(24:44):
week. We bring the class in for an hour and a half, around there, and we just try to get to the bottom of this use case and progress it as we accumulate more knowledge. The topics are, of course, trustworthy AI, but we're also blending the trustworthy AI, so we talk about bias and

(25:04):
fairness and ethics - the fundamentals of AI. We bring in the concept of harms. We bring in the newest white papers written by the phenomenal Daniel Solove and Danielle Citron and Margot Kaminski that are just bringing these questions around: how are privacy laws equipped to support the introduction of

(25:28):
AI? Where are the gaps? And bring to the attention of the students that that's the area where they're going to be - they're always going to be in the in-between and trying to solve something the business is going to ask them to do, but also within the confines of the legal guidance available - and help them become instrumental in solving those problems.

(25:48):
So that is the format of the classroom. I know that we've had an FAQ section during our webinar, and that's also on the website, where potential students or anyone interested can find more information.

Debra J Farber (26:05):
Excellent. And are there set dates for it, like when does the next course start?

Amalia Barthel (26:09):
We have put a kind of start date as September 9th, because everyone goes back to school. We are hoping people will enroll through the summer, and we counted the 12 weeks. We want to make sure, with the Canadian Thanksgiving and the US Thanksgiving, we're going to try to fit everything in but be ready for Christmas. That was the point.

Eric Lybeck (26:30):
I think that we'll have some sessions - we have a 12-week plan, but some of the sessions will have maybe a midterm exam, or, if somebody is not available one week, we may have the ability to have a catch-up week. We have some extra time built into the schedule, and we don't want it to be so onerous that people are afraid, "Oh, there's 12

(26:53):
weeks of classes."

Debra J Farber (26:56):
Or, "Oh, there's exams, there's exams!" Yes, exactly.

Eric Lybeck (26:59):
We're really talking about a very conversational course, where we're going to be looking for students to provide their perspectives from these case studies and really apply it to their real-world organizations and the challenges that they have in their organizations. And so I think it'll be very valuable, and for me, for an hour

(27:26):
and a half or two hours - depending on the night or the day when it's offered - I think it'll be very useful and everybody is going to get a lot out of it. And at the end, they're going to have a number of different tools that they've worked on through the course, that they've been able to use to apply to different case studies, so those tools will be very useful as well. They'll have experience working with these tools. We mentioned the GIRA, the Generative AI Risk Assessment. There's a light version of that as well as a more comprehensive version of that. They'll understand how to potentially use the NIST AI Risk

(27:51):
Management Framework in a way that is a more light-touch, more pragmatic approach as well, and so I think that'll be very, very good for our students.

Debra J Farber (27:58):
Excellent. So, I'm curious: why did you decide to create a certification element at the end of your course?

Amalia Barthel (28:04):
I think that the amount of work the students are going to put in, as well as the breadth and depth of the knowledge that we bring in the materials, really lend themselves to more of a certification rather than just a course that people take.

Erick Lybeck (28:22):
Exactly, and we don't intend this to be a
one-time course.
We want to continue to buildthis course over time.
So we'll certainly be askingour students for feedback.
If we get feedback and werevise the course, we'll provide
the updated materials to thestudents going forward.
But we'll also bring in newemerging topics in AI.

(28:44):
So we actually one of these 12weeks we've reserved it
basically the 11th week to talkabout emerging topics in AI.
So if something comes up duringthe course and we haven't
prepared for it maybe in thethird or fourth session we get a
question or like we didn'taddress that during our plan for
this We'll, in the 11th session, then cover that topic.

(29:04):
So we'll be very adaptive tothe questions that the students
are bringing to us during thecourse as well.

Debra J Farber (29:11):
That sounds great. I think it makes sense - as this is an evolving field, there are always going to have to be updates. But I'm excited; I want to take your course. I think it's going to be pulling from all of the good approaches that are out there and pulling it together into one place where there's a methodology as to how you're

(29:32):
going through the material, and then having hands-on use cases. It really seems a great way to upskill. I could see this being something that people are putting on their Q4 - whatever quarter it is for them, but the end of the year, the last quarter of the year - learning plan, where they can say, "Hey look, I know AI is a big part of what we're working on, even though I'm a privacy professional. So it'd be great if I had this training, because it'll

(29:55):
supercharge my ability to, like, hit the ground running with actual implementation and assessing - not just a risk assessment, but future engineering and basically enabling AI systems."

Amalia Barthel (30:06):
And the thing is, Debra, we know that people need to continue to upskill. They need to show that they're keeping up with the times to be marketable, and we feel like, with what's happening with AI, this is the perfect pairing of the fact that we still need to engineer privacy and safeguards and guardrails

(30:30):
into technology from now to the end of times.

Debra J Farber (30:33):
Yeah, so why not do AI-focused privacy at the same time? If there's such hype around it right now and all companies are looking at "what is my AI strategy," it seems like an opportunity to also push the shift-privacy-left mantra of "let's address it earlier" and thus enable the company to innovate but also protect personal data at the same time.

(30:54):
Why not add them together and use this AI hype cycle? It's not going away. It's only going to get more and more embedded into our organizations. Seems like a good time to combine the effort.

Amalia Barthel (31:03):
We think so, because you and I and Eric, we know intuitively that actually privacy engineering really is about being in the know about how much good or harm you could be doing with the data and the processing of the data. And I think when organizations embark on these new projects

(31:25):
like AI, they actually have no idea whether their outcome is going to be good or bad for society, for other people and not just for their company, and I think this is going to be a huge eye-opener for them. They're going to go in with their eyes wide open, not shut.

Debra J Farber (31:41):
Definitely, that makes sense. So how much does the course cost? And by any chance, do you happen to have a discount code for listeners of The Shifting Privacy Left Podcast?

Amalia Barthel (31:50):
Well, I'm glad you asked. I think we have priced it incredibly reasonably, and I'm not going to say the cost on the podcast, but if anyone listening is interested and they've listened till the end, there is a bonus. If you go to the website, www.designingprivacy.ca - so,

(32:22):
www.designingprivacy.ca - and you click on the Course tab, then you're going to be able to see, right now, we're in a promotion mode. And so for the listeners here, we have a 300 US dollars discount, and you can submit an inquiry there and just say, "I'm interested, I'd like to take advantage of the code," and we will definitely make sure that you get that discount.

Debra J Farber (32:42):
Is there a specific code that you wanted to share? Podcast 300? Podcast 300 - there it is. I will also put it in the show notes. It'll be easy to refer back to again. But thank you so much for giving that coupon to our listeners. I really look forward to getting feedback from them on the course, and I hope we get a lot of signups.

Eric Lybeck (33:04):
We knew that your podcast definitely reaches a lot of the technical professionals that do work in privacy engineering, and so that's one of the reasons we reached out to you and wanted to talk to you about what we're doing. To me, it's just essential that, as a privacy engineer, I continue to upskill. That's one of the reasons, when Amalia and I started talking, we started talking about this course and doing this course

(33:33):
together, because it's part of my upskilling. Right? By helping teach the class, I am certainly learning much more, in much more depth as well, about different AI systems, different use cases that are going to be explained through these case studies. So that's really what I'd encourage for everybody that's a privacy engineer: really look at opportunities for continuous upskilling. AI is just moving so, so fast. It's probably not even reached the top of its hype cycle.

(33:53):
I know every organization I've talked to, certainly from my work at Privacy Code, was doing work with AI. We were doing work with AI at Privacy Code, using machine learning to read privacy policies and identify the privacy engineering requirements. So I really encourage your listeners to take a look at what we're offering and let us know if you're interested, and if

(34:16):
they have any questions, just email us and we'll definitely be more than willing to have conversations about what your listeners are looking for from an upskilling perspective.

Debra J Farber (34:25):
I was actually going to ask if you had any other words of wisdom to leave our listeners with today, but I think that upskilling makes a lot of sense as a last point. What about you, Amalia? Any last words of wisdom?

Amalia Barthel (34:36):
I do have a couple of points, because I really want everyone listening to realize that we are so different from a professional association or from an academic. We are practitioners, we're working in the trenches, so we are like you, and so we're learning together at this level - with, of course, the added bonus that we've been in the instructor

(34:59):
role for a long time, so we know how to teach, which is quite a different skill - but we feel your pain. We're going to be there with you to understand how to make sense of things, not like an academic, not maybe a professional association; we're practical. We're going to give you that practical, in-the-trenches knowledge. So that's one point I wanted to make.

(35:20):
We are like you. The second point I wanted to make: the European Union has a program. They have an academy that is free, and we're happy to provide the link through Debra. They've created at least one course that talks about how they're actually going to rewrite, or evolve the writing of,

(35:42):
policy in general laws so that they can be enacted into machine code. So this is the future, and I have sent that to Eric and I've sent that to a couple of my friends. I'm like, the European Union is doing this - they're leading the way. They've realized that's the biggest gap in adoption of their laws: that people don't understand how to make them real

(36:05):
into technology, into code.

Debra J Farber (36:08):
Wow, yeah, I would love to read up on that. So please do share the link and I will add it to the show notes.

Amalia Barthel (36:14):
Yeah, it's at academy.europa.eu, and we will send you, Debra, the link so everyone can see what we're talking about.

Debra J Farber (36:24):
Excellent. Well, Amalia and Eric, thank you so much for joining us today on The Shifting Privacy Left Podcast. Until next Tuesday, everyone, when we'll be back with engaging content and another great guest. Thanks for joining us this week on Shifting Privacy Left. Make sure to visit our website, shiftingprivacyleft.com,

(36:45):
where you can subscribe to updates so you'll never miss a show. While you're at it, if you found this episode valuable, go ahead and share it with a friend. And if you're an engineer who cares passionately about privacy, check out Privado: the developer-friendly privacy platform and sponsor of this show. To learn more, go to Privado.ai.

(37:05):
Be sure to tune in next Tuesday for a new episode. Bye for now.