
August 22, 2023 • 64 mins

This week's guest is Elias Grünewald, Privacy Engineering Research Associate at Technical University of Berlin, where he focuses on cloud-native privacy engineering, transparency, accountability, distributed systems, and privacy regulation.

In this conversation, we discuss the challenge of designing privacy into modern cloud architectures; how shifting left into DevPrivOps can embed privacy within agile development methods; how to blend privacy engineering & cloud engineering; the Hawk DevOps Framework; and what the Shared Responsibilities Model for cloud lacks. 

Topics Covered:

  • Elias' courses at TU Berlin: "Programming Practical Privacy: Web-based Application Engineering & Data Management" & "Advanced Distributed Systems Prototyping: Cloud-native Privacy Engineering"
  • Elias' 2022 paper, "Cloud Native Privacy Engineering through DevPrivOps" - his approach, findings, and framework
  • The Shared Responsibilities Model for cloud and how to improve it to account for privacy goals
  • Defining DevPrivOps & how it works with agile development
  • How DevPrivOps can enable formal privacy-by-design (PbD) & default strategies
  • Elias' June 2023 paper, "Hawk: DevOps-Driven Transparency & Accountability in Cloud Native Systems," which helps data controllers align cloud-native DevOps with regulatory requirements for transparency & accountability
  • Engineering challenges when trying to determine the details of personal data processing when responding to access & deletion requests
  • A deep-dive into the Hawk 3-phase approach for implementing privacy into each DevOps phase: Hawk Release; Hawk Operate; & Hawk Monitor
  • How the open-source project TOUCAN documents conceptual best practices for the corresponding phases of the SDLC, and a call for collaboration
  • How privacy engineers can convince their management to adopt a DevPrivOps approach



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Elias Grünewald (00:01):
Many times we have additional auxiliary services that process some fraction of the traffic to do analytics stuff or to just ensure the correct functionality of our system, and if personal data also gets into these services, then this is relevant from a privacy and regulatory perspective. And this is, of course, why we have to implement some good

(00:24):
communication and inventory measures to have an overview of that all the time, also in light of the quick changes that could happen to a system.

Debra J Farber (00:38):
Welcome everyone to Shifting Privacy Left. I'm your host and resident privacy guru, Debra J Farber. Today, I'm delighted to welcome my next guest, Elias Grünewald, Privacy Engineering Research Associate at Technical University Berlin, where he focuses on cloud-native privacy engineering,

(00:58):
transparency, and accountability in distributed systems and technology regulation. He also teaches undergraduate and graduate courses focused on privacy and cloud engineering. I love that he's leveraging technical expertise to teach practical privacy engineering concepts rather than only staying in the theoretical.

(01:19):
Welcome, Elias. I can't wait to dive into the conversation today.

Elias Grünewald (01:24):
Hi, Debra! Hi to everyone who's listening. I'm very happy to be here. I think this is my first actual podcast session, so I'm very excited about it.

Debra J Farber (01:34):
Well, we got lucky then, because I think we're going to have fun today with some of the topics that we're going to dive into. If you wouldn't mind, just tell us a little bit about your research journey. How did you get interested in privacy engineering and DevPrivOps, especially with a focus on the cloud?

Elias Grünewald (01:54):
Yeah, no problem. I started studying computer science basically as an undergraduate student, or a bachelor's student, in Berlin. Then I continued my studies in Spain and Italy, and then I came back to Berlin. We had this one course right before our Bachelor thesis, which was called 'Informatics and Society.' This was the first time my eyes basically opened once more

(02:18):
during my studies, where all the things came together somehow. We learned about programming, about what a database is, what performance optimizations you can do in code (which are all great things and lovely to learn about), but in this course, it was the first time I realized we can apply these technologies to do something better. There are so many societal problems or challenges, and also

(02:42):
regulatory frameworks that frame the work we do as computer scientists. This was the first time I actually got in contact with the real research perspective on privacy and privacy engineering. Of course, I knew about privacy before that, but then my former lecturer, and now one of my colleagues and my mentor, he

(03:03):
introduced me to that field, and it was a great pleasure to have that. Frank, if you hear this, thanks for that again. Then I wrote my Bachelor thesis, and also my Master thesis, with a specialization in cloud computing. Then the point came where I said, "Yeah, I'm now finished with my studies." I also worked part-time as a research student assistant at that time.

(03:24):
My now Ph.D. supervisor asked me, "Do you have any topic or a direction where you want to go for your PhD?" Then it was quite a natural choice to pick privacy and the cloud, because I'm working at a distributed systems research group. Now, I'm one of the guys researching privacy engineering,

(03:44):
and I'm deep-diving into that topic. I still love it, and I try to bring as many people as possible onto this nice track that we have now established at TU Berlin.

Debra J Farber (03:56):
Amazing.
There are so many opportunities, I think, to bring people in because, as you said, it is a societal challenge, or it addresses societal challenges. I could see that luring someone like myself in if I were an applied technologist. You mentioned some of the coursework. You're teaching some really interesting courses in the Information Systems Engineering Department.

(04:17):
Please tell us a little bit about them. The first one is "Programming Practical Privacy: Web-based Application Engineering & Data Management." Then the second one is "Advanced Distributed Systems Prototyping: Cloud-native Privacy Engineering." Tell us a little bit about both of those courses.

Elias Grünewald (04:37):
Yeah, both of these courses are examples of courses that I taught over the last semesters. The first one, the programming practical, is an undergraduate course where students come together in teams of four or five people, usually, where they are first confronted with a real problem. Let's say it like this: they learn through their first four or

(05:00):
five semesters all about programming, about distributed systems in a sense. Also, some of them start to take project management courses or something like that. This programming practical is a course where you do not get graded. So, there's no extrinsic motivation for the students to actually study for a final exam or something like that, but they

(05:21):
are motivated, and you, of course, introduce these topics to the students for solving real-world problems. This is a nice challenge, also for us as instructors, because every semester we try to find topics that are related to something that happened recently or to something that is, of course, relevant for our research.

(05:43):
We did that once in one iteration, definitely with a focus on privacy engineering. In that iteration we focused on two very relevant privacy principles. The first one was transparency and the other one was data access. For example, for data access or data subject access requests, the students then designed some nice dashboards, for example, to

(06:06):
load some data that you could get from different services - real-world services like Spotify, Facebook, Google, you name it. Then, usually from these data takeouts you just get a large zip file or a folder, which contains thousands of files - maybe JSON files or Excel sheets or, I don't know, some type of data that my grandma for sure wouldn't be able to

(06:29):
analyze or to read; but also for her, it's important to get to know what's in that data. So, my students then developed some nice approaches to actually visualize that data and to compare different data takeouts. And then we could really see what a service knows about you; and the students learned something both about data privacy and about their project management skills and how to

(06:50):
create such a real-world project, because we were working with real data, and of course, in this context. I think that's a nice way to teach the students, by letting them also shape the whole project with this scope on privacy, but without the formal coursework of homework and a final exam and so on, but rather with them being more and more

(07:11):
motivated over the weeks because they want to do something really cool in the end.
The second one is a master's course - Advanced Distributed Systems Prototyping. For this one, we expect students to have a strong interest in our topic. As a Cloud and Distributed Systems Engineering research

(07:31):
group, we of course also teach courses specialized on that; but in this Distributed Systems Prototyping course, they also come together in teams of usually seven students, and they get a real-world project, usually together with a business partner or with an NGO or another institution that we have collaborations with. Then they solve the tasks on their

(07:53):
own and usually also report to the external project partner, and to us as the instructors, once a week or with some presentations. The iteration which I called Cloud-Native Privacy Engineering was one where we worked together with an NGO which deals with mental illnesses, or the prevention of

(08:13):
mental illnesses, of young students and pupils in school. This was a young NGO developing a mobile app which wants to collect, or had the plan to collect, some mood diaries or information about mental illnesses, and contact information for people that can provide support in case of

(08:34):
mental illness. And of course, these are very sensitive data that have to be stored securely. The data has to be processed in a transparent way. We have to think about data minimization aspects and so on. That was once again a real-world project, which then also led to some very nice research ideas for us, which are now continuing. So we are really living that teaching and research paradigm

(08:57):
together, so that both can gain something from each other.

Debra J Farber (09:01):
I love that. I love the gaining of practical experience because, again, I think this way your students can kind of hit the ground running and deliver value if they end up in a privacy engineering role right out of university, especially when it comes to metrics. You know, metrics, having worked in this space for about

(09:21):
18 years, has been one of the hardest things about privacy - to demonstrate that you have the right levers in place, and you're doing all the right expected things, and you're finding relevant data and surfacing it. So what a great course, the Advanced Distributed Systems Prototyping.

Elias Grünewald (09:43):
So, they ask us after these courses about writing their final thesis with us, or they even want to work in that domain, and then of course, we're very happy to advise them in that direction. And it's right before they jump into their jobs. So, of course, it's something that can shape their lives for a very long time. So, it's very important to also guide them in that period.

Debra J Farber (10:06):
It is. What a great opportunity! I think that's awesome. Most of the conversation we're going to have today, I think, is going to be around multiple papers that you've written or that you've published. Some of them have been co-authored, so I'll call that out as we're going through them; but the first one I want to talk about is about transparency in cloud-native architectures.

(10:27):
This paper came out in March 2022 and it's called "Cloud Native Privacy Engineering through DevPrivOps," where you argue that the different dimensions of privacy engineering - like data minimization, transparency, security, etc. - must be factored into the development and operation cycles.

(10:52):
Woo! Of course, I agree! It's totally that "shift privacy left" mindset, right? So obviously I was excited to see that you were publishing content like that. In the paper, you identify conceptual dimensions of cloud-native privacy engineering by bringing together cloud computing fundamentals and privacy regulation; and so you propose an integrative approach that overcomes

(11:13):
the shortcomings of existing privacy-enhancing technologies in practice, and then you evaluate existing system designs. Could you please tell us a little bit about your approach and the findings? I don't know if you're able to articulate this beautiful visual from the paper in here, but I

(11:33):
will do my best to at least put it in the show notes or link to it.

Elias Grünewald (11:39):
Yeah, of course. So, the paper was one of my first papers, where I basically set out the plan and also my whole research direction when I started my PhD; so it's a very broad position paper, basically, on what I had already realized at that point during my research. Because there are so many privacy-enhancing technologies

(12:02):
out there, and they are great, and I learned a lot about designing different types of them - super great. But at the same time, since I had studied in that cloud direction and specialization during my studies, I was also many times wondering, "How are they then, in practice, aligned with the real-world system designs that we have out there?"

(12:23):
Because many times we see some privacy-enhancing technologies that could be applied to a very stand-alone, monolithic system that is placed somewhere and operated by a single person or even a team, that has some dedicated inputs and outputs, and this can all be handled well. You can apply a nice data minimization technique, for

(12:44):
example, or a transparency measure, but this does not play well with the real world of infrastructures and the development lifecycles of modern cloud architectures. That was somehow the pain that I realized developers have in that field, and so I decided to think about the first

(13:06):
perspectives and goals of privacy engineering in general, and it's fairly easy to at least list these basic

principles (13:13):
such as fairness, transparency, accountability, minimization, and so on. You can find them in the GDPR or other privacy regulations and scientific groundworks. But you also have to factor in the cloud basics. And these concern, of course, infrastructure-, platform-, or application-level challenges, or specifics of these different

(13:35):
dimensions, but also, of course, the whole organizational structure of the institution or the company developing software and their internal processes, as well as external legislation and frameworks that apply to every software development project. And then, if you factor all of these dimensions together, you

(13:56):
can come up with a matrix or some other visualization, just to not forget about all these dimensions; because it's surely not enough to implement a simple data minimization measure, for example, at the application level, when the infrastructure still leaks some information or has open ends to the sensitive data that you try to protect.

(14:18):
And so, this was very much about combining the fundamentals of both privacy engineering and cloud engineering, and I think this is somewhat what guided me right after that. We use it many times also as a checklist or some guidance to check whether we have thought about all the possible problems that could arise - the different dimensions in a cloud setting.
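
To make the matrix concrete: one can think of it as a cross-product of those layers and the privacy goals, where every cell is a design-review question. Below is a minimal illustrative sketch in Python; the example measures filled in are our own invention, not prescriptions from the paper.

    from itertools import product

    # Layers from the paper's visual: regulatory/organizational dimensions
    # on one side, the technical stack on the other.
    layers = ["legislation", "organization", "process",
              "infrastructure", "platform", "application"]

    # Privacy engineering goals drawn from GDPR principles.
    goals = ["lawfulness", "fairness", "transparency", "purpose limitation",
             "data minimization", "accuracy", "storage limitation",
             "security", "accountability"]

    # Each (layer, goal) cell asks: "Which measure covers this goal at
    # this layer?" -- unanswered cells are potential blind spots.
    checklist = {cell: None for cell in product(layers, goals)}

    # Invented example measures, for illustration only.
    checklist[("application", "data minimization")] = "field-level redaction at the API gateway"
    checklist[("platform", "security")] = "mTLS between services via the service mesh"

    uncovered = [cell for cell, measure in checklist.items() if measure is None]
    print(f"{len(uncovered)} of {len(checklist)} cells still unaddressed")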

Debra J Farber (14:42):
So, I'm looking at this visual. On the left-hand side it says: Legislation, Organization, and Process for the regulatory aspects, and then Infrastructure, Platform, and Application; and then you have all of the dimensions we would look for in privacy. Right? Like, as you mentioned, Lawfulness, Fairness, Transparency, and so forth and so on. Have you come across a company that even thought about these

(15:06):
overlapping dimensions, or does this appear to be really novel - from when your students have been working with organizations, or from your own work in writing this paper?

Elias Grünewald (15:17):
Yeah, of course. When we work together with business partners at our research institute, or with partners in our projects, and also when we talk to our students or to companies, then of course there are single experts on each of these dimensions - usually security specialists; that's a prime example of a well-researched domain.

(15:38):
Of course, there are many open challenges still, but people know about the toolbox for solving application-level security problems or platform-level security problems. But to find the people that actually know about the challenges of realizing a data subject access request (DSAR), or about how to ensure purpose limitation in a distributed

(16:00):
system - usually, if I ask about that in one of our larger courses, I don't know, maybe zero hands, maybe a few hands are raised to say they have even heard about the problem. And I think that already illustrates that many people are not aware of the whole design space that we have

(16:20):
to deal with as computer scientists, as legal specialists, and so on - and, of course, as everyone who wants to build their own business, because all of these combinations of challenges can, of course, induce business threats and so on. And, yeah, many people are not aware of the intersection of many of these principles, so we try to teach that and also

(16:42):
come up, of course, with smart solutions for solving them; but everyone is invited to do that and also to complete the picture with more examples like that. This contextualization also helps to better describe different privacy-enhancing technologies, because we can then say, "If you use this or that tool or technique

(17:02):
and combine it with something else, then we cover several of these dimensions"; while if we just call something a 'transparency-enhancing technology,' we still don't know whether it affects the platform or the application or something. I think that's helpful to dissect what we are talking about.

Debra J Farber (17:22):
Absolutely! So, here's where my brain went - because you're more applied-technical than I am. I was Head of Privacy Assurance at AWS, but embedded within the Security Assurance team, so my impact was limited. What I really learned and took away from my role there was how much the markets rely on The Shared Responsibilities Model

(17:46):
for cloud, right, where the contracts are written with pretty much all the cloud companies now - all the major ones like Google and Amazon, because there are others - where you're basically saying that the cloud provider is responsible for security of the cloud and privacy of the cloud, but whoever's putting personal

(18:06):
data in it - based on how you're using it, the organization that signs up - is responsible for the privacy and security in the cloud. Right? And so, this delineated responsibilities from a contractual standpoint, which makes a lot of sense. You need to delineate those, but the challenge is that this has been really about security.

(18:28):
There's very little of the privacy part baked in, and what I would love to see is something like what you put together here baked into the Shared Responsibilities Model to really address privacy; because how can you address each of these elements if you don't spell out in the contracting process that somebody's responsible for all or part of them?

(18:48):
What are your thoughts on that, if any, before we dive into it?

Elias Grünewald (18:53):
I agree, and it also inspires me to actually go in that direction or to talk about that more; because occasionally I also read some of these contractual or processing agreements, where people say, "Yeah, there are end-to-end security measures or encryption or something like that," and usually, in the list of technical and organizational

(19:16):
measures, only a few are listed.

Debra J Farber (19:19):
It's always security for privacy.

Elias Grünewald (19:22):
It is, of course, a problem that we have in that domain; and with that illustration, or with that figure, we can also help people to understand that it's more than security and data minimization. It's not about blaming anyone, but rather about establishing trust and agreeing that this is necessary and

(19:42):
relevant; because if we don't have any purpose limitation measure within our system, we still have the problem, or at least a chance, that we are not compliant with regulatory frameworks from different countries or member states or something like that, and then the security measures alone

(20:04):
don't help us. So, I think that's a great foundation for talking about the whole thing, if you want to be honest about your system and the partners you're working with.

Debra J Farber (20:14):
Excellent. Well, I'm glad I inspired you. I would love to see any fruit born from that idea, so feel free to share future papers with me. I know that you proposed a reference software development lifecycle called DevPrivOps to enhance established agile development methods with respect to privacy, so I'm definitely eager to hear more about this from you.

(20:35):
How do you define DevPrivOps here, and how does it work well with agile development?

Elias Grünewald (20:42):
Yeah, of course. So if we are looking at that from a scientific perspective - but I think all the industry people that are listening right now would agree - we have had cloud engineering for 20 years already, and agile development practices also evolved over that time, because the technology stacks became more

(21:04):
and more complex, and the system architectures are inherently complex. That's why people started to think about more structured ways of developing software in different teams that are then responsible for different components of the system - especially if we are talking about cloud-native solutions, so the most recent kind of

(21:27):
software architectures, where we have different microservices talking to each other just over APIs or through a message bus or something like that, exchanging messages, with many dozens or even hundreds of different components in a system that dozens or even hundreds of different teams - we are talking about the very large companies, online companies -

(21:49):
work on together. These teams do not want to follow, of course, the old approach of one team developing a piece of software and a whole other team just operating that piece of software, which we many times called something like 'throwing a piece of software over the wall' as a developer or a

(22:10):
development team, and then saying, "Yeah, it's ops' problem now to operate that system." And this comes together, of course, if we think about the privacy domain, or the whole compliance domain. It's not limited to privacy, but here we take the example that we have privacy measures and we have regulatory frameworks with

(22:31):
very extensive descriptions of what a system has to ensure; and the larger the system gets, the more things you have to ensure. Especially in Europe, we currently regulate very large online platforms. There are, of course, many things you have to think about - which is important, and it's right that we have to do it - but then the

(22:51):
development teams and the individual developers are many times not able to actually break that down to what they do in their daily business; because what you learn, or what you're trained with, in a usual computer science curriculum or the training phase of a company is, of course, how you develop software, and how you do that in a fast way and with

(23:11):
fast-paced development lifecycles, but not about privacy. And so, I put together two things that basically existed before, but whose combination I think makes sense - at least I hope so. The first is the basic DevOps cycle. I think many people have seen this figure of a number

(23:33):
eight basically lying on its left side, with eight different phases in this DevOps lifecycle: code, build, test, release, deploy, operate, monitor, and then plan your software development project again. This is a continuous cycle that runs the whole time a team is working on a software project, and it

(23:55):
introduces some separation of concerns in that process and also enables quick development; because we don't want to go back to the time when we had waterfall software development, where you basically compile a big book of requirements for how the software should look in the end, and then hope that three years later, or one year later, the software is ready to

(24:17):
use and also factors in all the regulatory requirements. We know that this doesn't work, and it especially doesn't work with the fast-paced development environments and technological advances that we have today; and so we have to factor these privacy problems into how software is actually built in distributed teams. We know that it's built more and more

(24:39):
frequently using DevOps practices and the DevOps culture in general, and so we try to explain how you can do different privacy tasks within such a DevOps cycle. This includes, of course, strategy discussions or task distribution and technology selection at the beginning of such a lifecycle,

(25:00):
where you also talk about which privacy-enhancing technologies you want to employ in your software development process; but it also introduces new challenges and deals with the technological givens that we have, for instance in monitoring. Take this as an example, where software developers are

(25:20):
already doing tasks like logging, tracing, and monitoring for solving reliability or fault-tolerance tasks, and do that already because there are business needs for that, or because they are interested in the general performance of their system. Both of these aspects play together, and we say that, also

(25:40):
from a privacy standpoint, we can and should do logging, tracing, and monitoring, for example, because there are established toolchains that work well together with existing technology stacks and the cloud providers and the platform givens and so on. And instead of coming with a privacy checklist mechanism or

(26:01):
manual email communication with the data protection officer within your company, or something that is very far off from the actual process of how the software is developed, we should focus more and more on privacy-enhancing technologies that are built, established, and used in many of the different DevOps phases, such as the monitoring phase I just explained;

(26:24):
because then we can keep the fast-paced development cycle as is, and also have solid privacy solutions. I think it just makes sense to not introduce overly complex processes next to the normal course of development, the lifecycle of the developers.
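
As a rough illustration of what embedding privacy tasks into the DevOps loop can mean in practice, here is one possible mapping of the eight phases to example activities. The mapping is our own sketch, not a prescription from the paper; the Hawk components referenced are discussed later in the episode.

    # One illustrative mapping of DevOps phases to privacy activities.
    devprivops_tasks = {
        "plan":    ["select privacy-enhancing technologies", "privacy threat modeling"],
        "code":    ["annotate personal data fields", "use vetted PET libraries"],
        "build":   ["scan dependencies for data-leaking packages"],
        "test":    ["test purpose-limitation rules", "rehearse DSAR responses"],
        "release": ["privacy-aware canary analysis (cf. Hawk Release)"],
        "deploy":  ["pin regions/zones to avoid unintended third-country transfers"],
        "operate": ["label and track personal data flows (cf. Hawk Operate)"],
        "monitor": ["collect privacy metrics, keep the RoPA current (cf. Hawk Monitor)"],
    }

    for phase, tasks in devprivops_tasks.items():
        print(f"{phase:>8}: {'; '.join(tasks)}")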

Debra J Farber (26:40):
Yeah, I think that makes a lot of sense. Go with the processes that developers already know and understand, so it doesn't feel like you're adding so much pressure on them to go and learn something new or do something outside of their normal processes. So, it makes a lot of sense. You might have already started to answer this, but can you tell us how DevPrivOps can

(27:01):
enable formal privacy-by-design (PbD) and by-default strategies?

Elias Grünewald (27:05):
So, privacy-by-design and by-default is, of course, about ensuring all the privacy principles that we mentioned before, and the development lifecycles are so fast-paced. You probably know that from Amazon, but we also know from a lot of larger players that they ship components of their

(27:26):
software multiple times a day. This is not how privacy processes usually work. At least, what I got as a response from so many different companies, or representatives of them, to whom I talked, is that there is a Legal department, and maybe even a Privacy department that has technological experts, but they work in a very

(27:49):
different mode of operation, if you want to say it like that; and usually there are things like audits - security audits, but also privacy audits - that happen maybe every half a year or something like that, and come with a large paper trail, are interview-based or checklist-based or even Excel-sheet-based, where you have to fill in, as a developer, what you did over the

(28:11):
last period of time, be it half a year or be it a week. This is already so burdensome and so far off from the practice the developers want to work with, or how they actually work; and so, I think it's better to align with how the developers work, because they can make so many errors, but, of course, they can also

(28:32):
shape how the whole thing is working and make it better. So, we have to align with how the software is actually developed, and also with how fast it is developed; because if we employ the strategies that we employ for solving other distributed-system quality-related problems or challenges,

(28:55):
we already have established mechanisms - like metrics, for example - that report back the current performance of our system, and can do that on a millisecond basis or something. But we don't have these established toolchains and reporting structures ready yet for privacy; instead, we usually have manual processes, and that inherently cannot lead to a good overview

(29:19):
of a large system. So we have to do better and think about better solutions for that.

Debra J Farber (29:25):
Thank you so much for that. That's really helpful. And now I want to turn our attention to the newly-published paper that you co-authored. It's called "Hawk (H-A-W-K, like the bird): DevOps-Driven Transparency and Accountability in Cloud Native Systems," which was published this past June, so just about two months ago,

(29:47):
where you outline some of the challenges between regulatory requirements and DevOps-focused system engineering, and you propose a set of novel approaches that you refer to as the Hawk Framework, explicitly tailored to the specific phases of the DevOps lifecycle that are most relevant in matters of privacy-related transparency and accountability

at runtime (30:09):
so: release, operation, and monitoring. So, let's unpack this. First, can you shed light on why there is so much tension between GDPR requirements to maintain records of processing activities (otherwise known as RoPAs) and DevOps-focused system engineering? How does this impact transparency and accountability?

Elias Grünewald (30:31):
Yeah, sure. It's great that you mentioned that it came out just very recently. I presented it in Chicago at the IEEE International Conference on Cloud Engineering, and what I showed the audience there was basically how you do such a RoPA for a real-world system. The real-world system is actually the one that I mentioned earlier in our coursework, which was the app

(30:54):
for dealing with personal data about mental illnesses of young students and pupils; and what I then showed them is how a records of processing activities sheet is usually done. I took the example from the French data protection authority, the CNIL. What this is, is a large Excel sheet where you

(31:15):
put in all the categories of personal data that are being processed, their purposes, storage limitations, access and deletion rights, and so on - so, all the different dimensions of transparency that you could have and, of course, also the accountability information that is relevant for demonstrating compliance, both

(31:35):
within a company or an institution that develops a piece of software or a system, and, of course, also when reporting that information to data protection authorities (DPAs). And if we compare the system architecture of just that system my students built with the Excel sheet and the requirements

(31:56):
from the GDPR (because we are here in Europe, of course, and dealing with European regulation), then this largely impacts how transparent and how accountable we can design this whole architecture; because there's so much information flowing through that system, and the different data flows and different settings in which such a system could be deployed

(32:20):
largely influence how such a RoPA could look. Much of the information also depends on the runtime; because if we scale such a system in a data center or across availability zones, which then affects multiple countries and introduces third-country transfers automatically, without any specific decision that we put in there before, then many

(32:45):
privacy-related activities do happen at runtime that we couldn't have seen before, or could only have guessed could happen at a large load on the system. That's why it's very important to come up with new transparency- and accountability-focused tools and general approaches for how we can solve these problems in a DevOps, or DevPrivOps, fashion.

(33:08):
So, what we wanted to do there is to dive deeper into the transparency and accountability principles of the GDPR and privacy engineering in general. This is only two of the dimensions that I mentioned earlier, and it's still so complex to actually do that if you're willing to do it in a real-world systems engineering context; and that's why we urgently have to come up

(33:31):
with some best practices on that. That's what we are working on currently in the Hawk Framework, but of course, also beyond that.

Debra J Farber (33:40):
Amazing. So, what are some of the specific challenges that engineers run into when they try to determine the details of personal data processing as they're responding to access requests or deletion requests?

Elias Grünewald (33:54):
Yeah, so responding to access requests or deletion requests is only possible if we have transparency - full transparency - about the whole system in place. If we're talking about cloud-native solutions, we then see multiple microservices, for example, interacting with each other. Every one of these microservices could have its individual

(34:15):
database, which can follow different paradigms - so we can have an SQL store here, a key-value store somewhere else - and these services could also process personal data, get it from the client or from third parties, and share it with those third parties. And at the same time, next to that inherent system complexity,

(34:37):
we have distributed responsibilities. So, different engineers are responsible for the development and operation of different services; and if we don't have a clear, transparent view of what is stored where (and I'm referring to storing here because this is, of course, what matters most about data that is kept for a longer term), but also about data in transit or data that

(35:01):
just gets shared occasionally with third parties - if we don't have that transparency, we basically cannot fulfill that access request or deletion request. That is why, before we can even think about access or deletion, we have to have an up-to-date inventory of all the personal data that we have in our system, or that is processed through the system, and that also includes all the different

(35:23):
paths some data could go through. Of course, everyone thinks about the core functionalities of your system, and about the user database or the shipping-details database, if we are talking about an e-commerce scenario or something like that. But many times we have additional auxiliary services that process some fraction of the traffic to do analytics

(35:47):
stuff or to just ensure the correct functionality of our system; and if personal data also gets into these services, then this is relevant from a privacy and regulatory perspective. This is, of course, why we have to implement some good communication and inventory measures to have an overview of that at all times, also in light of the quick

(36:11):
changes that could happen to a system.
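
One way to picture such an inventory measure: every service - including the auxiliary ones - exposes what it holds about a data subject, and a coordinator fans an access request out to all of them. A minimal sketch, with hypothetical service names and endpoints:

    import json
    from urllib.request import urlopen

    # Hypothetical inventory endpoints; a real system would discover these
    # from a service registry rather than hard-coding them.
    SERVICES = {
        "user-service": "http://user-service/inventory",
        "order-service": "http://order-service/inventory",
        "analytics-service": "http://analytics-service/inventory",  # auxiliary services count too
    }

    def access_request(subject_id: str) -> dict:
        """Fan a data subject access request out to every known service
        and aggregate the personal data each one reports holding."""
        report = {}
        for name, url in SERVICES.items():
            with urlopen(f"{url}?subject={subject_id}") as resp:  # assumes JSON responses
                report[name] = json.load(resp)
        return report

The answer is only complete if every service that may touch personal data is registered - which is exactly the inventory problem Elias describes.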

Debra J Farber (36:14):
Yeah, that makes a lot of sense. I mean, I've seen examples in my career where some data has been checked into, you know, a particular repository, or uploaded to a system that didn't account for personal data, so you didn't necessarily know how to find it again if you needed to present it for, you know, an access request or to delete it.

Elias Grünewald (36:37):
To add on that: there are new technologies developed over time that the law couldn't have known of some years ago. Take unstructured data stores, for example, where different kinds of data land (so both personal data and very sensitive data of other kinds, but also data that is unimportant from a privacy perspective); to dissect what is actually

(37:01):
stored there is a hard problem from a computer science perspective, of course, and that's also why we have to work on it. Just to add on that.

Debra J Farber (37:10):
Absolutely, absolutely. There are definitely discovery tools out there. I even sit on advisory boards for some companies that work on that. But definitely, from what you said, from a computer scientist's perspective, you kind of want to plan for that, not just say, "let's try to find discovery tools later to figure out what's personal data." Right? It just makes it more difficult. But it is great to know that there's new technology out there.

(37:31):
So, okay, let's get into the meat of this paper. Tell us about Hawk. This is your approach to help data controllers align cloud-native DevOps with regulatory requirements for transparency and accountability. Walk us through your distinct approaches for implementing privacy into each DevOps phase. I'm going to state the phase (there are three of them) and give

(37:54):
a brief overview, and you can then dive a little deeper and tell us more about it. The first one is the release phase, which you call Hawk Release. Actually, I'm not going to describe it; I'm going to let you describe it. Hawk Release.

Elias Grünewald (38:07):
Yeah, quickly commenting on the name Hawk: it's also a metaphor for having a bird's-eye view of your whole system. That's why we call it Hawk. And then the individual components relate to the DevOps phases, or DevPrivOps phases if you want to call them that. For Hawk Release, the basic challenge that we observe is the

(38:29):
one that I mentioned already - fast deployments. So, we have many deployments every day, and with every new deployment of a new component we can potentially have new personal data processing activities. Or, of course, personal data processing activities could vanish in case we delete a service or something. The usual approach for deploying a new service to production is

(38:55):
using different deployment strategies. We have had approaches out there for many years already, like A/B testing or blue-green deployments and so on, and what we look at from a privacy perspective is 'canary releases.' The general idea of a canary release is that you put a new piece of software into the system - a new version of a

(39:17):
service, for example - and only share with it a fraction of the current traffic that goes through the system. So, users make requests to the system and you say, "Let 5% of the users now use that new version of the service, and the other 95% still use the old version; but with these 5% of users we can already try out the

(39:40):
new functionality to see if it works correctly." That's the usual approach. But what we can also do is check, of course, privacy-relevant information in that deployment step, or in that release step; because if we observe that there is a new personal data processing activity happening - because people send personal data to the system and you have established

(40:02):
some transparency measures that observe that - then we can, in such a release process, check whether that is a processing activity that should be there, and that it is already secured, and that it is written down in your records of processing activities and maybe even disclosed in your privacy policy and so on. And you can then very easily check if that is in there and

(40:26):
if everything works correctly, without affecting all the users of your current system and potentially opening up some vulnerabilities, or even harming your users directly because some data flows around services that shouldn't process that kind of data. For the concrete implementation, we use some state-of-the-art technologies.

(40:46):
Just to name them quickly: Flux's Flagger, and Kustomize for custom Kubernetes templates. So, we are working with very recent technology to, again, align with the current stack of technologies that we have there. The basic idea is to have this deployment of a new service version for just a fraction of the users, and for the time

(41:09):
that we deploy that new version for only a fraction of the traffic, we collect metrics. These metrics are, of course, privacy-related. So, we use another transparency tool that we developed, which is the Transparency Information Language and Toolkit (TILT), which allows us to document which kinds of data are processed

(41:31):
in a service. If we see that there are changes to our system that affect the overall privacy situation, then we can roll back that new version of the service; or we can finish the release process and say, "Yes, this new version is also compliant with what we want to have in the system." And this is basically what Hawk Release is about.

(41:54):
It takes the conventional release strategies, but applies them to the privacy domain, basically.

Debra J Farber (42:02):
That's awesome. I love to see it. And the second DevOps phase is what you call Hawk Operate. Tell us about that.

Elias Grünewald (42:11):
Yeah, Hawk Operate is basically about labeling and tracking personal data flows between running microservices - and also traffic that comes from or goes to the outside of the service - with some dedicated transparency measures. So, what we first have to know is that usually, in such

(42:31):
microservice environments, we have polyglot services. One service is written in Java, another one is written in Python or JavaScript, and all of them use different kinds of libraries to make HTTP requests or even remote procedure calls, or different kinds of communication mechanisms to talk to each other. Traditional, or former, privacy-enhancing technologies

(42:55):
tried to develop specific libraries or tools that were able to label some kinds of operations or methods within your programming language, with a dedicated library, for example. But using that approach, we need libraries for many different languages, because in such a complex architecture there could be many languages used; and then we have to

(43:18):
develop libraries that are compatible with each other, and so on, and that's very tedious and hard for the engineers to learn. So, what we thought about is using the service mesh paradigm, which is basically an architectural idea to let developers write their services as they do in the usual way

(43:38):
and make requests to other services; but what the service mesh is doing is hooking right into the communication between services, putting in another level of abstraction to ensure new functionality just for the communication between different services. Usually, it is used for security measures - encrypting

(44:01):
the traffic, for example, between different services, without employing an encryption and decryption library at both ends of the communication. But we can, of course, also use this for transparency and accountability tasks, because such a service mesh is capable of easily tracking all the traffic that is flowing around in your system. And we hooked into that - we wrote an extension to one

(44:25):
of the most famous service mesh implementations, which is Istio. This extension is basically a plug-in that you install and then it's ready to use, and it is able to label personal data that is exchanged between services. The service mesh extension then realizes when that personal

(44:45):
data flows around in your system, and it of course recognizes the structure of personal data that you labeled once, if it occurs multiple times in your system. Doing that, where you label your communication once and can then observe directly at runtime where personal data are actually exchanged, you can also collect metrics -

(45:06):
so that you know the email address of Debra is used in that system in 25% of the current cases, and the first name of all the people using that system for a different percentage. That's very helpful, because then you can, of course, have a clearer view of what is happening in that system, with a

(45:28):
one-time labeling effort. Also, just to not present it as if it would be perfect - it, of course, has some limitations currently, one of which is the one-time labeling effort. I think one can live with that, of course; but, as the technical people will know, service meshes also have some performance impact. That is clear. But when we talk about transparency and accountability,

(45:50):
we can also think about not having that approach on all the time, but only for the times in which we deploy new pieces of software, or running it regularly at times when the current load on the system is not so high, so that we don't run into overall performance problems, for example.

(46:10):
But we quantified that, and if people want to use it or want to further optimize it, then of course they are invited to. We are talking more about the general approach of going away from individual libraries and augmentations to individual services, and rather looking at the interfaces between different services with that service mesh extension in Hawk Operate.
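
To convey the labeling idea without the real Istio extension, here is a toy interceptor: personal data patterns are declared once, and every message passing between services is checked against them. The field patterns and metric layout are invented for illustration.

    import json
    import re
    from collections import Counter

    # One-time labeling: patterns (or schema references) that identify
    # personal data, so matching traffic is recognized wherever it
    # reappears in the mesh.
    LABELS = {
        "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
        "phone": re.compile(r"^\+?[0-9 ()-]{7,}$"),
    }

    flow_metrics = Counter()  # (src, dst, label) -> observed message count

    def intercept(src: str, dst: str, payload: bytes) -> bytes:
        """Sidecar-style hook: inspect a message in transit, record which
        personal data labels it carries, and pass it through unchanged."""
        try:
            body = json.loads(payload)
        except ValueError:
            return payload  # non-JSON traffic passes through unlabeled
        for value in body.values():  # toy version: top-level fields only
            if isinstance(value, str):
                for label, pattern in LABELS.items():
                    if pattern.match(value):
                        flow_metrics[(src, dst, label)] += 1
        return payload

    intercept("web", "user-service", b'{"email": "debra@example.com"}')
    print(flow_metrics)  # Counter({('web', 'user-service', 'email'): 1})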

Debra J Farber (46:34):
That's so helpful. That is really, really cool! And then, the last DevOps phase is monitoring. So, tell us about Hawk Monitor.

Elias Grünewald (46:45):
Yeah. So, Hawk Monitor is basically connected to what I just said, because with the Hawk Operate component we collect the information about what is flowing around and where personal data can be stored; and with Hawk Monitor we are actually trying to visualize and aggregate that information, and also to query it for the questions that you have as a Data Protection Authority (DPA)

(47:08):
or a Data Protection Officer (DPO), or just a technical or legal expert who is dealing with the privacy challenges within your company. What we are doing there is basically aggregating all the information that we get from the Hawk Operate component; but we also have an API to input other kinds of data - think of Data

(47:29):
Loss Prevention (DLP) systems or other transparency or security measures that are already out there - to store that transparency and accountability information in one place. Then, on top of that, we offer a query language and visualizations, which basically allows us to have a dashboard with all the relevant transparency information.

(47:51):
And also, you could generate out of that at least part of a RoPA document, or other legal documents that you have to provide. What is also interesting is that if we have these metrics from the Operate component collected and then stored within the Monitor component, we can of course feed these kinds

(48:12):
of information back into the Release component that we talked about earlier; because if we have historic information about what happened in the system, we can compare that to new versions of services that come into a system's architecture, and not only draw service graphs or something like that that have changed, but actually quantify privacy-related

(48:35):
information. I just mentioned personal data categories that might have changed, or that are used for a certain fraction of the traffic that comes into the system; but of course this could also be applied to storage periods of personal data that underlie certain regulations limiting the overall storage of personal data. With that component, Hawk Monitor, you can actually prove

(48:57):
and demonstrate that you have established measures that frequently delete personal data from your system. Or, for example, you can then show from the historic information that last week the information about Elias was in the system, or some records of a certain type were in the system, and now they are not anymore.

(49:18):
And I think that's a useful tool. We propose this general approach, again, with some frequently-used technical tools like Prometheus metrics and Grafana dashboards and parameterized queries and so on. But it's more about the general idea that we have to collect runtime information and display it in a way that is

(49:38):
queryable and machine-readable, and can then be transformed into the documents that we need in the end.
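
Continuing the toy example from above, the monitoring side condenses such flow metrics into machine-readable records from which a draft RoPA fragment could be generated. The field names here are illustrative only, not the format Hawk Monitor actually emits.

    from datetime import date

    def ropa_fragment(flow_metrics: dict) -> list:
        """Condense observed (source, destination, label) flows into draft
        entries for a record of processing activities; a DPO would still
        review and complete them (purposes, legal basis, and so on)."""
        entries = {}
        for (src, dst, label), count in flow_metrics.items():
            entry = entries.setdefault((dst, label), {
                "processing_service": dst,
                "personal_data_category": label,
                "observed_sources": set(),
                "message_count": 0,
                "last_observed": date.today().isoformat(),
            })
            entry["observed_sources"].add(src)
            entry["message_count"] += count
        return list(entries.values())

    # Example input in the shape produced by the interceptor sketch above.
    sample = {
        ("web", "user-service", "email"): 25,
        ("user-service", "mailer", "email"): 5,
    }
    for entry in ropa_fragment(sample):
        print(entry)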

Debra J Farber (49:43):
Yeah, that makes a lot of sense. It's almost going to look like magic when you do it right. I'm wondering, have you shared this approach with regulators? Because obviously, your DPOs, your regulators - they're going to want accountability metrics; they're going to want to see proof of these things. And with Hawk Monitor and all the

(50:04):
different DevOps phases put together, it sounds like you could really demonstrate that you're doing data protection compliantly and to the benefit of individuals. Right? You'll be able to assure that you're deleting data when you say you have it and you're going to delete it, right? Just curious if any regulators have weighed in on this approach.

Elias Grünewald (50:24):
Yes, of course, we are also in close contact with data protection authorities and experts in that field. I presented some work earlier this year at the Privacy Symposium in Venice, which was both a political and a scientific event. But maybe even more importantly, we presented some of our new DevOps-driven approaches at the CPDP Conference in Brussels in

(50:47):
May. We organized a panel there, which you can check out on YouTube as well, where we also showed many of our new approaches. We don't want to force anyone to use this approach now, and we don't think that these approaches that we, as a very small research group, developed in the end are the silver bullet

(51:08):
now - they are also far from production-ready components, or something like that; but we want to establish this new mode and this new way of doing it, and I think many of the regulators that we talked to are very interested in that, because they are also overwhelmed by the different structures in which the relevant

(51:29):
information reaches them, or by the variety of systems that they cannot audit because they are so difficult to understand. I think we can solve this from two ends.

First, of course, bottom-up: we, as companies or people (51:41):
working in companies, try to establish these tools for our own comfort and our safety and for establishing trust and so on; but also top-down, from the regulators, to guide users or developers that are new in that sphere to certain well-suited tools, or to our

(52:04):
approaches in general, that they can then implement in their systems. This includes, for example, the machine-readable representation of such information, because that constitutes one of the keys that allows for scalable privacy in such a setting.

Debra J Farber (52:21):
Gosh, that makes sense, and that's really exciting. So, for this Hawk paper, can you sum up the general findings of the research?

Elias Grünewald (52:29):
Yeah, sure. So, the general approach seems viable (and I think that's, first of all, a good message for everyone interested in DevOps and privacy tools), and we also think that the phases we selected - release, operate, and monitor - are well-suited for doing these tasks.

(52:50):
What we have not yet checked is, for example, what we could do in the testing phase, so even before deployment; but I think there are many startups out there that are already working on that, regarding continuous integration as well, and so on, and all these tasks. So, I think these approaches could play well together, and we have just opened up this space for new tools that we will see,

(53:14):
hopefully, in the near future for these three specific phases. We summarize, for each of these phases, the general challenges, our proposed approach, and, of course, the limitations that come with our current implementation. I already mentioned some performance impact that, of course, differs from system to system and depends on the kinds of data

(53:38):
that you have in there, and also, of course, on the complexity of the overall system; because we were limiting our prototype to some very widely-used technologies, such as REST APIs with JSON messages exchanged over REST. But of course, there are many more technologies in use that we currently have not

(53:59):
covered with our framework, that are there in real-world systems, and that also have to be covered in future iterations of these tools. Still, I think the approach is viable and should be compatible with even more transparency-enhancing tools - for example, all the logging, tracing, and monitoring tools

(54:20):
that I mentioned earlier - and with that, we could really come up with something powerful and also open. Maybe I should also mention that all the developments that we did here are open source. You can check them out on our GitHub. I think that's a very good way of doing transparency and accountability in the new way and bringing it forward.

Debra J Farber (54:43):
That's awesome! I'll put a link to the CPDP talk, the YouTube link, as well as your GitHub link, if you go ahead and send those to me. We're getting close to the end of the conversation, but I definitely wanted to bring up this dedicated project on transparency in cloud-native architectures that you're

(55:04):
working on, called TOUCAN (like the bird). Does this build on the work you've done with Hawk? I know you have an open call for collaboration, so just tell us about TOUCAN and what you're looking for.

Elias Grünewald (55:15):
Yeah, as you already realized, I have a bit of a passion for birds. And yes, TOUCAN is a project funded by the Federal Ministry of Education and Research here in Germany. I currently had the chance to hire four developers, who build upon the Hawk project and also other projects

(55:37):
that we have around here at our department. It is a very dedicated research project regarding transparency and accountability, to bring that forward. What we want to do there is, of course, to research more phases of the DevOps lifecycle, but also to enable more and more interoperability between different existing transparency

(55:59):
tools and technologies and cloud platforms in general, and to figure out how to do that. The open call for collaboration, which I can only repeat here - and I would be very happy if anyone, in response to what I just explained, would be interested in setting up a call or a collaboration - is that we at TU Berlin have several research

(56:20):
projects about privacy engineering in general - also about anonymization in streaming environments, for example, as a completely different topic; but for transparency and accountability, we have this dedicated project and are always looking for use cases and application areas, so different companies that we could involve or, of course,

(56:42):
data protection authorities that are interested in our approaches. If we should apply them to a certain use case, you are very welcome. I would like to give just one example. We have developed the Transparency Information Language and Toolkit (TILT), which I mentioned earlier, which is a machine-readable representation of transparency information - everything that has to be in a privacy policy,

(57:06):
basically. We already have an open repository covering many different companies, and if you want to be in that repository, then feel free to contact me, or just open up a pull request on GitHub again. This is a great way to collaborate with us, and we are very open to follow-up projects, panel discussions, and

(57:26):
so on. What we as researchers, of course, like to do is talk about our new ideas and validate them in a system context. So that's, of course, also something that we are looking into. I, as a Ph.D. student, of course, do not have access to multimillion-dollar companies, or even to companies from many different

(57:47):
domains; and if you're working in one that is dealing with these topics, then reach out and tell me why our approaches couldn't work in your setting, and then we can figure out if we can do it better. That's, of course, the goal of the whole thing.
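
For a flavor of what machine-readable transparency information looks like, here is an invented, TILT-inspired fragment. It is not the real TILT schema (that lives in the group's GitHub repositories); it only illustrates how privacy-policy content becomes queryable once it is structured.

    import json

    # Illustrative only -- NOT the actual TILT schema.
    transparency_info = {
        "controller": {"name": "Example GmbH", "country": "DE"},
        "data_disclosed": [{
            "category": "email address",
            "purposes": ["account management", "notifications"],
            "legal_basis": "GDPR Art. 6(1)(b)",
            "storage_period": "P2Y",  # ISO 8601 duration: two years
            "recipients": ["mail delivery provider"],
        }],
        "third_country_transfers": [],
        "data_subject_rights": ["access", "rectification", "erasure", "portability"],
    }

    print(json.dumps(transparency_info, indent=2))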

Debra J Farber (58:02):
Awesome. That's great! What's the best way for people to reach out to you? Is it via LinkedIn or through your research email?

Elias Grünewald (58:10):
Yes, via email works. LinkedIn works. GitHub works. X/Twitter, of course, also works. You can reach out.

Debra J Farber (58:16):
It's not a bird anymore, though. It's X now instead of Twitter. All right, well, I will put all of that in the show notes - ways to contact you. So, what advice do you have for privacy engineers who want to shift left in their organizations via a DevPrivOps

(58:38):
approach? Basically, how can they best convince their Head of Engineering and management to adopt a DevPrivOps approach?

Elias Grünewald (58:42):
Yeah, I think if you want to convince the developers that you're working with, you tell them, "We can keep up the speed of development and operations"; because everyone is proud of deploying a new piece of software or a new version of a service to the production system, and a developer that built something like a new feature or

(59:05):
implemented some optimizations doesn't want to get annoyed with burdensome paperwork or manual processes that just impact the speed of development. With DevPrivOps-driven, or DevOps-driven, transparency-enhancing technologies and accountability measures, we can keep up that speed and not annoy developers; and at the

(59:27):
same time, establish more and more trust within the organization - so that developers who want to work on their own, and who want to be responsible for a certain piece of software, can actually do their thing and report through structured interfaces like ours. If we have that, everything works well, and this is something

(59:49):
promising, I think.

Debra J Farber (59:51):
I agree. I'm really excited about it. I'm excited about your approach. The DevSecOps approach worked really well for security, so there's good reason to believe that if you invest in a DevPrivOps approach, then, you know, it can really work well for privacy within orgs. Do you have any other pearls of wisdom that you'd like to share with the audience today before we close our conversation?

Elias Grünewald (01:00:15):
I think the hint is, for everyone who is interested in that sphere - and I hope many of the people in the audience are now hooked on the topic of transparency and accountability as well - that we have so many solutions for security- and data-minimization-related tools and problems, of course, but not for transparency and accountability; and I think that's something I would

(01:00:36):
like to talk about with you, and that is what the call for collaboration stands for. Also, as I saw in the list of episodes of this podcast, there is much information about that. So, that's great. If we create a community - a global community - on these topics, and if you all can engage in the conversation

(01:00:58):
openly, then this is great for everyone.

Debra J Farber (01:01:02):
I agree; more information-sharing is definitely going to help advance the ball faster when it comes to DevPrivOps, so I think that's really great advice. This has been a wonderful conversation. I'm really excited by the work you've done, and I hope you get many people reaching out to you to collaborate. I hope to have you on in the future as you do more research. You're spot on; you're working directly in the area where

(01:01:26):
there's a lot of desire to learn more, I think, from privacy engineers and those engineers who want to become more privacy-knowledgeable. So, you know, I look forward to having you on in the future.

Elias Grünewald (01:01:38):
Thank you so much for inviting me, and to everyone who listened to this episode: thank you very much.

Debra J Farber (01:01:43):
Absolutely. Thank you so much for joining us today on Shifting Privacy Left to discuss your work with DevPrivOps. Until next Tuesday, everyone, when we'll be back with engaging content and another great guest. Thanks for joining us this week on Shifting Privacy Left. Make sure to visit our website, shiftingprivacyleft.

(01:02:03):
com, where you can subscribe to updates so you'll never miss a show. While you're at it, if you found this episode valuable, go ahead and share it with a friend; and if you're an engineer who cares passionately about privacy, check out

Privado (01:02:17):
the developer-friendly privacy platform and sponsor of this show. To learn more, go to privado.ai. Be sure to tune in next Tuesday for a new episode. Bye for now.