Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Victor Morel (00:00):
The idea would be to build these privacy profiles according to a longitudinal study, so that you will have your predefined profile - like your 'privacy pragmatic' or 'privacy guardian', for instance - and according to that, you will have predefined choices about your privacy decisions.
So, it's always about empowering people, about
(00:23):
supporting their decision by providing better and informed notices, for instance, and then we will combine it with another kind of automation - with on-the-fly privacy permissions - so that people are not burdened too much with the decisions, but only when needed, when required.
We're trying to, like, design it to solve this tension,
(00:45):
basically, between usability and lawfulness in this kind of environment.
Debra J Farber (00:57):
Welcome everyone
to Shifting Privacy Left.
I'm your host and resident privacy guru, Debra J Farber.
Today I'm delighted to welcome my next two guests, Victor Morel and Simone Fischer-Hübner, who will be discussing their recent paper, "Automating Privacy Decisions: Where to Draw the Line," where they outline the main challenges raised by the
(01:20):
automation of privacy decisions and provide a classification scheme that addresses those challenges.
Simone Fischer-Hübner has been a full Professor at Karlstad University since June 2000, where she is the head of the Privacy and Security (PriSec) Research Group.
She received her undergraduate degree, PhD, and Habilitation
(01:44):
degree in Computer Science from Hamburg University.
Impressively, Chalmers University of Technology awarded Simone with an honorary doctorate two years ago, where she is now a visiting professor.
Simone has been conducting research in privacy, cybersecurity, and privacy-enhancing technologies for more than 30 years.
I can't list all of her many accomplishments and projects
(02:07):
right now, but I will highlight a few recent ones.
She's the Swedish representative of the International Federation for Information Processing, a Board Member of the Swedish Data Protection Forum, and a Member of the Board for the Privacy Enhancing Technologies Symposium (otherwise known as 'PETS').
Victor Morel holds a PhD in Computer Science from INSA de
(02:32):
Lyon, completed at Inria on the Privatics research team.
His research interests include privacy and data protection, network security, usability and human-computer interaction, applied cryptography, and technology ethics.
Victor is currently working in the Security and Privacy Lab at
(02:52):
Chalmers University of Technology on usable privacy for IoT applications; and, in addition to his academic activities, he also volunteers his time to educate others by advocating for decentralization, privacy, and free software.
Welcome, Victor and Simone.
Thank you.
Victor Morel (03:12):
Thanks, Debra, for
inviting us.
Debra J Farber (03:15):
Delighted to
have you here.
So what motivated you both to focus your research on the automation of privacy decisions?
Simone Fischer-Hübner (03:23):
So, our ultimate goal is to design privacy decisions that are usable, where users are well informed and make decisions that also match their preferences.
However, in practice, this is always a challenge, because users are overwhelmed with lots of privacy decisions, consent
(03:45):
requests, cookies.
So, they barely have time to read through privacy policies and make well-informed decisions.
So, the question is whether privacy decisions can be supported through automation.
For instance, systems supported by machine learning
(04:07):
could predict the users' privacy decisions and make recommendations, or help users to set these decisions.
However, there are many legal questions, as we probably will discuss later, and further challenges.
So, maybe, Victor, you want to complement?
Victor Morel (04:27):
Yeah, it all boils down to this cybersecurity research project that we're conducting together with other people in Sweden, and we wanted to build a privacy system that will manage these privacy permissions.
But, we were not sure about how we could automate these privacy decisions, and so we started to, like, dig a little bit and ask
(04:49):
some questions to lawyers - people working in data protection agencies - and eventually we realized that no one really knew.
And so, all of our findings could actually result in a research paper.
So, this is also why we have this article now, because we had to provide the answers ourselves, in a way.
Debra J Farber (05:10):
That makes sense.
So, it's the initial research for the next step of you building something that you wanted to build in a privacy-preserving way, for this privacy assistant capability.
It also brings up that there is a regulatory perspective here.
What did you learn as your requirements for building this out when you were speaking to the attorneys?
If you could summarize the relevant European legal rules,
(05:31):
like GDPR and ePrivacy, that are relevant for privacy decision-making and automation, I think that would be really helpful.
Victor Morel (05:39):
Yeah, so the GDPR specifies a few requirements, notably for consent, because consent is one of the types of privacy decisions that we deal with in our paper.
So, it states that consent has to be informed, specific, freely-given, and unambiguous, and also that it entails a clear statement and affirmative action.
(06:00):
It basically means that you can't just, like, fully automate consent.
This is, like, one important requirement.
It also says that you have to provide the possibility to withdraw consent, and it should be as easy to withdraw as it is to give.
It also specifies a few things when it comes to 'explicit consent.' For instance, you have to ask for an explicit consent - so, like,
(06:24):
a stronger version of it - when you deal with sensitive data (so, philosophical beliefs, data about religion, et cetera), when you want to automate a decision for profiling, for instance, and also when you have to transfer data to countries without adequate safeguards (which, still nowadays, include
(06:47):
the U.S., because of the Schrems II court case that invalidated privacy transfer agreements between the EU and the U.S.).
Simone Fischer-Hübner (06:57):
So, it's needed for so-called 'third countries' - that is, transfers to countries outside the EU.
The GDPR actually includes further requirements that are important in regard to automating privacy decisions.
For instance, the principle of privacy-by-design and default.
(07:19):
So, by default, the most privacy-friendly settings should be installed.
So, this also has an impact on how far we can automate privacy decisions.
Yeah, then there are also further rights to object - for instance, to opt out, the right to object to direct marketing
(07:43):
and profiling.
And then, in addition to the GDPR, we also have the ePrivacy Directive, which is now under discussion to be replaced by the ePrivacy Regulation.
So, the Directive is governing the electronic communications sector; it is more specific than the GDPR, and in particular it
(08:06):
also regulates cookies or tracking technologies, and here also requires consent by the data subjects.
So, this has led to all the cookie banners and consent requests that we are confronted with daily.
So, here the ePrivacy Directive, or future Regulation,
(08:30):
also plays an important role.
Debra J Farber (08:33):
Thank you for that.
I think both of you really helped give a summary of - I guess we call them the regulatory requirements - constraints as you're building out this classification schema.
From your paper, your approach to this research seems to be along two dimensions for the classification scheme.
(08:53):
The first is the Type of Privacy Decisions that you categorize.
Then the second is their Level of Automation: whether it's manual, semi-automated, or fully-automated.
Since your focus here seems to be a user's ability to control disclosure of their personal data and the conditions for the
(09:14):
processing of that data, can you elaborate on this approach and your research?
Victor Morel (09:19):
Yeah, our focus is not only on the user's ability to control their personal data; it's also about usability, because we want to provide meaningful control.
There might be a tension between usability and lawfulness there.
We're trying to clarify the terms of the debate here, which
(09:41):
turned out to be not so easy.
We first started to look at consent - and this is also why we have such a focus on it in our paper - and then realized that not everything is just about consent.
There are privacy permissions, privacy preferences; and when you withdraw consent or when you opt out of data collection, in
(10:02):
a way, it's not exactly consent.
It doesn't call for the same legal requirements.
It's about control and it's about usability, in a way.
Simone, maybe you want to add something.
Simone Fischer-Hübner (10:14):
Yes, we elaborated decisions which are related to consent and to controlling the disclosure and processing of personal data.
Here, of course, privacy permissions are also important decisions.
We are also confronted, when we are installing apps, for
(10:34):
instance on mobile phones, with setting privacy permissions.
This could be combined with consent, but there can be also other legal grounds for privacy permissions - for instance, the legal basis of a contract for an e-banking app.
There might be an overlap with consent, but it might also
(10:54):
not be based on consent.
Privacy permission settings are another type of privacy decision.
Then, there are also privacy preference settings, which are different to privacy permission settings, because a privacy preference we defined rather as an indication of
(11:15):
privacy choices - it is only indicating the preferences of a user, without the privacy permissions being set.
Privacy permissions are more like access control settings, and privacy preferences are just the indications of what the users wish.
There are also different tools for indicating privacy
(11:38):
preferences that have been developed in the past and currently - so-called 'privacy preference languages'.
For instance, in the past, P3P - the Platform for Privacy Preferences - where users could also indicate their preferences.
Then, you have a website that states the privacy policy, so
(11:59):
both the privacy preferences and the privacy policies of the websites are stated in machine-readable format, and then they can be matched so that you can automatically detect how far the user's preferences match with a website's policy, and any deviations could then be noticeably displayed to the end
(12:23):
user to contribute to user transparency - so that the user doesn't have to read the whole privacy policy statement, but just gets, very noticeably, the deviations of his or her preferences from the site's policy displayed.
So, these are privacy preference settings.
(12:44):
So, it's another type of decision in addition to consent and privacy permission settings.
And then, as Victor said, decisions to reject consent, or revoke consent, or to object.
So, these are also another type of privacy decision that we can make.
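To make the matching Simone describes concrete, here is a minimal sketch, assuming simplified machine-readable formats on both sides; the types and purpose names are illustrative, not the actual P3P vocabulary.

```typescript
// Hypothetical sketch of P3P-style preference/policy matching.
// The formats and field names here are illustrative, not the real P3P syntax.

type Purpose = "analytics" | "advertising" | "personalization";

interface PrivacyPreference {
  purpose: Purpose;
  allowed: boolean;
}

interface SitePolicyEntry {
  purpose: Purpose;
  dataCollected: string[];
}

// Return the policy entries that deviate from the user's stated preferences,
// so that only the mismatches need to be displayed prominently to the user.
function findDeviations(
  preferences: PrivacyPreference[],
  policy: SitePolicyEntry[]
): SitePolicyEntry[] {
  const denied = new Set(
    preferences.filter((p) => !p.allowed).map((p) => p.purpose)
  );
  return policy.filter((entry) => denied.has(entry.purpose));
}

// Example: the user rejects advertising; the site's policy declares it anyway.
const deviations = findDeviations(
  [{ purpose: "advertising", allowed: false }],
  [{ purpose: "advertising", dataCollected: ["browsing history"] }]
);
console.log(deviations); // -> the advertising entry, flagged for the user
```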
Debra J Farber (13:05):
Yeah, and so that's really helpful, and thank you for giving us some illustrative examples.
Just to sum it up, I think (for the audience here) that there are four different types of privacy decisions that your paper talks about (13:17): privacy permissions, privacy preference settings, consent to processing, and rejection of processing.
Is that correct?
Yes.
Awesome.
Did you have anything to add to that, Victor?
Victor Morel (13:31):
Yeah, I just had a few examples in mind.
For instance, mobile app permissions are a good example of what privacy permissions can be.
This is something we're confronted with every day, basically.
Typically on Android, you can be asked to regulate which data will be accessed by which app, and this is a typical example of
(13:54):
a privacy permission, which, unlike a privacy preference, has to be enforced.
I mean, it's, like, a binding decision.
So, for instance, a privacy preference will be just something that you will indicate but might not necessarily be taken into account by your system.
I think one now-old example is DNT, which stands for Do Not
(14:16):
Track.
I think you mentioned it in previous episodes.
So, people were able to define their preference - "I want to be tracked or not" - but websites could also choose not to take that into account.
This is the main difference between privacy permissions and privacy preferences.
One is binding and the other one is not.
(14:36):
And then we also dig into consent.
Cookie banners are a good example, but we're also interested in IoT settings, because I think it's going to be a big thing at some point that you will have to deal with the disclosure of your data in a legal way, which is the main thing about consent.
It has a direct legal implication. And there's also what we call, under this umbrella term, 'reject' - so, opting out.
(14:59):
Interestingly, for instance, GPC, which is enforced in the U.S. by the CCPA in California, is a good example of a 'reject decision' because it's an opt-out.
So, we can't really say that it's about consent.
Consent is more of an opt-in and has a very clear and precise
(15:21):
definition, at least in the EU; and GPC will not be considered as consent under our framework, for a good reason, but GPC will be considered as a reject, because it's an opt-out decision.
So, just a few examples to maybe give the audience a feeling that what we're talking about is not just academic work.
(15:43):
We're also interested in research projects in the EU, the U.S., etc., but there are also concrete things that we're discussing in the paper.
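For readers who want to see what honoring GPC looks like in practice, here is a minimal server-side sketch. GPC-enabled browsers send the Sec-GPC: 1 request header; the handler and the response header below are illustrative assumptions, not a real API.

```typescript
// A minimal sketch of honoring a GPC opt-out signal on the server side.

import * as http from "node:http";

function gpcOptOutRequested(req: http.IncomingMessage): boolean {
  // Per the GPC proposal, `Sec-GPC: 1` signals an opt-out of sale/sharing.
  // Node lowercases incoming header names.
  return req.headers["sec-gpc"] === "1";
}

http
  .createServer((req, res) => {
    if (gpcOptOutRequested(req)) {
      // Treat the signal as a binding 'reject' decision (e.g., under CCPA):
      // suppress third-party data sharing for this request.
      res.setHeader("X-Data-Sharing", "disabled"); // illustrative header
    }
    res.end("ok");
  })
  .listen(8080);
```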
Debra J Farber (15:54):
Absolutely.
I'll add a link to the paper in the show notes so that folks can go ahead and read that.
It's not too long - it's only nine pages - but it's got a lot of really great info in there.
In fact, you evaluate each type of privacy decision and then the implications of various levels of automation on those decisions - so manual, semi-automated, and fully-
(16:16):
automated.
Please tell us about some of your findings when it comes to these different levels of automation.
Simone Fischer-Hübner (16:24):
So, manual decisions do not raise any legal issues.
They are quite straightforward.
That is what we anyhow do every day.
However, of course, there are usability challenges, as discussed, because doing all the settings - permission settings or giving consent - always manually requires a high
(16:50):
cognitive load from the user and a lot of time.
There has also been research showing that just reading all the privacy policies would take several days a year.
So, users simply do not have the time to really read all the policies to make well-informed decisions.
So, in summary, manual decisions do not raise any legal
(17:12):
issues.
Semi-automated decisions are those where decisions are made one at a time upon dynamically-created requests, or reacting to dynamically-created recommendations.
So, here, recently a lot of research has been done on
(17:36):
so-called 'personalized privacy assistants' that, with machine learning support, can predict the user's privacy preferences and then can dynamically suggest changes to permissions, so that the permission settings fit the real preferences.
(17:59):
So, the user is somehow nudged, in a way, to change the permission settings.
This can also raise some ethical issues, because privacy nudges, even where they follow a positive intention, can also impact the user's autonomy and could manipulate the user.
(18:21):
So, there are also some discussions around ethical aspects.
However, they can still help users to make decisions that better match their preferences, and also simplify making decisions.
So, they surely have advantages for usability.
(18:43):
And when it comes to consent, there has also been work on dynamic consent, where consent is invoked dynamically.
For instance, when purposes for data processing change, or if a context appears in which the data becomes sensitive - so-called
(19:04):
'special categories of data' pursuant to the GDPR.
For instance, location data about the current location of the user can indicate that the user is visiting a hospital or a church.
So, the data becomes medical data or data related to religious beliefs.
(19:25):
Then, for special categories of data, explicit consent is required, so this could be invoked, or consent requests could be raised dynamically.
Yeah, so we could also have dynamic consent as a form of semi-automated decision-making.
(19:46):
So, there are different forms of semi-automation.
But when it comes to full automation, our findings are that it mostly conflicts with the GDPR, because for consent, you need an affirmative action;
(20:06):
so, consent cannot be given automatically.
And also, for privacy permissions and privacy preference settings: if they are set automatically, they might contradict default settings that implement privacy-by-default; so, they might change the privacy-by-default settings in a way that more generously allows data
(20:30):
processing.
So, it's not the most privacy-friendly option.
Here, we then have a conflict with Article 25 of the GDPR: the principle of privacy-by-default.
Our conclusion was also that, actually, fully-automated decision-making only works for the decision to reject.
(20:54):
That means, for instance, revoking consent and opting out can be done automatically in line with the GDPR, but all other decisions are problematic to be done automatically.
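This conclusion lends itself to a compact summary. Below is a toy encoding of the classification scheme as discussed here - decision types crossed with automation levels, with full automation acceptable only for 'reject'; the rule set is a simplification for illustration, not legal advice.

```typescript
// A toy encoding of the paper's classification scheme as summarized in this
// discussion. The rules are a simplification for illustration only.

type DecisionType = "permission" | "preference" | "consent" | "reject";
type AutomationLevel = "manual" | "semi-automated" | "fully-automated";

function raisesGdprConcerns(
  decision: DecisionType,
  level: AutomationLevel
): boolean {
  // Manual decisions are always lawful (though they strain usability).
  if (level === "manual") return false;
  // Rejecting, withdrawing, or opting out discloses no data, so it can be
  // automated at any level.
  if (decision === "reject") return false;
  // Semi-automation (recommendations, on-the-fly prompts) keeps the user's
  // affirmative action in the loop, so it can be designed lawfully.
  if (level === "semi-automated") return false;
  // Fully automating consent/permissions/preferences conflicts with the
  // affirmative-action requirement and privacy-by-default (Art. 25 GDPR).
  return true;
}

console.log(raisesGdprConcerns("consent", "fully-automated")); // true
console.log(raisesGdprConcerns("reject", "fully-automated")); // false
```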
Debra J Farber (21:10):
That's a really good summary of the research.
Do you have anything to add to that, Victor?
Victor Morel (21:14):
Yeah, about the full automation of privacy decisions - it surprisingly connects to another paper we recently published with Piero Romare and Farzaneh Karegar, still in the same project (21:25): the CyberSecIT project.
We were interested in what the preferences and concerns of users were about a certain type of IoT trigger-action platform, such as IFTTT, which provides applets to basically automate some of your decisions.
So, you would link your IoT device to connect with your
(21:48):
cloud service, et cetera; and people were not really happy about full automation of decisions about their personal data.
So, they were interested in the automation aspect, but they still wanted to be in control.
So, there are the legal requirements, but there are also expectations of everyday people; and basically, people want to stay in the loop.
(22:09):
They want to have some facilitation in their decision-making, but they don't want to be left out.
So, it's interesting to see that it is actually also a thing from a usability point of view.
But, yeah, people want to be helped with their decision.
They want to feel in control, in a way.
(22:31):
This is something I wanted to add.
Simone Fischer-Hübner (22:33):
Yeah, and indeed there is also research from the U.S. that proposes to automate consent or permission settings, but they also refer to research by others that has shown that users would like to stay in control.
And so, there are technologies proposed for fully-automating
(22:55):
consent or other decisions - especially from the United States - but, in the European context, you can question whether this is legal with reference to the GDPR.
Debra J Farber (23:09):
From a socio-technical perspective, I think it's fascinating because it's almost like a sense of a bigger idea of autonomy (23:15): we don't want people making decisions about us.
We want to understand how decisions are made.
Simone Fischer-Hübner (23:24):
Research has shown that you can predict the privacy preferences of a user with 95% accuracy.
So, automation probably can work very well.
And, there's also research that if you have automation, the result will probably be that you have privacy permission
(23:44):
settings that are better at meeting your privacy preferences than if you let the user make the decision - given the circumstances, that he or she usually does not have the time and dedication to really make a well-informed decision.
However, as I said, 95% sounds very good, but there's also the 5%
(24:06):
where there are deviations, and the system then automatically decides something that you do not agree to; so, you will be very unhappy.
So, I can also see that users still would like to have control.
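One plausible way to reconcile these two points - accurate predictions, but users who want control - is the semi-automated pattern sketched below, where predictions are surfaced only as suggestions and low-confidence cases fall back to a privacy-by-default prompt; the interfaces and threshold are assumptions for illustration.

```typescript
// A sketch of the semi-automated pattern discussed here: a trained model
// suggests a permission setting, but the user always confirms with an
// affirmative action, and low-confidence predictions (the "5%") fall back
// to a privacy-by-default prompt. The interfaces are illustrative.

interface Prediction {
  allow: boolean; // the model's suggested setting
  confidence: number; // 0..1
}

type Decision = { allow: boolean; suggested: boolean };

// `askUser` renders a prompt with `suggested` pre-selected and resolves to
// the user's actual choice; it is an assumed UI hook, not a real API.
async function decidePermission(
  prediction: Prediction,
  askUser: (suggested: boolean) => Promise<boolean>,
  threshold = 0.95
): Promise<Decision> {
  // Confident prediction: surface it as a pre-selected recommendation.
  if (prediction.confidence >= threshold) {
    const allow = await askUser(prediction.allow);
    return { allow, suggested: prediction.allow };
  }
  // Uncertain prediction: fall back to the privacy-by-default suggestion
  // (deny), in the spirit of GDPR Article 25.
  const allow = await askUser(false);
  return { allow, suggested: false };
}
```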
Debra J Farber (24:20):
Right.
So, based on your findings, at a high level, which conditions enable the automation of privacy decisions while complying with regulations, and which ones are just, like, "We should never use these because they'll never comply"?
Is it just full automation?
Is that the answer?
Or, do you have something else to add there?
Simone Fischer-Hübner (24:39):
Yeah, fully automated - except for the decision type (24:41): reject.
So, when you refuse to give consent, revoke consent, or opt out, that can be done fully automatically and still comply with the GDPR.
Debra J Farber (24:53):
Oh, that's a really good call-out.
Victor Morel (24:55):
Yeah, and actually the GDPR even mentions that it could be interesting to exercise your right to object to profiling, for instance, using automated means.
So, it kind of goes in our direction that it should be possible to fully automate the withdrawal of consent, opting out, and everything like this.
(25:17):
But, this is basically the only use case, and this is also why we created this weird category 'reject' - in which we include opt-out, withdrawal, and the right to object - because this is the only good use case in which it is interesting to fully automate the decision.
And, also, because you don't disclose any data; you only
(25:39):
prevent the disclosure of data.
So, it's like a negative definition in that specific case.
Simone Fischer-Hübner (25:45):
Yeah, so the GDPR only regulates how you have to give consent, but not how to reject consent; and actually there are also several tools for reject, like Consent-O-Matic, right, that rejects cookies by default.
Victor Morel (26:03):
Yeah, we found, indeed, a few extensions for web browsers that will help with decisions on cookie banners, and some are arguably not compliant with the GDPR, because they will consent on your behalf, and this is not okay.
But, some take a more privacy-preserving approach and
(26:24):
will basically only dismiss the cookie banners - such as Consent-O-Matic, which has been designed by colleagues in Denmark; this web extension will look for all of the cookie banners and will dismiss everything.
So, this is actually fine to do, even in a fully-automated way, because you won't disclose any data to the website.
Debra J Farber (26:47):
Oh, that's fascinating.
So, to what extent can automated privacy decisions promote informed control, in line with the GDPR, to the benefit of users?
I guess what I'm asking for here is best practices for informed control.
What would you recommend?
Victor Morel (27:05):
I think an interesting line would be to provide tailored recommendations.
So, this is a form of automation, in a way, because you basically feed in some data, but you don't take decisions on behalf of users.
You just help them to make an informed decision.
So, we think it's an interesting way; it fits
(27:26):
typically in the semi-automated category that we designed.
We also surveyed some artificial agents that will provide an automated negotiation of privacy decisions, and it can be done in a way that you will never disclose data unknowingly.
So, this is also a very interesting line of approach.
(27:49):
And finally, we surveyed some work that will provide requests on-the-fly.
It's also a form of automation.
Maybe, Simone, you can say a few more words, because you work on projects that involve this on-the-fly privacy decision-making.
Simone Fischer-Hübner (28:06):
Yes, so this is basically also part of the semi-automated decision category.
For instance, if the user acts or makes a decision which does not match his or her previously declared privacy preferences, then the user would be asked whether he or she wants
(28:33):
to anyhow give consent or reject consent; and then, at the same time, the user can be asked on-the-fly whether he or she would now like to update his or her privacy preference settings.
And, we also consider implementing that for privacy permission settings in the
(28:58):
context of IoT trigger-action platforms, so that also here, on-the-fly, the user can be asked to change permission settings - for instance, in the context when dynamic consent is invoked because there are grounds to require dynamic consent.
(29:21):
So, the user has previously given consent, but now a situation arises where dynamic consent is required - for instance, if the data suddenly becomes sensitive or, as Victor also elaborated, data is transferred outside of Europe, or if the data is used for profiling - then the GDPR requires dynamic
(29:42):
consent; and in this context, the user can also be asked (depending on how the user answers) whether the permission should be changed accordingly.
So, they are set on-the-fly.
So, then the user does not need to be bothered constantly with setting permissions.
(30:03):
But, the user can start with privacy-by-default permission settings, which are then updated on-the-fly.
That means the user is asked whether the settings should be changed, in the context, when the user is anyhow asked to make decisions.
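The on-the-fly flow Simone describes could look roughly like this sketch: prompt only when a concrete decision conflicts with the declared preferences, and piggyback the preference update on the same interaction; the store and prompt interfaces are assumptions.

```typescript
// A sketch of the on-the-fly pattern: when a concrete decision contradicts
// the user's declared preference, prompt once, and use the same prompt to
// offer a preference update. All names here are illustrative.

interface PreferenceStore {
  get(purpose: string): boolean | undefined; // declared preference, if any
  set(purpose: string, allowed: boolean): void;
}

async function onTheFlyDecision(
  purpose: string,
  requestedAllow: boolean,
  prefs: PreferenceStore,
  ask: (question: string) => Promise<boolean>
): Promise<boolean> {
  const declared = prefs.get(purpose);
  // No conflict with declared preferences: no need to bother the user.
  if (declared === undefined || declared === requestedAllow) {
    return requestedAllow;
  }
  // Conflict: ask for the actual decision with an affirmative action...
  const allow = await ask(
    `This request to ${requestedAllow ? "allow" : "deny"} '${purpose}' ` +
      `contradicts your declared preference. Allow it anyway?`
  );
  // ...and piggyback the preference update on the same interaction.
  if (allow !== declared && (await ask(`Update your preference for '${purpose}'?`))) {
    prefs.set(purpose, allow);
  }
  return allow;
}
```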
Debra J Farber (30:22):
That makes sense.
So, in your paper - and here, we're talking about privacy decisions and automation across three separate technologies: web, mobile, and IoT - how should organizations think about achieving usable and transparent privacy with automation across technologies through a comprehensive approach?
Victor Morel (30:45):
I think that, first of all, it's important to understand that if you want to achieve usable and transparent privacy, you can't just check a list and think that you're done with it.
I would say that it's a process. You have an overarching principle - like, typically, privacy by design and by default - and you also have a very concrete indication that, well,
(31:08):
you can't just fully automate privacy decisions, because most of the time it goes against these principles and legal requirements.
And then, you have this blurry zone in between - semi-automation - that you actively have to think about, and what could optimally be done would be to conduct
(31:28):
user studies: is it actually usable?
Do people think that it improves their decision-making or not, from the usability point of view?
And also, if you want to assert compliance, you would probably have to discuss it with a DPO (a Data Protection Officer) who will tell you whether you're actually compliant with
(31:50):
your local jurisdiction or not.
But yeah, it's a complicated process.
You actively have to think about every step.
Also, every situation is different, because in the IoT you don't necessarily have interfaces with which you can provide information and therefore make an informed decision - unlike the web, because if you access the web,
(32:12):
it's through a browser and you have a big screen, so you can actually know what's going on and therefore provide information.
So, every case is different and you have to assess it.
It's like security: you can't just check a list and think that you're done with it.
You have to reflect upon every step.
Simone Fischer-Hübner (32:32):
Yeah, I would say, as discussed, fully-automating privacy decisions raises legal concerns, except for the decision to reject.
Manual privacy decisions are in line with the GDPR; however, they lead to usability issues, because users do not have the
(32:54):
mental capacity to make so many decisions and be well-informed.
So, usability issues, in turn, lead to decisions that are not well-informed and do not necessarily meet the user's preferences.
So, therefore, the middle way - semi-automation - is probably
(33:15):
the best way to go, and you have to find suitable means for semi-automatically supporting privacy decisions while meeting the legal requirements of the GDPR.
In our paper, we also provide some examples of such semi-automation.
Debra J Farber (33:37):
Excellent.
Now, I know from your paper that your research is illustrative and, as there are new technologies, you're going to be adding to the categorization; so, it's non-exhaustive, and you have plans to do a next phase of this research, maybe around IoT.
I'd love to learn more about what you have planned.
Victor Morel (34:00):
Yeah, indeed.
We plan to work on the IoT, because that was initially the reason why we started this paper - because we want to build a privacy assistant for the IoT, and specifically for trigger-action platforms that use IoT devices.
So, trigger-action platforms, like I mentioned before, can
(34:20):
connect every device and service, so it can be a lot of decisions to make for a random person - even for a knowledgeable person, I would say.
So, the idea would be to build these privacy profiles according to a longitudinal study, so that you will have your predefined profile, like your 'Privacy Pragmatic' or 'Privacy Guardian',
(34:44):
for instance; and according to that, you will have predefined choices about your privacy decisions.
So, it's always about empowering people, about supporting their decision by providing better and informed notices, for instance, and then we will combine it with another kind of automation - with on-the-fly privacy permissions, so
(35:08):
that people are not burdened too much with the decisions, but only when needed - when required.
Yeah, we're trying to, like, design it to solve this tension, basically, between usability and lawfulness in this kind of environment specifically.
This is basically the project for the upcoming year.
We'll see how it goes.
(35:29):
We also have another interesting track because, as you said, it's not meant to be comprehensive.
Actually, we're also trying to build a systematic literature review now about privacy decisions and their relation to automation.
We're starting to survey, like, over 100 different papers related to privacy permissions, privacy preferences, consent,
(35:51):
and reject, and to see what has been done in the past: whether they were accurate or not; whether they were using machine learning, complex models, or simple rules; which environments; what was the source of the data for the automation; how it was automated; etc.
So, we're trying now to be exhaustive, in a way.
(36:12):
So, it's still, like, very preliminary work.
We don't have many results yet, except that we interestingly found that many, many papers were drafted in the 2010s about recommendation systems for social networks.
So, it's not so novel, what we're trying to achieve.
We're trying to do it in a different setting, which is
(36:35):
novel in a way, but people have been trying to do that before, and they were not always successful, let's say.
But yeah, this is one of the main findings that we have now, about the comprehensiveness, let's say, of the study.
Simone, if you want to add something.
Simone Fischer-Hübner (36:54):
Yes, it was nicely summarized.
I can also add that we conducted, with further colleagues from Chalmers and Karlstad, three focus groups to derive qualitative research results about the users' preferences and concerns for IoT trigger-action
(37:17):
platforms, and the results will also allow us to implement a semi-automated approach for machine-learning-supported prediction of privacy preferences, which of course has to be done in a privacy-preserving manner, and can then be combined with on-the-fly privacy permission management - so, an easy, automated approach for asking users whether they
(37:41):
want to revise their decisions in a context when they anyhow need to be asked to make decisions.
Debra J Farber (37:49):
Thank you.
So, I know that you had done some research when it came to browsers and permissions and settings, and were even taking a look at Mozilla.
Can you tell us a little bit about your work there and any calls to action?
Victor Morel (38:06):
Yeah, because when we started to look at all the types of privacy decisions, we realized that, when it comes to consent, people have been trying to design tools to automate privacy decisions - sometimes in lawful ways, but not all the time - and we found out that Mozilla was providing, in its beta version, a way to automate cookie banner
(38:29):
management.
So, I do think that Mozilla is trying to do a good job when it comes to privacy, so I don't want to bash them here.
But, we also found that they're providing a solution that will basically click "Yes, I consent" on cookie banners on your behalf if it can't find another solution, and this goes against
(38:51):
the GDPR requirements.
So, I tried to contact them - the legal team, notably - to try to help them out with this, so that they could go for the most privacy-preserving solution.
Unfortunately, they haven't answered yet; so, if I could profit from this episode to reach out to Mozilla so that they can actually make the right choice and not go down the
(39:13):
slippery slope - which will basically make this web browser click "Yes, I consent" on every type of cookie banner - that would be great.
Debra J Farber (39:23):
Let's see if this platform can get you in front of someone at Mozilla who can answer the call.
So, I think that's a really noble goal.
Thanks for sharing that.
Do either of you have any last words of wisdom regarding this research, anything you'd want to share with my audience of privacy engineers, or any open source projects or conferences
(39:45):
or anything?
Simone Fischer-Hübner (39:47):
Just that usability for privacy is one of the most difficult areas to address.
So, we have very nice privacy-enhancing technologies - and the same goes for security; there are good security solutions - but I think the major challenges are the human factors
(40:07):
and how to make privacy usable.
However, the GDPR also requires usability - for instance, Article 12 - because you can only achieve transparency and informed decisions with usability; and, I think our research hopefully contributes to this end.
(40:28):
But, there are still a lot of challenges remaining.
So, it's just a kind of first categorization and indication of what directions to go, but still a lot of research needs to be done.
So, maybe that is my final word.
Debra J Farber (40:45):
Awesome.
So, if anybody is working on usability-versus-lawfulness research and wants to reach out to you, what's the best way to reach out to you both?
Is it via LinkedIn?
Should I just drop emails in the show notes?
Simone Fischer-Hübner (40:59):
I think you can put a link to our paper, and our emails should be there.
Victor Morel (41:03):
Yeah, email address.
Yeah, I stopped using LinkedIn because of privacy issues, let's say.
But, we also keep the website of the project updated.
So, if you want to stay informed about what we're doing in the cybersecurity project, you can just visit the CyberSecIT website and you will have all of the links to our
(41:24):
papers, which are always published in open access, and links to any kind of news.
It's always going to be on the website, and the link, I think, will be provided in the description.
Debra J Farber (41:36):
Simone and Victor, thank you so much for joining us today on Shifting Privacy Left.
We had a great discussion on the tension between usability and lawfulness and the automation of privacy decisions.
So, until next Tuesday, everyone, when we'll be back with engaging content and another great guest - or guests.
(41:56):
Thanks for joining us this week on Shifting Privacy Left.
Make sure to visit our website, shiftingprivacyleft.com, where you can subscribe to updates so you'll never miss a show.
While you're at it, if you found this episode valuable, go ahead and share it with a friend.
And, if you're an engineer who cares passionately about privacy, check out Privado (42:18): the developer-friendly privacy platform and sponsor of the show.
To learn more, go to privado.ai.
Be sure to tune in next Tuesday for a new episode.
Bye for now.