Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Avi Bar-Zeev (00:00):
Especially now, in the age of AI, the computer has so much of an advantage over us that we're going to have to prevent that one way or another. It's like walking into a room with the best salesman ever, and they have access to our entire life history and can read our mind practically, and no normal human is going to be able to stand up to that kind of treatment. We're all susceptible to that kind of manipulation, so let's
(00:22):
be clear about that and avoid it. And I think the thing we should be trying to help each other with as much as possible is sharing the information, sharing the data. It's for everybody's benefit to figure out where we messed up in the past and be honest about it and open, so that other people don't have to make those same mistakes. Let's try to foster that as teams and make it available.
Debra J Farber (00:38):
Hello, I am Debra J Farber. Welcome to The Shifting Privacy Left Podcast, where we talk about embedding privacy by design and default into the engineering function to prevent privacy harms to humans and to prevent dystopia. Each week, we'll bring you unique discussions with global privacy technologists and innovators working at the
(00:59):
bleeding edge of privacy research and emerging technologies, standards, business models, and ecosystems. Welcome everyone to The Shifting Privacy Left Podcast. I'm your host and resident privacy guru, Debra J Farber.
(01:21):
Today, I am delighted to welcome my next guest, Avi Bar-Zeev, a true tech pioneer who's been at the forefront of spatial computing for over 30 years. He first launched Disney's groundbreaking Aladdin VR ride in the early 90s. He crafted Second Life's 3D worlds and co-founded Keyhole, which became Google Earth, which is a Mirror Earth 3D browser, around 2001. He co-invented Microsoft's HoloLens in 2010, helped found
(01:46):
Amazon Echo Frames in 2015, and then contributed at Apple on some undisclosed projects. Most recently, he's the founder and president of the XR Guild, a new nonprofit membership organization that seeks to support and educate professionals in XR on ethics and the most positive outcomes for humanity. He's also a member of Magic Leap's Board of Directors.
(02:10):
Avi, welcome! It is a delight to have you here on the show.
Avi Bar-Zeev (02:14):
Thank you. Thank you. It's great to be here. I could add one thing. You said undisclosed projects. I think I wrote that bio when I still couldn't talk about the Apple Vision Pro, but that's at least one of the things that I worked on at Apple, helping on the Apple Vision Pro, so I'm happy that I can finally talk about it.
Debra J Farber (02:27):
Well, that's exciting; that's really awesome. I mean, I think that there's a lot of people, a lot of touch points, those technophiles in the audience, that have checked out the Apple Vision Pro. It's just kind of at the forefront of what's current right now in VR, so that's pretty freaking cool. You know, you have such a deep background in this space and I want to ask you what you've seen over the years, but maybe it's
(02:50):
first easier to just talk about: when it comes to AR / VR / metaverse, what privacy issues are top of mind for you? And then, how did you get there over the course of 30 years? How did you come to realize that those are the top issues?
Avi Bar-Zeev (03:06):
Yeah, 30 years is a long time to collect a lot of mistakes and a lot of bad things that have happened over that time, and I'd say 30 years ago I would have sounded very much like any average metaverse enthusiast: this is the future, we're all going to be living in online worlds. I actually thought in 1992 that was the time that, forget
(03:26):
2D, we're not going to be doing these 2D web pages; everything's going to be 3D immersive. That's gotta be the way it goes, because, you know, all the literature pointed in that direction. And no, the 2D was actually pretty good, and there were a lot of problems even with that that we haven't solved in 30 years. And so I've spent all that time looking and finding all these mistakes that we've collectively made and realizing, man, we
(03:47):
should have done better, we can do better, and in the future we have to do better, because we don't want to let down our customers. We really want to do a good job at this. So, anyway, it's just a collection of that, and I've just learned over the time. I'm not a lawyer, I was never a privacy expert, but I've become fairly close to being both at this point after having experienced all the tragedies that we've had over the years in
(04:08):
this area. And what I'm worried about with the future is these technologies now, with XR, spatial computing. They're so much more powerful than the things we've dealt with in the past that the benefits are magnified, but the harms are also magnified, and if I had to guess, I'd say at least 10x, but it could be even a lot more, based on how impactful these technologies can be to our perception and our emotion.
(04:31):
The chances for manipulation and exploitation are just through the roof, so we have to be even more careful.
Debra J Farber (04:40):
Thanks for that. Are there any specific privacy issues? Obviously, we want our products to be safe, and there's security issues, there's misinformation issues. But what about the fact that we're collecting all of this data? These experiences require collecting a lot of data about the person in order to make these 3D worlds work. So, what end up being the potential harms from collecting
(05:02):
this data, and what's unique to the space?
Avi Bar-Zeev (05:09):
That's a great point. For many of these experiences to work, for them to have the benefits, we do need to collect data. I'm the kind of person who would love to work on UX and algorithms that are beneficial to people. But I know that if other companies do the same thing, but poorly, it's going to impact all of us, and those things might wind up going bad. This whole notion of privacy, I mean, this may not be new to people who listen to your podcast at all, but when people talk to me and say privacy is dead, all the things that they
(05:30):
like to cite, I go back and say: just look at the Third Amendment. Think about the Constitution of the United States and go back to this period in time where people who lived in the colonies were being forced to have soldiers quartered in their house. Right, the whole amendment sounds like it's completely outdated. No one's asking us to put soldiers in our house, but
(05:52):
the reason that's there as an amendment is because if the soldiers were living in their house, then the soldiers would learn. If the person was a rebel, they'd learn what their politics were, they'd learn who they associated with, they'd learn how they think, they'd learn your relationships. They'd learn all your weaknesses and your pain points so that you could be manipulated. And so they rightly said no, we can't have that. I wish that amendment had been updated to include companies, corporations and the internet and things that they couldn't
(06:14):
have imagined yet, but that's why it was there, and we're still living with this. There are people that want to use our data to exploit us and take advantage of us, and I think we have to figure out how to enable the good uses and stop the bad ones. And that is not just about privacy. Like you said, security comes into that as well, but it requires all of us to be so careful when we do these things and to think so hard about what can go wrong, just like the
(06:37):
colonists didn't think about what could go wrong after soldiers were no longer living in your house. There's still plenty of other harms. That's what I've spent a lot of time thinking about, and in the case of XR specifically, there are technologies here that are equivalent to mind reading. So in some sense, it's like having the government or a company living in your mind, learning things about you, and those things can be used for your
(06:58):
benefit and they could be used against you. The thing where I perpetually bang my head against the wall is why we have so much trouble telling the difference between experiences that are working for you and there to benefit you, and those that are there for someone else's profit, so they're profiting off of your experience. We should be able to tell the difference between those things, but for some reason we have trouble defining laws,
(07:20):
regulations and policies that differentiate those, and that's one of the things I spent a lot of time trying to focus on.
Debra J Farber (07:25):
That's fascinating, and I have to say, in my 19 years of working in privacy directly, I have never thought about the Third Amendment as a privacy risk, and you're absolutely right. That's just going to have me noodling on that today. So thanks for expanding my mind there. I'd love to hear a little more about why that is, and maybe this
(07:45):
is good to go into the next question, because I have watched some of your other podcasts and talks where you talk about classifying eye tracking data as health data. Can you explain not only why, but what is it about eye tracking data that is so different from just collecting data that's known about a person? Specifically, I'm talking about how you've described how the
(08:08):
different signals from the eyes enable you to change things in a 3D experience without the person in the experience knowing that, or detecting that a change has happened, because of the way the technology makes it seem seamless by tricking the person based on their eye tracking.
Avi Bar-Zeev (08:23):
Yeah. So, to get real concrete, here's an example of a technology that would probably be beneficial, but it also highlights the danger. There's a thing in VR that was invented called redirected walking. Just imagine you're in a room; let's say the room is 10 feet by 10 feet. You can't really walk straight very far. Even if you went diagonally, you could only go about 13 feet. Right? You really can't walk in a room like that without hitting
(08:48):
furniture and walls. But with redirected walking, in a slightly larger room, you could theoretically be walking in circles, but it looks like you're walking in a straight line. For you, in the headset, you see yourself as walking down a road, straight, for miles and miles and miles. But for whatever reason, what you're actually doing is walking in circles in the room. You're reusing the same space, and the technology can trick you into thinking that
(09:17):
you're walking straight by progressively rotating the world. As you walk, it just rotates the world a couple of degrees, and you start going in a circle to compensate, and you think you're going straight. But how could you rotate the world without anyone noticing? That seems like everybody would notice and they'd get dizzy. But it turns out, whenever we blink or whenever we look away (and our eyes are constantly darting around as we try to make sense of the world around us, constantly looking everywhere and trying to fill in information), those are opportunities to change the world during those
(09:38):
brief periods, because we are actually blind during those periods. There's a couple of videos online that you could find that'll show that when our eyes are in what they call saccadic movement, that's when they dart around, or when we blink, our brain is telling us that we see the world, but it's not actually seeing the world. You can change things, and we just don't pay attention to some of these other things.
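To make the mechanism concrete, here is a minimal sketch in Python of how a redirected-walking system might inject small rotations only while vision is suppressed. The gain value, the saccade-velocity threshold, and all function names are invented for this illustration; this is not any production implementation.

```python
import math

# Illustrative thresholds; real systems tune these per user and per study.
SACCADE_VELOCITY_DEG_S = 180.0   # eye angular velocity above which vision is suppressed
ROTATION_GAIN_DEG = 2.0          # world rotation injected per suppression event

def vision_suppressed(eye_velocity_deg_s: float, is_blinking: bool) -> bool:
    """The user is effectively blind during blinks and fast saccades."""
    return is_blinking or eye_velocity_deg_s > SACCADE_VELOCITY_DEG_S

def redirect_step(world_yaw_deg: float,
                  eye_velocity_deg_s: float,
                  is_blinking: bool,
                  curvature_sign: float = 1.0) -> float:
    """Return the new world yaw, rotating a couple of degrees only when the
    change cannot be perceived, so the user compensates by curving their
    real-world path while believing they are walking straight."""
    if vision_suppressed(eye_velocity_deg_s, is_blinking):
        world_yaw_deg += curvature_sign * ROTATION_GAIN_DEG
    return math.fmod(world_yaw_deg, 360.0)

# Example: a blink lets the system rotate the world unnoticed.
yaw = 0.0
yaw = redirect_step(yaw, eye_velocity_deg_s=20.0, is_blinking=True)
print(f"world rotated to {yaw:.1f} degrees during a blink")
```

The same hooks that make this comfortable are exactly the ones described as exploitable: anything that can be changed during suppression can also be swapped for someone else's ends.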
There's great experiments where an interviewer is talking to
(10:00):
somebody on the street, and then their hired hands come along with a billboard and they get in the way of the two people talking. Then you swap out the interviewer with a completely different person, different race, different gender. It didn't matter. The person being interviewed, as soon as they can see them again, just continues talking as if nothing has happened. They don't have the continuity of mind to know that the
(10:22):
interviewer got swapped out when they couldn't see them, because they didn't pay attention. Those are details that we often just don't pay attention to. It happens all the time, and we're all vulnerable to this.
So, this is the reason that illusions and magic are possible. So, think about the danger: we might go to a magic show to be entertained and the magician might steal our watch. But now we're dealing with people who are figuratively
(10:44):
stealing our watches but not giving them back at the end of the show. It's not for entertainment purposes; it's for them to make money. They're using the same tricks that could allow us to be surprised by sleight of hand, by just being distracted looking somewhere else. Those same tricks can be used against us in order to actually make us more susceptible to advertising, for example. It's not hard to imagine how this will work, because it's
(11:06):
already working today in a crude form, alright? It's already there. It just doesn't work well yet, but when it does, look out, because we already know very well how to manipulate people emotionally. You'll see commercials that pull on the heartstrings. So what you really have to do, if you really want to sell things to people, is get them worked up emotionally. There was the case in Africa where an entire country, the young
(11:54):
people in the country, were convinced not to vote, and a certain regime was elected as a result of it.
So we're all subject to these manipulations, but the more data they have about this and the more experiments they can run, live experiments, the more susceptible we are to it. And now I think it's time to talk about it, because now we have to kind of put a stop to it before they become significantly entrenched in the money-making machine, in the
(12:16):
same way that tech and social media have become entrenched already.
Debra J Farber (12:37):
Yeah, that's so much information that you just shared; it raises so many questions about the ill futures it could lead to. So the first is: I know that eye tracking data, you've advocated for it being classified as health data. So in the US that might be covered under HIPAA, and in the EU, the GDPR. Are you saying it's a biometric, or is it so related to how our bodies work? Are we able to pick Debra Farber out, basically say, oh, this behavior is Debra Farber's behavior, so it's therefore a biometric? Or is it not a biometric on its own, but only when you use lots of those things together?
Avi Bar-Zeev (12:58):
All of the above.
I'll try to list out some of those things. So, first of all, in just a very literal sense, it's health data, because it can be used to diagnose certain conditions. We've already shown that you can diagnose autism, Parkinson's, concussions, potentially ADHD. There are a whole variety of conditions; I don't want to say diseases, because not all of them are. You know, some people may argue they're just ways of being, but in general, things that somebody might care about.
(13:19):
What your diagnosis is, and who cares? Well, insurance companies care. The insurance companies might increase your rates if they know that you are predisposed to a certain condition that might cost more money. Your employers might care. They may discriminate against some of these things, and in some cases it is legal to discriminate; in some cases it isn't. Not everything's covered under, for example, the Americans with Disabilities Act. And the government might want to
(13:41):
know as well. These things, this piece of information, may be very private. So that's, just right off the top, the reason why the raw eye tracking data, and the derived data if anybody's run the algorithms to try to see what our diagnosis is, should also be covered as health data. The raw data, because it has the potential to be used for diagnosis, should be covered too.
But now, even more than that, I don't know if HIPAA is always
(14:01):
the right answer, because that would cause a whole bunch of other bureaucracy to be invoked, but the core notion is we individually should be in control of our own data. However the regime is implemented, whether it's HIPAA or something like it, we first have to have informed consent, truly informed consent, when it comes to uses of the data. And let's be really clear that EULAs, terms of service, any
(14:22):
of these things (legally they call them contracts of adhesion, right) are things that you didn't really agree to. All these EULA-like things are neither informed nor consent, because the lawyers know that nobody reads them. So they're not informed, and they're not consent because you already bought the product and you're already in the experience before you click yes, and so it's just performative. People are just clicking yes to get into the product.
(14:43):
Nobody's really agreeing to that. Nobody really understands the risks. Even the professionals don't fully understand the risks. So let's just give up on that idea that we somehow have true informed consent here. But we have to have it. If your information is going to be used, you've got to know where it's going. You have to be able to revoke it if you don't like the way it's being used, and then there has to be accountability across that whole spectrum.
(15:03):
So that's very HIPAA-like, but it doesn't have to be the exact HIPAA law that covers it, although, like I said, some of this is real health data, so it would overlap.
But what else can be done with eye tracking data is we can get at your state of mind. People like to look at brain-computer interfaces, and one of the books I read recently talks a lot about brain-computer interfaces. I think we're still a little bit of a ways away from that.
(15:29):
That's not immediate. Brain-computer interfaces will be there for people who have physical disabilities; they're useful for people who might be paralyzed. They're not there in a general way yet. We're still a little ways out, but eye tracking is already here. Eye tracking is here today, and it can work like mind reading. Think about it this way: the only part of your central nervous system that is exposed to light is your eyes. Your eyes are actually part of your brain, so we're seeing,
(15:51):
not your synapses, but effectively a part of your brain system, your central nervous system, when we look at your eyes. And as a result of the way the eyes move, the way you blink, things like that, we can essentially infer things about your mental state and what you're paying attention to.
For example, if you wanted to tell who somebody is attracted to, you could look at their eyes and tell the kind of glances
(16:13):
they might steal and where on the body they may look at another person. Do they make eye contact? Do they look at other body parts? Those are things that will make it clear whether or not this is a person that you're attracted to or not. And even the fact that you might look away shyly is an indicator that you might be attracted to them. So it's not simply when you stare, but it's how you look. Also, for things that you're interested in,
(16:34):
pupils will dilate. They dilate for purposes of regulating the amount of light, right; they're like camera irises in that they control the amount of photons that get into our eyes, to our retinas. But when you control for that, they also dilate when you get excited about things. So if you were to tell me about something coming up that was interesting, my pupils would dilate naturally and I'd be excited. So it's a good signal to tell if people are interested or
(16:56):
disinterested. Do you think the advertisers want that? Of course they do. They want to know, when they show you that car or that can of soda, are you interested in it or not. So they're going to want to see that data, to know in advance.
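As a toy illustration of the kind of inference being described here, the following is a minimal sketch of scoring a viewer's interest in an on-screen item from gaze dwell time and luminance-corrected pupil dilation. The names, thresholds, and data are entirely hypothetical; this is not any vendor's API or a validated model.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    on_target: bool     # was the gaze inside the item's bounding box this frame?
    pupil_mm: float     # measured pupil diameter
    baseline_mm: float  # expected diameter for the current screen luminance

def interest_score(samples: list[GazeSample]) -> float:
    """Crude interest estimate in [0, 1]: combines how long the viewer dwelled
    on the item with how much their pupils dilated beyond the light-driven
    baseline while looking at it."""
    if not samples:
        return 0.0
    on_target = [s for s in samples if s.on_target]
    if not on_target:
        return 0.0
    dwell = len(on_target) / len(samples)
    # Positive residual dilation (beyond what luminance explains) suggests arousal.
    dilation = sum(max(0.0, s.pupil_mm - s.baseline_mm) for s in on_target) / len(on_target)
    dilation = min(dilation, 1.0)  # saturate at ~1 mm of excess dilation
    return 0.5 * dwell + 0.5 * dilation

# Example: the viewer looks at a car for most of the window with mild dilation.
frames = [GazeSample(on_target=(i % 10 != 0), pupil_mm=3.9, baseline_mm=3.5)
          for i in range(120)]
print(round(interest_score(frames), 2))
```

Chained inside the stimulus-and-response loop described next, a score like this is what would let a system rank a hundred swapped-in items by how strongly each one grabbed you.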
And now, if you put people in a tight loop, like what we call a Skinner box, right, where you do stimulus and response and you keep changing the experiment as you go, in a matter of minutes
(17:18):
you could imagine the system showing me a hundred different cars to figure out which one I looked at in a way that indicates that I like that car, and I won't even notice that they're changing the car, because of the thing we said before: when we blink and when we look away, we don't notice subtle changes in the world. And so therefore, they could have cycled through a hundred different cars in one spot in my world and figured out which one
(17:41):
I responded to, and then they're like, oh, that's the one he likes. And they could do the same thing with people, figure out which ones I'm attracted to, which ones I'm not. And at the end of the day, you can imagine some subversive advertiser making an advertisement that takes the car that I like and the people that I like, putting them together, especially if they're people I know, like my best friend and somebody I might have a crush on
(18:02):
, or whatever. Put them in the car, build a commercial around that: I'm driving off into the sunset. You know that's going to push my buttons. It's going to push everybody's buttons when they see the things that they care about. And so we're very close to a world in which that is real, and the simplest form of this is just going to be: we've all seen generative AI at this point and diffusion models. How hard is it going to be to replace that can of Coke in your
(18:25):
hand with a can of Pepsi, right? So your social media feed in the near future could very easily, unless we say no, become something in which we're the advertisement to our friends and family. We're the product placement, and we have no control over how that's used. So the technology is already there.
It's just a matter of the people being clever enough to exploit it and us being...

Debra J Farber (18:42):
Yeah, definitely. Amazing thought experiments there, and you can connect the dots to see how that could be used. It's interesting. The last episode that I published last week was around embedding privacy into A/B testing. Let's talk about experimentation for a second.
(19:03):
We didn't unpack what I'm about to talk about last week, but I think you're the perfect person to talk about it with. And that's: when we design products and services for safety in what we call highly regulated industries, whether it's, let's say, transportation or even healthcare, real mission-critical or sensitive data, we make sure that we're not
(19:24):
experimenting with, like, do we think this will fit on the engine? We test everything. We're very, very careful. We have protocols. And then, if we're going to experiment on something, especially on people, like in healthcare (even HIPAA; not to just bring up HIPAA, it's coming up because of healthcare), there's a concept for research and experimentation that there's an institutional review board that
(19:45):
will take a look at these potential harms, maybe do a little threat modeling, and you can be a little looser with the data there, too. You have maybe broader data sets you could work on, as long as you agree not to try to re-identify some things, to allow for some innovation there. But there's usually a group of people that are stewards on behalf of human beings, almost an extension of the doctor-patient confidentiality-like social contract.
(20:06):
We see stuff like LLMs and how OpenAI just rolled out experiments on people, and how we've been doing this with A/B testing and advertising, and the technology is getting more and more towards analyzing sensitive data and manipulating people. Is that an answer? Do we need institutional review boards? Is that something that we can't do in a decentralized way?
(20:28):
Do we need laws? How can we best address these experiments? At least that piece of the manipulation and privacy problem.
Avi Bar-Zeev (20:37):
No, that's a great series of questions, I think. Just to go back to basics for a second, because you mentioned this happens in medicine, right? If you got educated as a doctor, you had courses on ethics. There's a Hippocratic Oath, and you learned the Hippocratic Oath, and you learn that essentially it boils down to first do no harm, but there's a lot of detail in there.
(20:57):
If you go read it, there are things that doctors learn that you should or shouldn't do, and experimenting on humans is one of those things. Experimenting on children is even more restricted, because children can't give you their informed consent. So science progresses fairly slowly when it comes to diseases around children, because you want to give them the treatment; you don't want to give a child the placebo when you start doing these medical experiments, right? So you have to think about the ethics of these situations. So doctors get ethical training.
(21:18):
Who else gets ethical training? Journalists get ethical training. Lawyers get ethical training, even though their ethics may be a little different in terms of defending somebody you know is guilty. You wouldn't want a doctor taking that same approach. But lawyers have a set of ethics. Civil engineers have it around their professional engineering: bridges shouldn't fall down. Computer science? It's completely lacking. I had no ethics training in school whatsoever.
(21:40):
The closest thing I had was philosophy, and that was not about ethics. That was about life and the universe and all that stuff. What we need to do is start training our students in computer science, especially in AI, but also in XR, spatial computing, because it's probably a close second in terms of the risks and the harms that can happen. Like in civil engineering, if your software fails, it's like the bridge
(22:01):
falling down, so people get hurt. There's liability that comes with that. And because there's liability, there needs to be training. There need to be things that we do so that we can at least say: at least we followed the rules that we knew were the right rules, even though the outcome failed. We have less liability because the process was done correctly; it's just that something we couldn't control might've failed. But if we didn't have a process in the first place, of course
(22:23):
we should be liable, if we're going to just throw things out there and not care about the harms that can happen, which has been the motto of certain companies, right? It has literally been: do it and see what happens. That's what we need to address really, really carefully here. And so I think that's the argument: we should first understand ethics and what is ethical and what isn't, and then, you're right, the IRB
(22:45):
flows from that.
Once you have ethics, then one of the realizations that you make when you study ethics is: I can't be my own judge, jury, and executioner, because I'm always going to give myself a pass. I can't be the one that evaluates my software. I've always hired UX researchers to come in and tell me where my stuff sucks, because I'm not the best person to judge it. They watch people use it and learn and then tell me where my
(23:05):
stuff is broken. I should be good at that, but nobody's good at that. Everybody needs a fresh set of eyes to look at it. So we need these IRBs of people whose salary doesn't depend on the answer aligning with what the company wants to do. That's the key to an IRB: they have to be free to say no, that's a bad idea. And the problem is that, as an employee
(23:25):
of a big company (I forget who the quote was from), no person likes to criticize something that would undermine their salary, right? I'm kind of paraphrasing that, and it's true for all of us. I don't criticize companies that I own a lot of stock in; just naturally I want them to do well. I'm not going to go out and criticize them every day, so that's just normal. And IRBs are a great way to approach it, and I think ultimately we'll probably see something in computer science
(23:47):
that's like what doctors have, which is maybe licensing, but at the very least boards where you can take issues, where you can say these people really didn't think this through and they did a lot of harm. And it isn't just that you have to appeal to the government or a civil suit; there might be groups within the industry that try to keep it within the industry but also sanction people who are doing harm. So doctors have that and journalists have that and
(24:09):
lawyers have that. You know, you could be disbarred, right? That's not a government function; it's a function of the lawyers themselves.
Debra J Farber (24:15):
Indeed.
Yeah, it'll be interesting to see how that shapes up, but we do know that that's going to take time, because we're not even close to that right now, so a lot of damage can be done. Let's turn to maybe some protections, right? So what about anonymization? Is making data anonymous enough to protect people and their
(24:35):
personal data? And is it even possible to anonymize VR and AR data to the point where it's still usable but considered anonymous?
Avi Bar-Zeev (24:40):
Yeah, I think anonymity is a lie. I'll just be real blunt about it. I think that it made some sense in the past that you could hide certain details from, let's say, some patient's file. You could take the name off and put a code in instead of the name, and that data was not going to be identifiable because you didn't know where they lived and maybe you only knew their age. But with the size of the datasets that we have now,
(25:02):
nobody can really be anonymous. We first saw that years ago when AOL put out their search logs and people were able to reverse engineer who was who by the queries that they made to AOL. AOL thought they anonymized it. They pulled out the identifiers, but people were able to reverse engineer it, by process of elimination figure out who was who. And when it comes to VR, researchers at Berkeley have
(25:24):
already shown that, just given a few minutes of recording of your body motion, it's unique enough that you could be re-identified in seconds. So if a company has been recording you for the last year or two as you use their VR headsets, and if they kept that data and they don't delete it (some friends who are employees there say they don't keep it, but they don't promise not to
(25:46):
keep it, so who knows?), the fact is, they could re-identify you even if they said you're anonymous. So I don't trust anything that says we're going to anonymize you if it includes any stream of biometric data. Your eyes are very individual, just in the way they look. So many parts of the eyes are unique that they can be used for better than fingerprint-level identification. Iris ID on the Apple Vision Pro is pretty rock solid.
(26:07):
I'm not going to say exactly how it works, but you can imagine the things that are different about everyone's eyes, including the irises; but the retinas are different, the sclera, the blood vessels on the eyes, are different in every person, and it's way easier to tell the differences between people than it is with fingerprints. Fingerprints actually have a lot more in common, and you never get a full fingerprint. It's always partial. In any event, this stuff is very revealing.
(26:29):
It's crazy; even just our walk, our walk cycle, the way we move our bodies, is revealing. So anybody who thinks that they're going to go use anonymity as a tool to say, hey, this is totally private because it's anonymous: no, don't trust it. And then some of these devices, I won't even use them, because until they promise me that they're going to delete that data, I don't want to give them that head start on being able to collect something about me that they could use to re-identify me
(26:51):
later. I'm just like, no, I'll just use something else. There's plenty of other options.
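As a rough illustration of why "anonymized" motion streams can be re-linked to people, here is a minimal sketch of nearest-neighbor re-identification from summary statistics of a motion trace. The feature set and data are hypothetical stand-ins, not the Berkeley researchers' actual method.

```python
import numpy as np

# Hypothetical gallery of enrolled users: each entry is a small feature vector
# summarizing earlier motion recordings (e.g., head height, arm reach, gait
# cadence, lateral sway). Values here are random stand-ins, not real data.
rng = np.random.default_rng(0)
enrolled = {f"user_{i}": rng.normal(size=4) for i in range(100)}

def extract_features(motion_trace: np.ndarray) -> np.ndarray:
    """Toy summary of a motion recording: the mean of each tracked channel."""
    return motion_trace.mean(axis=0)

def reidentify(anonymous_trace: np.ndarray) -> str:
    """Match an 'anonymized' trace to the closest enrolled user (1-nearest-neighbor)."""
    query = extract_features(anonymous_trace)
    distances = {name: float(np.linalg.norm(query - vec))
                 for name, vec in enrolled.items()}
    return min(distances, key=distances.get)

# A new session from user_42: similar body motion plus a little day-to-day noise.
frames = enrolled["user_42"] + rng.normal(scale=0.05, size=(600, 4))
print(reidentify(frames))  # expected to print: user_42
```

The point of the sketch is simply that stripping names does nothing when the signal itself is a fingerprint.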
Debra J Farber (26:55):
Right, we think about these things, right? I worry about the public, who doesn't even know what questions to ask, to be worried or to make the decision to give...
Avi Bar-Zeev (27:02):
They're putting their kids in these things. They could have a lifetime of no anonymity; things from when they were 10 years old to 13 years old are going to be able to be used later to figure out who they were. And that stuff could be used in a good way, right? The company could say, hey, we realized that the person who's using the headset now is only 70% of the height of the person
(27:24):
who bought it, and their arms are shorter, everything is shorter, and they move like a child according to our stats. So maybe we should put the parental controls on, even if they're pretending to be the parent; somehow they got the parent's password. No, sorry, you're not identified as the parent. So the positive of Iris ID or other biometric markers is the device can be locked to us in a way that it's safe, that only we get to see our own data and kids can be protected from
(27:46):
things that they're too young to see.
Debra J Farber (27:53):
For some reason, the same companies who are not protecting our data are also not identifying the children using them. That then brings us to: how can product and development teams foster innovation in the space but at the same time minimize harm, right? How do you strike that balance between the good use cases, the ethical use cases, and the ones that are, like, really scary, that can cause harm to individuals?
Avi Bar-Zeev (28:12):
The most important thing is to take that time in the development process to think about the answers. I think we're all capable of it. I mean, I'm glad there are people who are privacy professionals, who study this and learn and can teach other people. But ultimately all the engineers, all the designers, everybody working on the product needs to become as well-versed in these issues so that they know how to discuss them. They know how to ask the question of: we choose A or we
(28:34):
choose B, what are the harms and benefits of either choice? And then make the right choice, even in the face of the boss saying get it done. The pointy-haired boss may be sitting there going, I don't care, A or B, I just want it to ship tomorrow, and you're left with the decision of what's the right thing to do. We should have enough collected data, that ammunition, about
(28:57):
things that have been done in the past, the mistakes that we've made in the past. If those were available to us, we'd go back to the boss saying, hey, this other team on this other project chose A in the past and it really blew up on them. They didn't do it right. So we should choose B. We know B is a better option; even though it seems like it might be more expensive, it's actually cheaper in the long run. Where is that data? Where do we go for that information, the case studies and the postmortems and the experience that people developed
(29:18):
over 30 or 40 years, of what works and what doesn't work? It can't just be tribal knowledge, right, and it can't just be a few people that have that information. And nobody's ever going to be able to publish it in the form of a book of you-must-do-exactly-this, because every product is different, every situation is different, and so much of it is cutting edge. What we shouldn't be doing is making the same mistakes. If we have to make mistakes, and we will make mistakes, they
(29:39):
should always be new mistakes. I like to say, you know, don't make the same mistake twice; always bigger and better, like, let's fail up, but let's not keep repeating the bad things that people have done for years. And the best example of that is with the metaverse. Holy cow, you have people who built new metaverses in the last five years that made the same mistakes that massively multiplayer games have made for 30 years.
(30:00):
Did they not play these games? Did they not talk to the designers and engineers on those games to figure out what goes wrong in terms of harassment and griefing and safety? We should not make that mistake ever again, and that's part of the problem: people don't listen and they think that this time is different, and it's an ego thing in some cases. And the realization has to be: look, we're the ones building these
(30:21):
things, we're all the makers of these things. We may be the decision makers, we may be designers, we may be engineers, but all of our names are going on that thing and we want to be proud of what we build. We want it to not hurt people. We care. I don't know anybody in the field who doesn't care about the result of their work. Right, they all care. They just don't all have the information. So let's spread the information, let's make sure it's available, and let everybody do their best job at making the best possible
(30:44):
products.
Debra J Farber (30:46):
I think that makes a lot of sense. So far, I'm hearing we need a lot more education for developers and product folks, and conversations with people like yourself, or listening to videos and talks and such, where we could get that knowledge of what has gone wrong in the past, so we don't keep repeating it in these tech cycles that get funded and keep making the same mistakes, but at larger scale. Now, before we go on, how do we think differently about AR and
(31:10):
its set of risks versus VR and its set of risks?
Avi Bar-Zeev (31:14):
That's a good question. I will tend to argue there's not a lot of difference between AR and VR technologically, and the Apple Vision Pro is a great example of that. Apple wants to call it spatial computing, but if you peel that back, you have a dial. You can go between the fully real world with very little virtual stuff, even though you're seeing things through a display, right, and cameras. It's all virtualized to some degree, but you're seeing the
(31:36):
real world one-to-one, and there's a lot of effort that went into making that really good, making it one-to-one. But you turn the dial all the other direction and now you're in Joshua Tree National Park, completely immersed all around you. So AR and VR are in the same exact device. You could be at either end of the spectrum. But there's a functional difference, not as much of a technological difference. And the functional difference is AR is best used for things that
(31:58):
are related to you and your life, things that are about the here and now, is the way I like to say it. So anything that is about improving the quality of your work, your social interactions, your cooking, looking at the fridge and seeing what could I make for dinner: those are all AR-type experiences. They're all based in your reality and your daily life. And think about how much time any of us spend immersed in our daily
(32:19):
lives versus not. I'd say it's probably 90% to 10%. We spend 90% of our daily lives doing things that are just pragmatic, like talking to people, socializing, working, whatever. 10% is entertainment, you know, escape: watching a movie, playing a game. Some people do more than 10%, but I think on average it's about a 90/10 split, and so VR at most is ever going
(32:42):
to probably be, on average, 10% of our time, and then we use it for escapism as well. It's really useful for doing things that we can't do in real life, going places that don't really exist or going places that we can't easily get to. And the kind of AR experience we would have if we were both wearing third-generation Apple Vision Pros: we would be doing this, and you would see me in your office and I would see you in my office or living room or wherever you
(33:03):
happen to be, and we would feel like we're in the same space together. The goal is that we feel the connection of being able to make eye contact and all the social cues work normally, and we're in the same space together, but we're each in our own space. We don't have to travel anywhere. We've just literally invited you over for a talk. That's the goal, and the benefit of that is it's going to
(33:24):
reduce travel a lot. You won't have to commute, you won't have to fly as much, right? That's going to reduce pollution quite a lot, even though these devices also spend energy. That's not free, but it's a lot less than an airplane. It's a lot less than a car. That's the future. But AR is much more of that socializing and talking to people, and VR is much more of the escapism. Now, when I'm 99% immersed in my own house, there's not a lot
(33:44):
you can change about the world. You can add subtle things in the world. A good example is, let's say we were in a meeting room and you didn't want to have to have a real clock, but you wanted to let people know when the meeting was ending. Well, we could start to just fade the color to sunset. Right, the walls start becoming more reddish as the meeting gets towards its end. So there's a real subtle cue added to reality. You don't have to change a lot.
(34:05):
You just can do very minimal things to give us the important information that we need. In VR, it's going to tend to be more overpowering. It's going to tend to be you against zombies, you going to the moon or the top of the Eiffel Tower, things that are much more dramatic. But there's a lot more opportunity for really changing the world around us. And that thing I talked about, cycling through a hundred different cars, would work much better in VR, because in AR
(34:28):
there's an actual car on the actual street, and it's going to be a little while before you could seamlessly replace that car with another car and no one noticed, right? You'd have to erase part of reality if you were going to put a smaller car in the place of a bigger car. So that stuff is not going to work as well in AR. But in VR, you know, all bets are off. Like with the redirected walking I talked about earlier, you
(34:48):
could really change somebody's world dramatically, and it brings up all these issues of identity and harassment and griefing. None of those things come into play, they're not hard issues, when you're doing AR interactions, like you coming into my house: you're being yourself, you're visiting me. There's not a lot of safety concerns, because we know each other well enough to have invited the other person over.
(35:09):
But when you start talking about the metaverse, whatever that is, now you're talking about the mall or the world, which contains all the good and bad, good behavior, bad behavior. There's no police in the metaverse yet, there's no rules in the metaverse yet; it's the wild west, and so it's inherently unsafe. I wouldn't send children into that kind of situation without
(35:31):
parental supervision anytime soon, because you just don't know what is going to be going on there. It's like I wouldn't send my 14 year old to the mall at this point by themselves. Even a mall with police in it, I probably wouldn't do, because who knows what's going to happen? 16, 17, sure, why not? But, you know, 13, 14, I'd still have some qualms about it. 10, certainly not. So this is what we're dealing with, but yet it's happening
(35:51):
all the time. Parents are strapping HMDs to their kids' heads at age 10, and we don't even know where the kids are going and what they're doing, and I think we have to pay a little more attention to that.
Debra J Farber (36:00):
Yeah, that is a really great point, and thanks for distinguishing virtual reality from augmented reality and how you think about those. Given that, do you have any advice for privacy technologists, and just technologists generally, who are building AR and VR systems, especially when it comes to privacy? Maybe just some practical guidance, principles to keep in mind?
Avi Bar-Zeev (36:21):
I think just the fact of having principles is important. I think every project should probably start out with them. You know, at Disney, whenever they started a new movie, they had a book they called the Bible, which is not the religious text, but it was: when we say purple, this is the purple, the exact color purple we need, and so the character is always going to be the same color purple and everybody gets the same guidance. So the Bible for any project should include: well, here's what
(36:42):
we believe. This is what we say is right and wrong, and this is what we're going to stick to. Ideally, that flows through the entire product lifecycle, and the customers are informed of what that means too. So the customers wind up understanding the principles. These are the things we promise and these are the things we're going to stick to, and these are the things we're going to fix if we mess them up, and these are other things that we didn't
(37:03):
necessarily promise. So if a company is going to advertise to us and use ad tech, let's be clear: that's how they're making their money. Tell us upfront, be clear. You're not our customer. You are essentially the chips in the casino. The real customers are the advertisers, and they're betting on you, and they either make the money when you buy their product or lose the money when you don't buy their product. And the house?
(37:24):
You know, the companies that are ad tech companies need to be really clear that they are the casino that's going to make money no matter what people bet, and we're the chips. The humans are the chips in this case. We're the ones being bet on, or the cards, or whatever you want to say, and let's just be honest about that. So people can make their choice as to whether they really want to be a part of that or not. And if you're doing other kinds of products that don't involve
(37:45):
ad tech, which I tend to be negative about, let's tell people about the benefits. Let's espouse that, you know? Look at a product like Second Life. I worked on it. I didn't really understand the economics of it really well. Philip Rosedale understood it way better than I did back then, but now I've listened to it a lot more, and it turns out Second Life made more money per user than Facebook does, without any advertising. They made more money by just building a world and letting
(38:07):
people build stuff and trade stuff, and didn't have any ads, and it was much more lucrative. And so, I don't know, there's got to be a way. There's got to be other business models out there that people could find that really support the work, and I would encourage people to get creative and figure those out and not just be lazy and pick the ones that make a little bit of money but aren't the best. I think this won't be the choice of the ad tech companies, but I
(38:28):
think one of the things that we're going to find is that it'll be necessary, at the end of the day, to figure out a way to firewall the advertising from the personal data. I think that's probably the key to making it survivable. There was this thing called the Glass-Steagall Act that separated the banks from Wall Street. The banks were conservative and they weren't allowed to bet
(38:50):
with your money. They had to use it in conservative ways. And then there's Wall Street, where you can take crazy bets on things and make billions of dollars, and they were kept separate because we knew one was safe and one wasn't. And then we got rid of that, and then we had 2008. We saw the consequence of getting rid of that separation, and we haven't learned that lesson yet. But I think we're going to need the separation between the ad tech, which is technically First Amendment,
(39:10):
right? I mean, advertising is protected by our constitution. No one's saying ban advertising, right? It's a right of a company to put out there and say why they think we should buy their product. That's all good. But should they be able to use our personal, private data to manipulate us? No, I think that's going too far. That's an asymmetric power where the computer, and
(39:31):
especially now in the age of AI, the computer, has so much of an advantage over us that we're going to have to prevent that one way or another. It's like walking into a room with the best salesman ever, and they have access to our entire life history and can read our mind practically, and no normal human is going to be able to stand up to that kind of treatment. We're all susceptible to that kind of manipulation.
(39:52):
So let's be clear about that and avoid it. And I think the thing we should be trying to help each other with as much as possible is sharing the information, sharing the data. It's for everybody's benefit to figure out where we messed up in the past and be honest about it and open, so that other people don't have to make those same mistakes, and let's try to foster that as teams and make it available. Game developers are really good about this.
(40:12):
We can learn a lot from them. They always have postmortems when the game is done. They teach other game developers what went wrong. They share best practices and best ideas. And it's not that games are perfect; there's a lot of practices in games that people could criticize, especially if there's a lot of misogyny, Gamergate kind of stuff that goes on. But there's a lot of positives in the community and the culture and the way it works, and we should find those and try and
(40:33):
reinforce those.
Debra J Farber (40:34):
Yeah, and my understanding of games, the successful ones, is that they take a lot of effort and money, versus running
(41:00):
with scissors and just creating your product and getting traction and then thinking about putting safety around it later on. I wonder if they could even stop long enough to actually even have a postmortem. I'm not asking you to solve this, but the cycles seem different. How do we put that ethical perspective, and learning from one another, and postmortems, into this process?
Avi Bar-Zeev (41:20):
I think that will come with the liability, when companies actually realize that they can lose more. Like, a game developer will know this instinctively: if their game is not fun to play, if they put out their crappy game, they don't make any money. They lost all this money they put into development and they're not going to recoup any of it. With the big companies, somehow they have enough crossovers and connections that
(41:41):
they can put out crap and they've survived it. But is that going to be true when there's lawsuits and more government sanctions, and when people have the choice of going between crappy product A and good product B? When they have that choice, they're going to vote with their feet, and these companies are going to realize that they can't win by doing the cheap and easy thing. That's not the way to get there.
(42:02):
It's a great way to prototype, and again, the game companies will do quick and dirty prototyping, but it's in-house. The only people who are suffering from that are their game testers. They're not going to, hopefully, ship that, because they'll know they'll lose. And so the big companies should do tons of prototyping, in which fail fast and all the slogans you could imagine that mean the same thing are all fine within your
(42:24):
prototyping group. Try it, see how it feels, go quickly, but don't ship it. Don't ship it until you get it right. The people and customers are not the guinea pigs. They're not the beta testers, unless they sign up for it. Stop making them be cannon fodder, essentially, for bad decisions that we make in the companies.
Debra J Farber (42:42):
I think that's great advice. I think it's now also a great time to ask you, then, to tell us a little bit about the XR Guild.

Avi Bar-Zeev (43:06):
I said, let me see what I could do here, and it seemed like the biggest gap was in the ethics, and not that people are unethical people. All the people I'm friends with and know are highly ethical people. But it wasn't organized, and so we said, look, let's make a group. We'll call it a guild, not necessarily in line with the Hollywood guilds. It's not a union per se; there's no collective bargaining.
(43:26):
It might feel like a gamer thing, like guilds in a game, but it's more like guilds in the old ages, more like the masons, right, or the carpenters. It's more like we're a trade of skilled people, whether we're programmers or designers; it's all just as important. And what
(43:46):
we can do is we can teach the new people how things have gone on, we can apprentice, we can do all the things that guilds have done, and we can also create a library. We set out to do all these things. We're in the process of building a library of all this information that people should know about when they work on projects. We need volunteers, but we have started, at library.xrguild.org. So there's some library work going on there, and our plan with the library is, when the AI stops being hallucinatory, we're
(44:09):
going to throw the AI on top of the library, so you have a librarian that you can go and ask questions, say how do I solve this particular dilemma, and we'll give you the actual resources rather than making up some BS. We'll actually give you the postmortems and the papers and the things that have been written about the right answers to those questions. But we also know that people need support. They don't just need information. If you work at a company and the boss is putting pressure on you
(44:31):
to do something in a bad way, or your team is going in the wrong direction, we want you to have the support of other people who you know feel the same way as you do, so that you can have the conversations of how do we solve this. That should happen within the NDA boundaries of people who are working on secret projects that they can't talk about outside. You should be able to find other people in your company who you can talk to safely without worrying about being fired or
(44:55):
ostracized for taking a stand on something. But you should also have support outside the company if you ever need it, so that, you know, in the worst case, you can find a more ethical job. If you really can't fix something, you can tap your network and say, where can I do the same work, but for a company that actually cares about the ethics, versus where I'm currently working? Because we don't want anybody to have to just up and quit. One of the worst things is to have to take a stand without having another job lined up.
(45:16):
So we want everybody to feel secure in their ability to work on things that are ethical and in line with helping their customers. So we want to do all that. We're creating a mentoring system so that you can partner up with somebody more experienced than you in a field, to have that networking. We're doing the library, we do meetups and talks and various things. We try to do all of that, and the challenge is it's very hard
(45:39):
to do that as a nonprofit. We're trying, but nobody's making any money at this. The companies that are actually doing this well are actually for-profit companies. They hire people to build curricula around the right way to do it. As long as the companies are willing to pay for that, great, more power to them. But what if you have to tell the company something it doesn't want to hear? They're not going to pay for that. So somebody still needs to fill in the gaps for these ethical
(46:01):
dilemmas that aren't always the common wisdom and need some additional thinking and tire-kicking outside of the company. So we'll try to help whoever we can help who's working on those things and also plow ahead. So we have partnerships with other groups, and right now we're just in a mode of trying to add as many members as we can and grow the organization and get a little bit of a financial base
(46:22):
so we can create the kinds of materials that will help people. That's been my mission for the last...
Debra J Farber (46:27):
If people want to find out more or become a volunteer themselves, what's the URL for that?
Avi Bar-Zeev (46:34):
It's pretty easy to remember. It's just xrguild.org. Orgs are good for nonprofits. Just remember .org, and XR Guild is all one word, and it's super easy to find. We have some materials right there on the website. People can start learning about us and what we do.
Debra J Farber (46:46):
Excellent. And before we close, are there any books or papers or other resources you would recommend to the audience to learn about the history of the field and the risks to safety, privacy, freedom, security, all of the good stuff? Anything topical or recent that you'd like to call out?
Avi Bar-Zeev (47:05):
There's two books that are pretty recent that I would plug. Some of our members of the Guild actually are writing these. So the one that I just finished most recently is called Our Next Reality, by Alvin Graylin and Louis Rosenberg, and I work with them on a lot of these issues. They wrote a book that does a really good job of explaining the issues and walking people through it, and it's written so that they take opposite sides of the issues and can
(47:27):
argue the issues, and so it's really interesting to see the back and forth. And then the other one is The Battle for Your Brain by Nita Farahany, who is a law professor, I think at Duke, if I remember right. She's very much into the brain-computer interfaces, but I think she's learning more and more about the spatial computing and XR side of it as well, and I've exchanged comments with her about eye tracking as well, and I think
(47:48):
she's aware that it isn't only about probes in the brain that we have to worry about for the future of our mental autonomy. I think the most important thing about that book that people should take away is, if we don't solve this, what we are risking effectively is our mental autonomy. We can be manipulated in such a way that, if we don't stand up now, we may not be in a position to stand up in the future, because we'll be convinced that everything is fine, just as they
(48:11):
are in The Matrix: the world is pulled over your eyes.
Debra J Farber (48:18):
We don't want to get to that state where we've lost our ability to say no to the things that we're opposed to. No, we do not want to get to that state. Awesome. Are there any last words of wisdom with which you'd like to leave the audience today?
Avi Bar-Zeev (48:26):
I'm sure you have a mix of people who are both interested in this and people who consult for a living on these issues. It's all important, but I think the thing that I would say, my opinion anyway, is that it's everybody's responsibility, that nobody should be hoarding this knowledge. Our goal should be educating everybody and, ideally, if we do
(48:46):
our jobs right, we make our current jobs obsolete and we move on to the next, harder challenge. So we should hopefully not be doing the same thing over and over again, telling people the same three points of things they need to fix. Everybody should get good at those things, and then we should become experts in the next set of things, to raise the bar even higher and do even better.
Debra J Farber (49:02):
Oh, I love that. I think that's great. Avi, thank you so much for joining us today on The Shifting Privacy Left Podcast.

Avi Bar-Zeev (49:09):
Thanks for inviting me. It's been a pleasure.
Debra J Farber (49:10):
Until next Tuesday, everyone, when we'll be back with engaging content and another great guest. Thanks for joining us this week on Shifting Privacy Left. Make sure to visit our website, shiftingprivacyleft.com, where you can subscribe to updates so you'll never miss a show. While you're at it, if you found this episode valuable, go ahead and share it
(49:32):
with a friend. And if you're an engineer who cares passionately about privacy, check out Privado, the developer-friendly privacy platform and sponsor of this show. To learn more, go to privado.ai. Be sure to tune in next Tuesday for a new episode. Bye for now.