Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Amy Tyler (00:25):
Hello, and welcome back to the Red Farm Book Review. I am your host, Amy Tyler, and today I am interviewing one of the winners of the Canadian Book Club Awards in the education and nonfiction category. His name is Ignacio Cofone, and he's written a book called The Privacy
(00:48):
Fallacy. So before we get to him, I wanted to let you know this is a little bit of a mini, a bit of a baby episode, because he was supposed to be in the last episode, but he's in the UK, and there was some scheduling stuff. It's kind of my fault, but the good news is he gets his own episode. So
(01:09):
let me tell you a little bit about the awards. The Canadian Book Club Awards are Canada's largest Readers' Choice Awards. They're open to all authors, regardless of publishing type, whether you're self-published, traditionally published, or a combination of the two. And submissions for the 2025 awards
(01:32):
are open. So if you want to submit your book, you can, and I'll put a link in the show notes. And what makes these awards Canadian? You don't actually have to be a Canadian to win them, but they're read and voted on by Canadian readers. So if you want to be a verified reader for the 2025 awards, which will be announced later this year, you
(01:55):
can do that, and I'll put a link in the show notes to that too. A little bit about Ignacio: he's a professor of law and regulation of AI at the University of Oxford, he's also affiliated with the Yale Information Society Project, and he's a former professor at McGill University. His
(02:19):
interest is in AI and data governance, with a focus on regulatory design and remedies. So with that, we're going to move over and talk with Ignacio. It's so nice to meet you, Ignacio, and thanks for joining
the podcast.
Ignacio Cofone (02:35):
It's very nice to meet
you too. Thank you so much for
having me here.
Amy Tyler (02:39):
I wanted to start with just the first question, which is: how did you decide to specialize in ethics and AI? It's super topical, but how did that come about for you?
Ignacio Cofone (02:51):
Yeah, so I became interested in AI ethics by working on privacy, as you might have noticed from the book. Over time, when I was working on privacy, both from a policy and from a law angle, it started becoming clear to me that many of the pressing issues regarding data and privacy, like how people are profiled, how they're nudged, how they're scored,
(03:15):
wouldn't really be understood without also thinking about the systems and technologies that drive those different data practices. And so the book tried in different ways to bridge those two ideas, to show how connected thinking about AI ethically and thinking about data and privacy are. What drew me in was seeing how technology, and AI systems in
(03:39):
particular, can concentrate power in lots of invisible ways. They can amplify power imbalances between individuals and large platforms. And because of that increased power imbalance, they can often escape public scrutiny, creating harms that traditional frameworks fail to recognize and address. And so
(04:00):
that tension between what the law and public policy see for AI and what actually happens and causes harm is what I try to
keep at the core of my work.
Amy Tyler (04:13):
Okay, so I'm very curious about the writing process for you. You're an academic, so you're probably very comfortable writing book chapters and scholarly articles. You're obviously comfortable with public speaking and presenting material, but writing a full book is something different. So
(04:35):
I just wanted to know, what was that process like for you? And maybe tell us something that surprised you
about the process.
Ignacio Cofone (04:45):
So I wanted to write this in a way that would be exciting for people who work in the field, but that could also be accessible to a general audience, because writing a book gave me a unique opportunity. It gave me the opportunity to write a narrative that connects the dots between many of the different specific problems that I was working on. When you write an
(05:06):
article or a book chapter, you get to focus on a specific aspect of a specific problem, but you don't always get to share your broad view of a field or your broad view of why you think something is a problem. The writing process was different in the sense that I tried to pitch it to a different
(05:27):
audience. I didn't write it for a group of experts; I wrote it for a general audience, which meant that I read more broadly than I normally read, both in terms of academic work and non-academic work. And I workshopped it differently than how I would normally workshop it, also with policy audiences and with non-academic audiences. And the exciting thing about it is
(05:48):
that it gave me space and the opportunity to ask not only why something was a problem, but also what caused that problem and what avenues we could potentially have to then fix
that problem?
Amy Tyler (06:03):
What do you mean by
workshop? So you would test out
your material? Would that be through a lecture? Or what do
you mean by that?
Ignacio Cofone (06:10):
Sometimes, yes. So sometimes it literally means going to a workshop and presenting it, not only at an academic conference, but also at an event that has legal and non-legal professionals, for example, or an event that has lots of regulators. And sometimes it just meant informal conversations with different people. I sent it to friends who
(06:32):
are not academics and are not lawyers. I sent it to regulators that I've met through other work on privacy and AI to see what feedback they would have. I sent it to people working on the technology to make sure that whatever I say about it is realistic and accurate. Because the timeline was longer and the argument is more general than
(06:54):
other pieces that I'm used to, right, there was this pretty nice opportunity to get input from several different groups.
Amy Tyler (07:00):
What's something that
someone told you you had to take
out or you had to change? Can you give an example?
Ignacio Cofone (07:09):
Something that I had to take out or I had to change? I used to have a very short conclusion, and that's because I tend to never read the conclusions of the books that I read, which, I now notice, is probably a terrible habit. I think most people just read the book and then skip the conclusion, because it usually doesn't say much that is new.
(07:30):
And people told me that was a terrible idea and I should actually write a longer conclusion, and so I did. I'm happy that I wrote it, because lots of things that, if I hadn't written it, would be kind of implicit throughout the book, I could make explicit at the end about the more general view. And I think that was part of the writing process of moving away from writing academic papers, where in the
(07:51):
conclusion, you just want to summarize or restate the arguments that you had before, versus writing a book for a different kind of audience, where you have to be more explicit about what the general view is.
Amy Tyler (08:04):
Okay, so let's explain to listeners what you mean by the privacy fallacy. So explain your book.
Ignacio Cofone (08:13):
The privacy fallacy is the contradiction that I think is at the core of how many legislators, companies, and even sometimes advocacy groups treat privacy. We often say that privacy is a very important social value, but then, when it comes to protecting it, we often find ourselves caring only
(08:33):
about the tangible and measurable harm that can happen as a consequence, like identity theft or financial loss, but
saying that privacy is a social value means saying that there's something valuable in it, not just around it, and if we forget that there's something valuable in privacy, there are lots of
(08:54):
deeper harms that we stop paying attention to, like how people can be manipulated, how they can be excluded from opportunities, or how decisions about their lives can be made by opaque systems. So I thought the book needed to address and push back against the privacy fallacy, because it is a book about the information
(09:15):
economy. The information economy is the system in which companies profit not only from the money that we give them, but also from the data about us that they have. And so by addressing the privacy fallacy, I wanted to cut through a number of misconceptions that interfere with the way that we can
(09:36):
understand the information economy. One of them, for example, is the idea that if you have nothing to hide, you have nothing to fear. Well, it could be that you actually have nothing to hide, and no negative material consequences will happen to you if you don't hide or keep something private, but it could be that you want to keep that information private
(09:58):
nevertheless. Another idea that cuts across the information economy, and that relates to this fallacy, is that consent ensures safety: that if we ask people to consent to data practices, then everything is okay. But lots of bad things can happen to people even if they agree to certain data practices. So I think taking the idea that we should not exploit people
(10:23):
through personal information seriously helps us see lots of aspects of the information economy a lot better.
Amy Tyler (10:30):
So this is sort of a what-if question. If you were writing the terms and conditions for a company, and no one really reads them, I mean, you might, because you're a lawyer, but I don't, you know, I'll just be in a hurry to do something and I'll just check it off. What would you do differently? Or what would you
highlight?
Ignacio Cofone (10:50):
Yeah, in this hypothetical, would the company let me write anything that I like? An ethical company? Well, I would first thank this wonderful company for being so ethical and wonderful and open to new ideas. And then, and it sounds quite general, but I think it's quite important, I would put in place a corporate duty not to exploit its users. And I think just by having this idea of non-exploitation
(11:16):
embedded at the core, we can get through a lot. Part of the problem that we have, both with how we write terms and conditions and how we write our privacy laws, is that we try to be as specific as possible, but data harms are contextual. So when we try to be so specific and prohibit very
(11:37):
targeted practices, then lots of harms end up falling through the cracks, because they're context-dependent, because we didn't predict them. But if we tell companies, don't exploit the users who gave you their information, then we can catch all of those things, and lots of these things end up being common
(12:00):
sense. If I could go further, then I would also write some disclosure requirements where the company discloses not just the mechanics of data collection, as terms and conditions often do, but also the potential for aggregation, for influence, and for harms at scale. And I think that would be more helpful for
(12:24):
whoever ends up actually reading the terms and conditions, if anyone.
Amy Tyler (12:30):
If anyone. Now, when you say non-exploitation, so normally you see something that says you have to give permission to share your data. How is this different? Like, what's something other than giving your data to another company that could then market to you? Give me another example, because you're making it broader.
Ignacio Cofone (12:50):
Yeah. So data sharing can be very good in many contexts and very bad in other contexts. Data sharing can be something as simple as a tech startup that is very responsible in how they handle data, but they don't have enough space on their server to have all of the data there, so they need to share with a third party, because they're renting their servers,
(13:11):
and that's a totally fine way to share data. Data sharing can be as bad as a company selling health information about its users to another company in a way that exposes them to discrimination and financial harm, or that exposes them to
(13:32):
manipulation. So when we say data sharing is okay or not okay, we're putting in the same bag lots of things that are very different, and it is impossible to come up with a rule to say, oh, this company should do data sharing, this company should not do data sharing. And it is wrong to put on the shoulders of
(13:52):
users the information-gathering process and the decision process to figure out which kinds of data sharing are okay and which ones are not. Particularly because, when saying data sharing is okay, they're basically handing that company a blank check to do the good types of data sharing and also the bad types of data sharing.
Amy Tyler (14:14):
Okay. So I'm curious. I think everyone's afraid of AI. I mean, I am. And is there anything in this space that gives you hope or makes you optimistic? Because it's scary.
Ignacio Cofone (14:30):
Yeah, I think we're often scared of AI for the wrong reasons. A lot of people, or we, are sometimes scared of AI because we think that it will become too intelligent and will try to wipe us off the surface of the earth, and that's highly unlikely. I'm not saying that no one should pay attention to that, but that shouldn't be the center of attention. I'm more
(14:50):
worried about other types of AI harms. I'm worried about misinformation and disinformation created by generative AI. I'm worried about the different ways in which we use predictive AI wrongly and make decisions about people that aren't fair. But what does give me hope is that I think the conversation is shifting in a number of ways, and we
(15:12):
increasingly see efforts, both in the regulatory space and in the activism space, that pay more attention to the question of power. And the question of power is at the core of data and is at the core of AI. Holding data about people is holding power over people, and holding that power over people allows one to either do lots of great things for them or to exploit them. And the ways that
(15:35):
we traditionally have to handle that power just don't cut it anymore. So when we see these new activist and regulatory efforts take power into account seriously, and when we see people filing lawsuits and trying to push back against abuses of that power, I think that's the kind of shift
(15:56):
where meaningful change can happen, and I am cautiously optimistic about things improving.
Amy Tyler (16:03):
Okay. And then I guess the last question, just sort of a fun question, is: what's the experience been like for you to just enjoy that your book is done? So now you talk to people, or you just have your book around. What's that been like for you, being a first-timer?
Ignacio Cofone (16:21):
Oh, it's great. It's great fun. Also, I think the publishing process gave me a wonderful break. I spent so much time editing in the last couple of months that I think I couldn't see it for a little bit anymore, and just the couple of months that a publisher takes to have it come out was just the break that I needed. And
(16:42):
Cambridge University Press was wonderful. They did it so fast and so efficiently. They were really great to work with. So, yeah, it was nice to get to talk about it with people, to speak with groups that I don't usually get to speak with when I do academic work, like you and your listeners, and to try to bring people into the conversation a bit more, to
(17:06):
try to have this not be just a conversation between lawyers and regulators about the technicalities of how to design AI regulation or privacy law, but rather to get input from people working on technology or in the corporate sector or in other fields about something that, I mean, it sounds distant,
(17:28):
but does affect everyone's daily lives in lots of hidden ways.
Amy Tyler (17:37):
Thank you so much.
Thanks for joining the podcast, and I really enjoyed our conversation.
Ignacio Cofone (17:41):
Oh, thank you. Thank you
for having me, and I enjoyed the
conversation as well. Thanks.
Amy Tyler (17:46):
So again, the book is The Privacy Fallacy: Harm and Power in the Information Economy, by Ignacio Cofone, and I
really enjoyed our discussion.
This is not a beach read, but it's a very well-laid-out book that addresses a very topical problem that we're facing. And
(18:07):
really what he's talking about is power imbalances, and how it's a fallacy that when we give our consent, when we click "I agree," we're really protecting ourselves, and that we need to really look at stronger systemic regulation that addresses kind of why corporations have the power that
(18:29):
they do. But what I really enjoyed from talking with him is that you can feel his passion about the subject matter. Also, you know, I personally am pretty afraid of AI. I'm nervous when I sign those forms that, as I mentioned, like I think most of us, I don't read closely. But he was more about,
(18:50):
let's just explain what's going on and come up with ways to make positive change. And it's good to know that he doesn't think that computers are taking over the Earth anytime soon, so hopefully that won't happen. Anyway, thanks so much for tuning in, and I will be back
(19:10):
with you in a few more weeks with another set of interviews from the Canadian Book Club Awards. Thanks so much for tuning in. Bye.