Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Allan (00:00):
Imagine it's 2030.
Okay.
You walk into your smart kitchen, maybe just gonna make some coffee.
Everything's connected now, right?
So how much of that room, think about the smart meter, the apps on your phone, how much of it is just, uh, silently collecting data on you?
We're doing a deep dive today into what the surveillance economy really means.
And, uh, maybe more importantly, are we actually powerless here?
(00:23):
Yeah, because our sources, they don't just look at, you know, the gadgets themselves.
They're drawing these lines, connecting global digital reliance, like who actually controls the internet's plumbing, directly to the super personal data coming from your devices.
So our mission today is to unpack the geopolitics of it all, this big picture, and then really look at the tools and the policies that might, just might, put you back in control.
Ida (00:45):
Absolutely.
And the conversation, maybe surprisingly, starts right at the top.
There's this dominant idea now among global powers that data, well, data is the new oil.
Allan (00:55):
Right.
We hear that phrase a lot.
Ida (00:57):
We do.
And that perception, that's elevated information from just a business asset to a core geopolitical thing.
It changes how states act, how the big platform companies act.
So understanding your personal data, what happens to it, means you first have to grapple with these massive global power shifts: global control over software, over infrastructure.
That's what lets someone make money off your smart fridge
(01:18):
data, basically.
Allan (01:19):
Okay, that's quite a leap, connecting my, uh, my toaster's habits to national security.
So if data is this big strategic thing now, how does that information power, as you called it, actually change how countries behave?
Ida (01:33):
Well, it fundamentally redefines what national security even is.
You've got technologies like 5G, you have advanced AI, massive computing power.
These things force major powers to think about security in terms of who controls the actual flow of information, which brings us to this concept, uh, the digital dependence index, DDI for short.
Allan (01:51):
DDI, okay, break that down for us.
Ida (01:53):
So the DDI basically maps
out how uneven this digital
landscape is across the globe.
And maybe unsurprisingly, our sources consistently point to the US as, well, the least digitally dependent nation.
Allan (02:06):
Why is that?
Ida (02:08):
It's primarily because US companies control so much of that foundational software infrastructure.
Think browsers, operating systems, social media platforms, search engines, the core stuff.
China and South Korea are definitely gaining ground, making strides, but most of the world is lagging pretty far behind in terms of controlling their own digital destiny, so to
(02:29):
speak.
Allan (02:30):
And you can really see that vulnerability, that dependence, when you look at Europe, can't you?
Yeah.
The sources suggest their autonomy gap compared to the US and China has actually gotten wider.
Ida (02:39):
That's right.
Allan (02:39):
So if you're a policymaker, maybe in Brussels, letting foreign tech giants run your digital show, well, that starts to look like you're handing over economic leverage, and opening yourself up politically, too.
It's being framed very explicitly over there as an economic risk, a security risk, and a political one, too.
Ida (02:56):
Exactly.
And if we bring that huge macro picture back down to the individual, back to you listening: the control of that core infrastructure, the software dependence we talked about, OS, social media, search, that dictates who ultimately gets to profit from your personal data stream.
It doesn't really matter where you live.
This unequal setup, this distribution of digital capabilities, it can even lead to what some analysts are
(03:18):
calling digital colonialism.
Allan (03:20):
Digital colonialism.
Ida (03:21):
Yeah.
Yeah.
Basically, where regions that are digitally dependent find their citizens' data is constantly being extracted, siphoned off, without that region getting the equivalent economic or political benefits back.
Allan (03:32):
Okay, so governments are fighting over the internet's backbone.
Fine.
But what does this actually mean for the average person?
Back in that smart kitchen in 2030, how much data are we really talking about?
Is it actually up for grabs?
Ida (03:46):
Oh, it's a staggering amount.
And often from devices you wouldn't even immediately think of as surveillance tools.
We need to look way beyond just social media feeds.
Take smart meters, for example.
They're pretty common now.
They aren't just clocking your total energy use for the month.
No, they're tracking your consumption at a really granular level, usually like every half hour.
Allan (04:05):
And this is where it gets both fascinating and, frankly, a bit ridiculous, isn't it?
Researchers, they've actually managed to figure out incredibly personal stuff just from those half-hourly smart meter readings.
Ida (04:17):
Like what, specifically?
Allan (04:19):
Well, we're talking about things like inferring people's sleeping patterns, knowing their approximate location within their own home, even telling if they're sitting down or standing up.
Wow.
And get this, they could even figure out which TV channel someone was watching.
Apparently, it's all deduced from these tiny electromagnetic interference signals that different appliances give off.
Ida (04:39):
Okay, that's... wow.
The moment you realize your toaster might know your TV habits better than your best friend.
Allan (04:46):
Exactly.
It just perfectly shows how every connected device basically becomes another sensor, logging aspects of your life in ways we couldn't have imagined even, say, 10 years ago.
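
To make that concrete, here's a minimal sketch of the kind of inference being described, assuming nothing more than one day of half-hourly kWh readings; the numbers and thresholds below are invented for illustration, not taken from the research discussed.

```python
import numpy as np

# Hypothetical half-hourly kWh readings for one day (48 slots), made up for illustration:
# low overnight baseline, a morning spike, an empty house, cooking, then evening TV.
readings = np.array(
    [0.10] * 12 +          # 00:00-06:00 asleep, fridge only
    [0.80, 1.50, 0.90] +   # 06:00-07:30 kettle, toaster, shower
    [0.15] * 19 +          # 07:30-17:00 house empty
    [1.20, 1.80, 1.40] +   # 17:00-18:30 cooking
    [0.60] * 9 +           # 18:30-23:00 TV and lights
    [0.10] * 2             # 23:00-00:00 back to baseline
)

baseline = np.percentile(readings, 20)   # estimate the "nobody active" load
active = readings > baseline * 2         # crude activity detector

def slot_to_time(i):
    return f"{i // 2:02d}:{(i % 2) * 30:02d}"

# Infer a rough daily rhythm from nothing but the activity mask.
wake_up = slot_to_time(int(np.argmax(active)))
last_activity = slot_to_time(int(len(active) - 1 - np.argmax(active[::-1])))
print(f"Estimated wake-up around {wake_up}, last activity around {last_activity}")
print(f"Likely-empty slots between 08:00 and 17:00: {int((~active)[16:34].sum())} of 18")
```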
Ida (04:56):
And while the smart appliances are maybe the surprising data collectors, let's not forget the foundation of this surveillance economy: the social media and app ecosystem.
Allan (05:07):
Right, their whole
business model.
Ida (05:08):
Their entire model hinges on harvesting user data.
Yes, the content you post, sure, but maybe more importantly, all the metadata.
Allan (05:17):
Explain metadata again.
Ida (05:18):
Things like your location when you post, the time of the upload, who you interact with, your scrolling patterns, how long you look at something.
Most of that stuff is logged automatically behind the scenes.
Allan (05:28):
And when you put all these different data streams together, that's where the real danger lies, isn't it?
Ida (05:33):
Precisely.
That leads us straight to what's called the mosaic effect.
This is a really sophisticated threat.
It's where sensitive, useful information about you isn't revealed by looking at just one data set, maybe one that seemed safe or anonymized on its own.
Instead, it's revealed by combining, by layering together, multiple different data sets that might have seemed harmless
(05:55):
individually.
Put them together, though, and suddenly you can see these deep, highly sensitive patterns about specific people, or even whole groups who might be vulnerable.
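
A toy illustration of the mosaic effect as described here: two tables that each look harmless on their own get re-identified by joining them on shared quasi-identifiers. All names and values below are invented.

```python
import pandas as pd

# 1) An "anonymized" health extract: no names, just quasi-identifiers plus a diagnosis.
health = pd.DataFrame({
    "postcode": ["2041", "2041", "3056"],
    "birth_date": ["1987-03-14", "1990-11-02", "1987-03-14"],
    "sex": ["F", "M", "F"],
    "diagnosis": ["diabetes", "asthma", "hypertension"],
})

# 2) A public register (think electoral roll) with names and the same quasi-identifiers.
public = pd.DataFrame({
    "name": ["J. Smith", "A. Jones"],
    "postcode": ["2041", "3056"],
    "birth_date": ["1987-03-14", "1987-03-14"],
    "sex": ["F", "F"],
})

# The mosaic effect: joining on the shared columns re-attaches identities
# to records that were supposedly anonymous.
linked = public.merge(health, on=["postcode", "birth_date", "sex"], how="inner")
print(linked[["name", "diagnosis"]])
```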
Allan (06:03):
That mosaic threat, that combination, it makes you feel completely exposed, especially when you think about powerful AI systems analyzing all this.
And since AI is such a big geopolitical focus now, too, how do these AI models themselves become surveillance threats to individuals?
Ida (06:19):
Yeah, the threat here gets quite technical.
AI models, especially the powerful ones, are often trained on huge amounts of proprietary, sometimes very sensitive data.
Think medical images or financial transaction records.
Now, when these trained models are finished and deployed out into the world, they themselves can become vulnerable to certain kinds of attacks.
For instance, if a model is, uh, let's say overtrained, meaning
(06:43):
it fits the training data too perfectly, it can actually risk revealing granular details about the original data it was trained on.
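
One simple way an overtrained model leaks, sketched under toy assumptions: because it has memorized its training rows, it is noticeably more confident on them than on records it has never seen, and that gap is exactly what a membership-inference attacker thresholds on. Everything below is synthetic; scikit-learn is used only for convenience.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "sensitive" records: 200 rows the model trains on, 200 it never sees.
X_train, X_out = rng.normal(size=(200, 10)), rng.normal(size=(200, 10))
y_train, y_out = rng.integers(0, 2, 200), rng.integers(0, 2, 200)

# Labels are pure noise, so the only way to fit them is to memorize individual
# rows -- a stand-in for an "overtrained" model.
model = RandomForestClassifier(n_estimators=100, max_depth=None, random_state=0)
model.fit(X_train, y_train)

def confidence_in_true_label(X, y):
    # Probability the model assigns to each record's actual label.
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

members = confidence_in_true_label(X_train, y_train)
outsiders = confidence_in_true_label(X_out, y_out)

# The gap between these two averages is what an attacker uses to guess
# "was this person's record in the training set?"
print("mean confidence on training records:", round(float(members.mean()), 2))   # high
print("mean confidence on unseen records:  ", round(float(outsiders.mean()), 2)) # ~0.5
```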
Allan (06:49):
Aaron Powell Wait, hang
on.
So if I make my AI too good,too specific, the finished
product could essentially leakthe private information that
went into training it.
How does that happen if thedata was supposed to be
protected?
Ida (07:02):
Well, the leakage often happens when attackers cleverly combine information they get from the model's outputs with other data they might have access to, maybe public records, other databases.
These are called linkage attacks.
But maybe the most sophisticated threat is something called a model inversion attack.
Allan (07:16):
Model inversion.
Ida (07:18):
Yeah.
This is where an attacker, just by carefully studying the final deployed AI model and how it responds to inputs, can actually start to reverse engineer and reconstruct significant chunks of the original private training data set.
It's like digital espionage, but targeting the AI's brain itself.
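
A rough sketch of the model inversion idea, under heavily simplified assumptions: treat a small logistic model as the deployed system, and let an attacker who can only query its confidence score nudge an input toward whatever the model considers a typical "class 1" record. The weights and the private profile below are entirely synthetic, and the attack here recovers the pattern of high versus low features rather than exact values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup, all synthetic: pretend a logistic model was trained elsewhere to
# recognise "class 1" records, whose private, characteristic feature profile is:
private_profile = rng.uniform(0.2, 0.9, size=8)
w = 6.0 * (private_profile - 0.5)        # stand-in for the trained weights
b = -float(w @ private_profile) / 2

def query_model(x):
    # The only thing the attacker gets: the deployed model's confidence P(class 1 | x).
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Model inversion: start from a blank input and climb the confidence surface,
# so the input drifts toward what the model "believes" class 1 looks like.
x = np.full(8, 0.5)
for _ in range(500):
    grad = (1.0 - query_model(x)) * w    # gradient of log P(class 1 | x) for a logistic model
    x = np.clip(x + 0.01 * grad, 0.0, 1.0)

# Exact values saturate at the clip bounds, but the recovered high/low pattern
# matches the private profile the model encodes.
print("reconstructed (high/low):  ", (x > 0.5).astype(int))
print("private profile (high/low):", (private_profile > 0.5).astype(int))
```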
Allan (07:35):
That genuinely sounds like some kind of tech arms race.
Attacks and defenses.
But you mentioned solutions earlier.
If those are the attacks, what's the defense?
This brings us to privacy-enhancing technologies, right?
PETs.
Ida (07:48):
PETs, exactly.
They really are your best technical line of defense in this 2030 scenario we're painting.
PETs are basically a whole suite of cryptographic and statistical tools.
They're designed to let organizations get the maximum benefit from using data, for analysis, for research, computation, whatever, while drastically minimizing the risk that any information about specific individuals gets
(08:10):
disclosed.
Sometimes people even call them partnership-enhancing technologies, because they can build trust and allow collaboration, even between competitors, where mistrust would normally prevent it.
Allan (08:20):
Okay, well, let's make that concrete.
Can you walk us through maybe the core three PETs, starting with, uh, secure multi-party computation, SMPC?
Ida (08:29):
Absolutely.
So, secure multi-party computation, SMPC.
It's a cryptographic technique.
What it lets you do is have multiple parties, maybe organizations that don't trust each other, or even competing companies, run a joint analysis on their combined data without any of them ever having to reveal their private individual data inputs to anyone else.
(08:49):
They compute in the blind, so to speak.
Allan (08:52):
And there's a brilliant real-world example of this, isn't there?
With the smart meter privacy issue in the Netherlands.
Ida (08:58):
That's right.
They used SMPC to tackle exactly that problem.
They ran a pilot program where they could calculate the total energy usage and the average usage across six neighboring houses.
The energy grid operator got the aggregate number they needed for planning, but the analysts who actually ran the SMPC calculation never saw the private energy consumption data for any single household.
(09:19):
That's real privacy by design in action.
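
This isn't the Dutch pilot's actual protocol, but a minimal sketch of the additive secret-sharing idea behind that kind of SMPC aggregation: each household splits its reading into random-looking shares, no single server ever sees a real value, and yet the neighborhood total comes out exactly right. All readings are invented.

```python
import random

PRIME = 2_147_483_647  # all arithmetic is done modulo a large prime

def share(secret, n_parties):
    """Split a secret reading into n random-looking shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

# Six households' half-hourly readings in watt-hours (invented numbers).
readings = [412, 95, 730, 251, 0, 388]
N_PARTIES = 3  # e.g. three independent computation servers

# Each household splits its reading and sends one share to each server.
shares_per_server = [[] for _ in range(N_PARTIES)]
for r in readings:
    for server, s in enumerate(share(r, N_PARTIES)):
        shares_per_server[server].append(s)

# Each server sums the shares it received; each partial sum on its own is just noise.
partial_sums = [sum(col) % PRIME for col in shares_per_server]

# Only when the partial sums are combined does the neighborhood total appear.
total = sum(partial_sums) % PRIME
print("aggregate consumption:", total, "Wh")        # 1876
print("actual sum for comparison:", sum(readings))  # 1876
assert total == sum(readings) % PRIME
```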
Allan (09:21):
Brilliant.
Okay, what's next?
Differential privacy.
DP.
Ida (09:25):
Differential privacy, DP.
This one's more of a statistical technique rather than purely cryptographic.
How it works is by deliberately adding a carefully measured amount of noise, think of it as random alteration, to the result of a statistical query or analysis.
This added noise makes it impossible to know for sure whether any one individual's data was included in the
(09:46):
computation, or what their specific contribution was.
And the cool part is the data controller can actually mathematically quantify the level of privacy protection they're providing, using a metric called the privacy budget.
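
A minimal sketch of the standard Laplace mechanism that differential privacy builds on, assuming a simple counting query with sensitivity 1; epsilon plays the role of the privacy budget mentioned here. The household data is invented.

```python
import numpy as np

rng = np.random.default_rng(42)

def dp_count(values, predicate, epsilon):
    """Differentially private count: the true count plus Laplace noise.

    A counting query has sensitivity 1 (adding or removing one person changes
    the answer by at most 1), so the noise scale is 1 / epsilon.
    """
    true_count = sum(predicate(v) for v in values)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Invented household data: daily energy use in kWh.
daily_kwh = rng.normal(loc=9.0, scale=3.0, size=1000)
heavy_user = lambda kwh: kwh > 12.0

for epsilon in (0.1, 1.0, 10.0):  # smaller epsilon = stronger privacy, noisier answers
    answers = [dp_count(daily_kwh, heavy_user, epsilon) for _ in range(3)]
    print(f"epsilon={epsilon:>4}: noisy counts = {[round(a, 1) for a in answers]}")

print("true count of heavy users:", int(sum(heavy_user(v) for v in daily_kwh)))
```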
Allan (09:57):
Got it.
So DP protects the output of the analysis, makes it useful for statistics but individually deniable.
Okay, last one.
Federated learning, FL.
Ida (10:06):
Right, federated learning, or FL.
This is becoming really important in fields that deal with distributed, highly sensitive data that you really don't want to centralize.
Think healthcare research across different hospitals, or maybe fraud detection across different banks.
FL lets you train a central AI model collectively, using data that stays put on remote servers, like on different hospital servers, for example.
(10:28):
The raw patient data never has to leave the local hospital's control.
Instead, updates to the model are sent back and aggregated.
So you get the benefit of training on diverse data, improving the model for everyone, but without the huge risk that comes from creating one giant centralized honeypot of sensitive information.
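
A bare-bones federated-averaging sketch, under toy assumptions: three "hospitals" each fit a small linear model on data that stays local, and only the weight vectors travel to a server that averages them. All data below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

# Three "hospitals", each holding private local data that, in a real deployment,
# never leaves the site. Here it's all synthetic, generated from one true model.
true_w = np.array([2.0, -1.0, 0.5])
hospitals = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    hospitals.append((X, y))

def local_update(w, X, y, lr=0.05, steps=20):
    """One client's contribution: a few gradient steps on its own data only."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Federated averaging: the server never sees X or y, only the returned weights.
global_w = np.zeros(3)
for _ in range(10):
    client_weights = [local_update(global_w, X, y) for X, y in hospitals]
    global_w = np.mean(client_weights, axis=0)

print("true model:   ", true_w)
print("federated fit:", global_w.round(3))
```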
Allan (10:46):
Okay, so the tech is there.
SMPC, DP, FL, these tools exist.
But you know, technology alone usually isn't enough, is it?
We need policy, we need laws, to really change the underlying incentives driving this whole data economy.
What does a comprehensive privacy-first legal approach actually look like, according to the sources?
Ida (11:03):
Well, it looks like legislation that tries to get right to the root cause of these surveillance harms.
And that really means tackling the business model itself.
One of the most crucial components that experts recommend for any strong modern privacy law is an outright prohibition on online behavioral advertising.
Allan (11:20):
Whoa, hold on.
To ban behavioral ads, wouldn't that completely break the free internet model that so much relies on?
Isn't that, like, an economic impossibility?
Ida (11:32):
Well, that's definitely the core tension, yes.
But the argument from privacy advocates is that behavioral advertising is the fundamental incentive driving companies to harvest such enormous quantities of personal data in the first place.
It's the engine of the surveillance economy.
So the idea is, if you remove that specific incentive, businesses would be forced to shift towards other models, maybe contextual advertising, which is based on the content of
(11:55):
the page, not the user's history, or perhaps more subscription models.
Proponents argue this would immediately deflate the surveillance economy and force companies to practice real data minimization.
Allan (12:05):
Data minimization, right,
that keeps coming up.
Only collecting the data that'sstrictly necessary to actually
provide the service the usersigned up for.
Exactly.
And mandating strong,unambiguous opt-in consent, none
of those confusing darkpatterns designed to trick you
into agreeing.
Ida (12:22):
Absolutely critical.
And speaking of fairness, another point our sources really stress is the need to ban pay-for-privacy schemes.
Allan (12:30):
What are those exactly?
Ida (12:31):
That's where a company might offer you a basic service that tracks you, but then says you can pay a premium fee for a version that respects your privacy more.
The argument against this is that privacy should be a fundamental right, not a luxury good that only wealthier people can afford.
Your consent to data processing has to be genuinely free, not coerced because the private option costs more or offers a
(12:54):
worse service.
Allan (12:55):
Okay, that makes sense.
Privacy shouldn't be up for sale.
And we should probably affirm here that users aren't completely powerless, even now.
Ida (13:02):
Yeah.
Allan (13:03):
Under laws like the GDPR and others, you do have established rights, don't you?
Ida (13:06):
You absolutely do.
Rights like the right to access the data companies hold on you, the right to port it somewhere else, the right to correct inaccuracies, and, crucially, the right to delete your data.
Allan (13:17):
But how effective are those rights if they're hard to enforce?
Ida (13:21):
That's the key question.
These rights can feel a bit like empty promises if there's no real mechanism for enforcement.
For users to truly feel like they have some control back, many experts argue they need what's called a private right of action.
Meaning?
Meaning the ability for individuals, or groups of individuals, to actually sue corporations directly when they violate these statutory privacy rights.
(13:43):
That's what gives the law real teeth, acts as a genuine deterrent, and provides a necessary check on corporate power.
So look, today we've really traced these two parallel worlds, haven't we?
On one hand, you have this world of increasing global digital dependence, largely fueled by relentless data harvesting.
And on the other hand, you have this emerging world of technical countermeasures, the PETs we discussed, and these
(14:04):
strong policy ideas designed to try and reclaim some individual and national autonomy.
The push and pull is happening right now.
Allan (14:11):
Okay, let's try and boil this down for you one last time.
The bottom line seems to be this: the technology does exist to protect you better.
SMPC, differential privacy, federated learning.
And the policy levers, like banning behavioral advertising or mandating data minimization, would fundamentally change the whole game, change the incentive for collecting your data in the first place.
(14:31):
But whether these countermeasures actually get widely adopted, whether these laws get passed, that seems to come down entirely to public awareness and, ultimately, user demand pushing for it.
And remember, when we talk about privacy in, say, 2030, maybe it's less about some single ominous Big Brother figure watching everything.
It's perhaps much more about this vast, complex marketplace
(14:54):
that's silently, constantly buying and selling tiny fractions of your attention, your habits, your preferences.
So when it comes to your privacy in this data economy, ignorance really isn't bliss.
It's just, well, it's just free data for someone else.
Now might be a good time to go check the privacy settings on your favorite app, or maybe see what your smart device manufacturer is really collecting.