
December 19, 2024 · 17 mins

Podcast Description: This episode delves into the EDPB's opinion on balancing personal data protection with AI innovation. We cover AI anonymity, legitimate interest for data use, and the consequences of unlawful data processing, offering insights for tech enthusiasts and privacy professionals navigating GDPR's role in AI development.

Podcast generated using Google NotebookLM!

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
[SPEAKER_00] (00:00):
All right, buckle up, everyone, because today we're diving headfirst into
AI models, and you guessed it, GDPR. A hot topic for sure. Especially if you're a system owner here in Norway, which I imagine a lot of you are.

[SPEAKER_01] (00:13):
Absolutely. This is something everyone needs to be thinking about, especially with all these new opinions and analyses coming out. Yeah,

[SPEAKER_00] (00:20):
and that's exactly what we're digging into today. We've got some fresh opinions from the European Data Protection Board. The EDPB. Right, the
EDPB. And some really interesting analysis from a Norwegian website, Santro.no. Both are focused on how we navigate this whole AI and data

[SPEAKER_01] (00:38):
protection landscape. Which can feel like a bit of a minefield sometimes, to be honest. Yeah, no kidding. Especially when you think

[SPEAKER_00] (00:44):
about all the cloud providers out there using AI in their solutions. It's a bit of a black box for a lot of people. So our mission today is to
unpack all of that, to give you, the listener, the knowledge you need to understand the potential pitfalls. And just as importantly, what you

[SPEAKER_01] (00:57):
need to be demanding from your cloud providers. Exactly. So let's jump right in. One of the things that really caught my eye was this idea that AI

[SPEAKER_00] (01:05):
models, even if they're not specifically designed to output personal data, can still hold on to it, in a way that could get you into trouble
with the GDPR. It's not enough to just say, oh, my AI isn't spitting out names and addresses, so I'm in the clear, right? It's not that simple.

[SPEAKER_01] (01:21):
Not quite. It's more nuanced than that. So how do we even begin to figure out if an AI model is truly anonymous then? Well, the EDPB is placing a lot
of emphasis on case-by-case assessment. Which makes sense, I guess, because every AI model is different, right? Exactly. They're saying
you need to consider the type of data used to train the model. The model's design, the context it's used in. Even the potential for a third party to

(01:46):
access and, you know, maybe manipulate the model. It's a lot to consider. It is. And one thing Santro.no points out is that the EDPB is
really hammering home the importance of documentation. So cloud providers can't just tell us to trust them, right? Nope. We need to see

[SPEAKER_00] (02:01):
the receipts. Exactly. Show me the workings. Prove it. Right. And they've actually laid out some specific criteria for evaluating these

[SPEAKER_01] (02:08):
anonymity claims. Okay. Like what kind of things are they looking for? Well, they're asking some pretty tough questions. Hit me. Like what
steps were taken to minimize the use of personal data during the development of the AI? Makes sense. Did they use any techniques like
differential privacy? Hold on. Differential privacy. Yeah. Not the most intuitive term. No. For those of us who aren't data scientists,

[SPEAKER_00] (02:33):
what does that even mean? It's essentially a way of adding a bit of noise to the data to make it harder to re-identify individuals. Think of it

[SPEAKER_01] (02:39):
like blurring faces in a photo. You can still see the crowd, but you can't make out any specific features. So you're protecting the individual's

[SPEAKER_00] (02:47):
privacy, but you can still use the data. Precisely. Interesting. So the EDPB is also looking at the design of the AI itself. Oh, absolutely. Are
there any safeguards built in to prevent personal data from leaking out? Yeah, that's a big one. They're pushing for rigorous testing.
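
To make the differential privacy idea above a little more concrete, here is a minimal Python sketch of the classic Laplace mechanism applied to a counting query. This is purely illustrative, not anything the EDPB prescribes; the function name, the toy data, and the epsilon value are our own.

import numpy as np

def dp_count(records, predicate, epsilon: float) -> float:
    """Answer 'how many records match?' with epsilon-differential privacy.

    A counting query has sensitivity 1: adding or removing one person
    changes the true count by at most 1, so Laplace noise with scale
    1/epsilon is enough for epsilon-DP on this query.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Smaller epsilon means more noise (stronger privacy); larger epsilon
# means more accuracy. A toy run:
ages = [34, 51, 29, 44, 62, 38]
print(dp_count(ages, lambda age: age >= 40, epsilon=0.5))

This is the blurred-photo analogy in code: the aggregate pattern (roughly half the group is over 40) survives the noise, but no single individual's answer can be pinned down from the output.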

[SPEAKER_01] (03:04):
Like, has this AI model been put through its paces? Has it been tested against all the known methods for extracting personal data? And how

[SPEAKER_00] (03:13):
well did it hold up? Exactly. This is crucial stuff for system owners here in Norway. Oh, for sure. Because if your cloud provider is using an
AI model that isn't compliant with GDPR, you're on the hook too, right? You're responsible. You got it. So you need to see that documentation,
those test results. Yep, transparency is paramount. And it's not just about seeing them. It's about understanding them. Right. Which brings
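
As an illustration of what that kind of testing can look like, here is a simplified Python sketch of a canary-style memorization probe, loosely in the spirit of published training-data-extraction attacks. The generate callable and the canary strings are hypothetical stand-ins; the EDPB does not mandate any specific test.

def probe_for_memorization(generate, canaries):
    """Check whether a model regurgitates known training strings.

    generate: hypothetical callable wrapping the model under test;
        takes a prompt string and returns a completion string.
    canaries: maps a prompt prefix to the sensitive suffix that
        appeared in the training data, e.g.
        {"Contact Kari Nordmann at": "kari.nordmann@example.no"}.
    """
    leaks = []
    for prefix, secret in canaries.items():
        completion = generate(prefix)
        if secret in completion:
            leaks.append(prefix)
    return leaks

A non-empty result means the model reproduces personal data from its training set verbatim, which is exactly the kind of finding, along with the mitigations, that a system owner should expect to see in a provider's test documentation.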

(03:36):
us to another big theme from the EDPB, transparency. A major one, yeah. Because AI, by its very nature, can be so complex and opaque, right? A

[SPEAKER_01] (03:45):
black box for a lot of people, even for people who work with it every day. So the EDPB is saying that controllers need to be extra clear about how

[SPEAKER_00] (03:53):
they're using personal data in their AI models. Absolutely. It's not enough to just meet those basic GDPR information requirements. Okay,
so transparency is key, but what about the legal basis for using this data in the first place? Ah, good question. Santro.no brings up
legitimate interest as one way to justify processing personal data under GDPR. But it doesn't sound like that's a get out of jail free card.

[SPEAKER_01] (04:20):
Not at all. And Santro.no does a great job of breaking down the EDPB's three-step test for proving legitimate interest. OK, walk me through

[SPEAKER_00] (04:27):
that. What are the steps? So first, you need to define what your legitimate interest actually is. OK, so you're saying, is it to improve
cybersecurity? Is it to offer better customer service? Whatever it is, it needs to be a legitimate interest and it needs to be clearly defined.

[SPEAKER_01] (04:41):
Got it. What's next? Then you have to prove that processing personal data is absolutely necessary to achieve that interest. And Santro.no

[SPEAKER_00] (04:49):
highlights a really important point from the EDPB here. What's that? They're saying controllers need to consider alternative, less
intrusive methods. Yes. So you don't automatically go to the most data-heavy approach. So it's not enough to prove that processing
personal data is necessary. You have to prove that it's the least intrusive way to get the job done. Exactly. And the EDPB actually gives

[SPEAKER_01] (05:12):
some good examples in their opinion. Oh, like what? Well, let's say you have a chatbot. If it's just for basic customer service, processing personal
data might be a bit overkill. Okay, yeah, I see your point. But if that chatbot is using personal data to analyze customer behavior to identify and

[SPEAKER_00] (05:30):
prevent fraud, then the justification for using personal data becomes much stronger. Exactly. So the context really matters a lot. Right. And

[SPEAKER_01] (05:38):
that brings us to the third and final step, the balancing test. OK, so this is where we weigh the potential benefits, the legitimate interest

[SPEAKER_00] (05:46):
against the potential risks to the individual's rights and freedoms. That's the idea. And it's not always easy. You really need to carefully

[SPEAKER_01] (05:53):
assess those risks. And then demonstrate that your legitimate interest actually outweighs them. Again, documentation is crucial.
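
As a rough sketch of what documenting that three-step test might look like in practice, here is a small Python record type. The field names are ours, not official EDPB terminology; the point is only that each step leaves a written trace you could show a supervisory authority.

from dataclasses import dataclass, field

@dataclass
class LegitimateInterestAssessment:
    # Step 1: a clearly defined, lawful interest.
    interest: str
    # Step 2: why personal data is necessary, plus the less intrusive
    # alternatives that were considered and why they were rejected.
    necessity_rationale: str
    alternatives_considered: list[str] = field(default_factory=list)
    # Step 3: the balancing test against data subjects' rights.
    risks_to_data_subjects: list[str] = field(default_factory=list)
    balancing_outcome: str = ""

    def is_fully_documented(self) -> bool:
        # Every step must be recorded before processing begins.
        return all([
            self.interest,
            self.necessity_rationale,
            self.alternatives_considered,
            self.risks_to_data_subjects,
            self.balancing_outcome,
        ])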

[SPEAKER_00] (06:00):
Santro.no makes it very clear that the EDPB is serious about accountability. Very serious. You need to be able to show your work, so
to speak. Prove that you've gone through this whole process. Due diligence, essentially. Due diligence. You can't just assume you're
good to go. Right. This documentation is what's going to protect you if a supervisory authority comes knocking. Which brings us to another
interesting point from the EDPB. What's that? What happens if personal data was used unlawfully to develop an AI model? Ah, yes. The

[SPEAKER_01] (06:30):
consequences can be pretty serious. So let's say a cloud provider used personal data without proper consent to train their AI model. What

[SPEAKER_00] (06:41):
happens then? Well, the EDPB outlines three different scenarios. And in each scenario, there can be consequences for both the company that

[SPEAKER_01] (06:49):
developed the model and the company that's using it. So even if I, as a system owner, didn't develop the model myself, I could still be held

[SPEAKER_00] (06:56):
liable. That's right. And even if the AI model is later anonymized, that initial unlawful use of data can still have repercussions. So it's not a
quick fix. You can't just anonymize it later and pretend it never happened. Yeah. Nope. That's why due diligence is so crucial for system

[SPEAKER_01] (07:12):
owners, especially here in Norway. You need to know where your data is coming from and how it's being used, even if you didn't develop the model
yourself. And we can't forget that supervisory authorities have a lot of power. The Norwegian Data Protection Authority can investigate and

[SPEAKER_00] (07:27):
enforce the GDPR here in Norway. They absolutely can. And they have a range of tools at their disposal, like fines, limitations on

[SPEAKER_01] (07:35):
processing, even data erasure. So you don't want to mess around with this stuff. No, you really don't. And this brings us back to that

[SPEAKER_00] (07:42):
all-important point about transparency. Which is key. As a system owner, you need to demand that level of transparency from your cloud
provider. 100%. Are they being upfront about how they're using data in their AI models? Can they provide you with that documentation, those
test results? You have to ask those hard questions. Don't be shy about it. Absolutely. Because as Santro.no points out, it all comes down to

(08:06):
ethical and responsible AI development. Couldn't agree more. It's about protecting people's rights and it's about building trust.

[SPEAKER_01] (08:13):
Right. If data is the new oil, we need to make sure we're extracting it responsibly and ethically. And that responsibility falls on all of us,

[SPEAKER_00] (08:21):
not just the big tech companies. As system owners, as users, as citizens, we all have a role to play in shaping the future of AI. And that,

[SPEAKER_01] (08:31):
my friends, is something to think about.

[SPEAKER_00] (08:36):
Welcome back. We've been talking a lot about transparency and accountability in AI development, but let's shift gears a bit and talk
about something that often gets overlooked in these conversations. What's that? Data subject rights. What happens when someone wants to
know how their information is being used in all this AI magic? Right, right. We've been focusing on the technical and legal stuff, but we

[SPEAKER_01] (08:55):
can't forget about the individuals whose data is actually being used. Exactly. The GDPR gives people the right to access their data, correct

[SPEAKER_00] (09:03):
it, even have it deleted. But how do those rights play out in the world of complex AI systems? Yeah, that's a good question. Imagine trying to

[SPEAKER_01] (09:11):
explain to someone how their data was used to train some deep learning algorithm. It's not exactly dinner table conversation, is it? Not

[SPEAKER_00] (09:19):
really. Well, you see, your browsing history was fed into a neural network with 10 million parameters and... Eyes would glaze over before
you even finish the sentence. Totally. So what's the solution here? Well, the EDPB is stressing the need for clear, accessible

[SPEAKER_01] (09:33):
explanations. Okay, so meeting the basic GDPR information requirements isn't enough. Not in this case. Controllers need to go
further. They need to provide user-friendly explanations of how AI is processing personal data. This makes me think about all the data that's

[SPEAKER_00] (09:50):
often used to train these AI models, especially the stuff that's scraped from the Web. Oh, yeah. Publicly available data. Yeah. I mean,
there's tons of it out there. It feels like there's a bit of a gray area when it comes to using it. There is. And the EDPB actually addresses this

[SPEAKER_01] (10:04):
directly. Well, they do. Yeah. They're warning controllers against relying too heavily on what's called Article 14(5)(b) of the GDPR. OK.

[SPEAKER_00] (10:12):
And remind me, what is that? It's basically an exception to those information requirements in certain cases. So just because the data is
publicly available doesn't mean you can automatically use it to train your AI without informing the individual. Not necessarily. You still

[SPEAKER_01] (10:27):
need to have a strong justification for using it. And of course, document it. Of course, document everything. The EDPB is essentially
saying, don't try to hide behind this exception. Be upfront about your data practices. Exactly. And this goes back to our main goal today,

[SPEAKER_00] (10:42):
which is to give you, the system owner, the knowledge you need to have these conversations with your cloud providers. You need to be

[SPEAKER_01] (10:48):
empowered to push back and demand transparency. Ask those tough questions. Because remember, those supervisory authorities have a
lot of power. The Norwegian Data Protection Authority can investigate and enforce the GDPR right here in Norway. You don't want to be caught off

[SPEAKER_00] (11:04):
guard if they come knocking. Nope. Santro.no is making a similar point. Understanding these complexities is crucial, especially when you're
dealing with cloud providers that are using AI. It all comes down to asking the right questions. Where is your data coming from? How are you
making sure that AI model is compliant? Are you respecting data subject rights? You got to have these conversations. Because transparency is

(11:26):
what builds trust. Absolutely. And as AI becomes more and more integrated into our lives, that trust is only going to become more

[SPEAKER_01] (11:33):
important. OK, so we've talked about transparency, but the EDPB's opinion also digs into how the context of a situation can actually

[SPEAKER_00] (11:42):
change our understanding of legitimate interest. Ah, yes. This is interesting. They're saying the way personal data is used in AI can
really affect whether or not something can be considered a legitimate interest. It's not always black and white. Can you give me an example of
how context changes things? Sure. Let's say you're using an AI model to analyze personal data, but it's for public safety reasons. Okay. On the

(12:05):
surface, that sounds like a pretty legitimate interest. It does, right. But what if that model was trained on data that was collected

[SPEAKER_01] (12:12):
without people's knowledge or consent? Then it becomes less clear-cut. Much less. So the context really matters. The EDPB is urging
controllers to really think about all these contextual factors when they're doing those legitimate interest assessments we talked about.

[SPEAKER_00] (12:27):
So it's not enough for me as a system owner to just know that my cloud provider has identified a legitimate interest. Right. I need to
understand the context of how that data is actually being used. Exactly. And you also need to think about potential future uses of that

[SPEAKER_01] (12:42):
data. AI models can be dynamic. Their purpose can change over time. Right. And this is where things get really interesting. The EDPB

[SPEAKER_00] (12:50):
actually talks about data subjects' reasonable expectations. Oh, yes. They're saying controllers need to think about whether a data
subject would reasonably expect their data to be used in a certain way. That's a tricky one, especially with AI, right? The way these models use

[SPEAKER_01] (13:05):
data can be so complex. It's hard for the average person to understand, let alone predict, how their data might be used. For sure. So how do you

[SPEAKER_00] (13:13):
even figure out what's reasonable? Isn't that completely subjective? It is, to an extent. But the EDPB does give some guidance. They point to a few

[SPEAKER_01] (13:23):
factors that controllers should consider. OK, what are they? Like, was the data publicly available? What's the relationship between the data
subject and the controller? Where did the data come from in the first place? So if the data was scraped from public websites, the person might

[SPEAKER_00] (13:39):
reasonably expect it to be used for research or analysis, but they might not expect it to be used to train a commercial AI model that's going to
target them with ads. Exactly. And again, this all ties back to transparency. If you're upfront about how you're collecting and using
data, it's more likely that a person's expectations will be considered reasonable. Right. Transparency, accountability, due diligence.

(14:03):
It's a lot to keep in mind. It is. But there's a reason for all of it. Yeah. These are the principles that help ensure AI is developed and used

[SPEAKER_01] (14:10):
responsibly and ethically. That's what we all want at the end of the day, right? Absolutely. An AI-powered future that benefits everyone by

[SPEAKER_00] (14:16):
respecting our fundamental rights. All right, we're back for the final part of our deep dive into AI models and GDPR. We've covered a lot of
ground, but before we sign off, I really want to zero in on what all this means for system owners, specifically here in Norway. Great idea. And

[SPEAKER_01] (14:33):
it's important to remember that while this EDPB opinion was sparked by a question from the Irish Data Protection Authority, its implications
extend far beyond Ireland. So what applies there applies here. Absolutely. We're all under the same GDPR umbrella, remember? Right.

[SPEAKER_00] (14:49):
So we're not operating in some kind of Norwegian bubble here. The same standards and expectations apply. Exactly. Even when we're talking

[SPEAKER_01] (14:55):
about these global cloud providers and the AI solutions they're offering. And this is something that Santro.no really emphasizes.

[SPEAKER_00] (15:02):
They're basically saying, don't just take your cloud provider's word for it. Right. Ask for proof. See those test results. Exactly. Because
at the end of the day, the responsibility falls on you. The system owner. To make sure that any processing of personal data within your system is
lawful. You can't just outsource that responsibility to a cloud provider and wipe your hands clean. No. Which brings us to another

(15:24):
important point for system owners in Norway. You need to understand the Norwegian legal landscape. Oh, absolutely. While the GDPR provides

[SPEAKER_01] (15:32):
the big framework, there might be more specific national laws or guidelines you need to be aware of. So it's not enough to know GDPR like

[SPEAKER_00] (15:40):
the back of your hand. Nope. You got to know how these regulations are interpreted and applied here in Norway. Because the Norwegian Data
Protection Authority has teeth, right? Oh, yeah. They can investigate and enforce those regulations. They'll hold you accountable. This
makes having a strong, transparent relationship with your cloud provider so important. It's essential. You need to be able to have those

[SPEAKER_01] (16:01):
open and honest conversations with them. You need to feel confident they take data protection just as seriously as you do. And that
confidence comes from transparency. Can they give you the documentation you need? Are they willing to answer your questions,
even the tough ones? Those are the things you got to consider when you're choosing a cloud provider. Don't be afraid to ask. Santro.no really

[SPEAKER_00] (16:23):
stresses the importance of clear communication here. Oh, yeah. Communication is key. They say don't be afraid to tell your cloud
provider what you need. Exactly. Tell them what you expect when it comes to compliance and transparency. Be proactive. Don't wait for a problem
to crop up. Set those expectations right from the start. And remember, this isn't a one-time thing. The world of AI is always evolving. The

(16:47):
regulations are evolving right along with it. It's a constant process of learning and adapting. Well, that's why we do these deep dives. We're

[SPEAKER_01] (16:54):
here to help you navigate this crazy, complex world. Exactly. So as we wrap up, I think the big takeaway here is that understanding this whole

[SPEAKER_00] (17:03):
interplay between AI models and GDPR, it's not just a one-time task. It's an ongoing responsibility. Stay informed. Ask those tough
questions. Demand that transparency from your providers. Because ethical and responsible AI development, it's about more than just
ticking boxes and avoiding fines. It's about building a future where AI benefits everyone. While respecting our fundamental rights.

[SPEAKER_01] (17:26):
Couldn't have said it better myself. Well, that's all the time we have for today. A big thank you to Santro.no for their insights. And to you,

[SPEAKER_00] (17:33):
our listeners, for joining us on this deep dive. Until next time, stay curious and stay informed.