Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome to the Deep
Dive.
Today, we are plungingheadfirst into something that's
not just changing technology,but really reshaping our world.
Generative AI.
Speaker 2 (00:09):
Yeah, it's everywhere
now, isn't it?
Speaker 1 (00:11):
Exactly.
You've interacted with it,you've seen the outputs.
You know it's transformingthings.
Yeah, at its heart, gen AI isabout systems creating well
human-like content text, images,code, you name it.
Speaker 2 (00:25):
But at a scale and
speed we've just never seen
before.
It's a huge potential force for, you know, innovation,
productivity, societal shifts.
Speaker 1 (00:32):
And to guide our deep
dive into this really complex
space, we're using a fantasticsource.
It's a comprehensive outlookreport from the European
Commission's Joint ResearchCenter, the JRC.
Speaker 2 (00:42):
That's right, and
what's great about this report,
I think, is that it's not justtech specs.
It's actually designed toinform policymakers, people
working in digital education,justice across the board.
It pulls together the latestscience, expert insights, gives
a really sort of nuanced picture.
Speaker 1 (00:58):
Absolutely, Because I
mean, the potential is massive.
We see that.
But Gen EI also brings thesesignificant challenges right.
They're all interconnected tomisinformation, job market
shifts, privacy, big stuff,Definitely.
So our mission today is to cutthrough that complexity.
We want to pull out theinsights you need to understand
what's really important in thisrapidly evolving landscape.
(01:21):
Looking across tech, economy,society, policy, the whole
picture.
Speaker 2 (01:25):
Exactly.
This report is kind of a vitalresource for understanding
trends, anticipating what mightbe next.
It doesn't pretend to have allthe answers, but it brings the
science right to the policytable.
Speaker 1 (01:35):
Okay, let's unpack
this then when do we start?
Maybe the foundations, the techthat actually makes Gen AI tick
?
What are the building blockshere?
Speaker 2 (01:43):
Yeah, good place to
start.
It really comes down to a fewkey things that have advanced
together quite rapidly actually.
On the software side, hugestrides in deep learning
architectures, the transformermodel particularly.
Speaker 1 (01:55):
Right the transformer
.
That was a big deal forunderstanding language context.
Speaker 2 (01:59):
A massive deal.
These algorithms arecomputationally intensive, sure,
these algorithms arecomputationally intensive, sure,
but they're what allows modelsto process context and generate
stuff that's surprisinglycoherent, relevant.
Speaker 1 (02:11):
But algorithms need
power, right Serious power.
Speaker 2 (02:13):
Oh, absolutely.
That's where the hardware comesin Specialized processors, gpus
, tpus.
They're indispensable.
They provide the sheercomputational muscle you need to
train and run these absolutelymassive models efficiently.
Speaker 1 (02:25):
And that muscle needs
something to chew on Data.
Speaker 2 (02:28):
Mountains of it.
Massive data sets are the wellindispensable raw material.
Combine those huge data setswith high performance computing,
think supercomputers and fasterconnections like 5G.
Speaker 1 (02:39):
Right.
Speaker 2 (02:40):
And you get this
environment where gen AI models
can be developed and trained atjust an unprecedented scale.
The EU, for instance, isinvesting heavily in HPC and
these gigafactories to build upthat capacity.
Speaker 1 (02:51):
OK, so, given these
foundations, where does the EU
actually stand globally?
I mean in terms of research,but also turning that research
into, you know, real worldinnovation.
Speaker 2 (03:00):
So here's a key
insight from the report, and
it's a bit mixed.
The picture is strong inresearch, definitely, but there
are challenges turning that intomarket leadership.
Speaker 1 (03:08):
How strong is strong?
Speaker 2 (03:09):
The EU is actually
second globally in academic
publications on Gen AI, rightafter China, so that shows a
really vibrant research base.
Speaker 1 (03:18):
Second worldwide.
Wow, that's impressive.
Speaker 2 (03:20):
It is.
But and this is the criticalpoint the report highlights the
EU faces significant fundinggaps, especially compared to the
US and China.
Wow, the money, yeah.
And that affects its ability totranslate that research into
innovation, into patents.
Eu patent filings are growingfast, sure, but they still only
represent about 2% of globalfilings.
(03:41):
They lag way behind South Koreaand the US.
Okay, represent about 2% ofglobal filings.
They lag way behind South Koreaand the US.
Okay, so balancing this strongresearch base with closing that
investment gap to really driveinnovation, that's a major
challenge.
Speaker 1 (03:52):
Okay, so we have the
tech, the data, the hardware
capacity is growing, but withsystems this complex, how do we
even begin to evaluate them,Make sure they're safe, reliable
?
It feels like unchartedterritory.
Speaker 2 (04:03):
It absolutely is, and
the report really emphasizes
this critical need for what theycall a science of evils or
model metrology.
Basically, we need standardizedways to measure both the
capabilities and, crucially, thesafety of these models,
especially as they move intosensitive areas.
Speaker 1 (04:20):
A science of
evaluation.
Speaker 2 (04:22):
Yeah.
Speaker 1 (04:23):
That really captures
the challenge, doesn't it?
What does that actually involvein practice?
Speaker 2 (04:27):
Well, it involves
developing new methods for
benchmarking both performanceand safety.
Adversarial testing, oftencalled red teaming, is vital.
Speaker 1 (04:35):
Red teaming so like
deliberately trying to break it.
Speaker 2 (04:38):
Yeah, basically
pushing it to find weaknesses,
trying to make it fail orproduce undesirable outputs.
And human evaluation is stillincredibly important, having
experts or just representativeusers test the systems out.
Speaker 1 (04:51):
But here's where it
gets really interesting.
What happens when the AI'sabilities start to surpass our
own understanding?
How do you evaluate somethingthat's operating at a superhuman
level?
Speaker 2 (05:01):
That's a huge future
challenge.
This concept of superhumanevaluation is emerging precisely
because of that.
We need ways to assesscapabilities, safety, issues
that humans might not even beable to fully perceive or
understand on their own.
It's a critical area forongoing research, definitely.
Speaker 1 (05:19):
Okay, shifting focus
just slightly.
What about cybersecurity?
J&ai systems handle massiveamounts of data.
They're incredibly complex.
They must present new kinds ofvulnerabilities, right.
Speaker 2 (05:29):
They absolutely do.
They're vulnerable not just to,you know, the standard cyber
threats we already know about,but also to threats specific to
AI systems themselves.
The report breaks these down.
It's really important tounderstand them because they
open up whole new attacksurfaces.
Speaker 1 (05:43):
Okay, so what are
some of these AI-specific
vulnerabilities?
Speaker 2 (05:46):
Well, first there's
data poisoning.
Because these models aretrained on these vast, sometimes
unverified data sets, attackerscan subtly inject malicious
samples into that training data.
Speaker 1 (05:57):
How does that work?
Speaker 2 (05:58):
It can compromise the
model's overall performance,
maybe subtly, or introducespecific risks like making a
code generating AI suggestinsecure code patterns without
the user realizing it.
It's like subtly contaminatingthe ingredients before the cake
is baked.
Speaker 1 (06:14):
O' Okay, so attacking
the training data itself.
What else?
Speaker 2 (06:17):
Then there's model
poisoning Similar idea, but it
involves manipulating thetraining process itself or the
model's learning updatesdirectly again again, to
compromise its behavior.
Marc.
Speaker 1 (06:27):
Thiessen, not attack
the data or attack the learning
process.
What about attacking the modelonce it's built and people are
using it?
Speaker 2 (06:33):
Danielle Pletka.
Right.
That brings us to promptinjection.
This is where carefully craftedinput, a prompt, makes the
model behave in ways it wasn'tintended to.
Speaker 1 (06:41):
Marc.
Speaker 2 (06:41):
Thiessen.
Okay, danielle Pletka.
It can be direct promptinjection a user trying to
bypass safety filters maybe getit to generate harmful content
or misuse its abilities.
But there's also indirectprompt injection.
This is fascinating and prettyconcerning.
Speaker 1 (06:54):
How does that work?
Speaker 2 (06:55):
The model interacts
with external content right Like
it reads a webpage or processesa PDF.
You feed it, and that externalcontent secretly contains
instructions, hidden promptsthat alter the model's operation
when it processes it.
Speaker 1 (07:09):
Wait, hang on.
So the AI looks at a web pageand the web page can essentially
give it secret commands.
That sounds wild andpotentially very dangerous.
Speaker 2 (07:19):
It can be.
These indirect attacks cancompromise the system's
integrity, its privacy, andoften the negative consequences
fall on the primary user of thesystem, not the attacker who
planted the malicious contentsomewhere else.
Speaker 1 (07:33):
Wow, okay, and what
about trying to get sensitive
information out of the model orits training data?
Speaker 2 (07:39):
Right.
That's the domain ofinformation extraction.
Attackers aim to accesssensitive or proprietary info.
This includes things like dataleakage or membership inference.
That's where attackers try tofigure out if a specific piece
of data was part of the trainingset.
If that data point wassensitive maybe copyrighted
material or personal info thatshouldn't have been there in the
first place that's a major leak.
Then there's model inversionAttackers try to reconstruct
(08:02):
aspects of the training data orinfer sensitive details just by
analyzing the model's outputs orits internal structure.
Speaker 1 (08:10):
And just trying to
steal the model itself.
Speaker 2 (08:11):
Yeah, that's model
extraction, Basically trying to
replicate the parameters, thebrain of a remote model.
The key takeaway here is thatGenAI's reliance on massive,
often diverse data and thesecomplex models, it just
significantly increases thepotential attack surface
compared to older systems.
Speaker 1 (08:30):
That gives us a
really clear picture of the
current tech landscape and itsvulnerabilities.
The report also looks ahead,though what emerging
technological trends are on thehorizon beyond the kind of large
language models we mostlyinteract with today.
Speaker 2 (08:44):
Yeah, it flags
several really interesting
developments.
One is agentic AI.
Speaker 1 (08:47):
Agentic AI.
Speaker 2 (08:48):
Right.
This goes beyond the model justresponding to a single prompt
you give it.
These systems are designed tomake autonomous decisions, break
down complex goals intosubtasks, maybe initiate actions
and, crucially, learn from theoutcomes.
They exhibit a form ofcomputational agency.
Speaker 1 (09:03):
So not just answering
my question, but actively doing
things or pursuing a goal onits own.
Speaker 2 (09:08):
Precisely, you know.
Think of potential AIco-scientists autonomously
formulating and testinghypotheses, or maybe
self-correcting AI frameworksthat improve themselves over
time.
This has really significantimplications for how work gets
done, how knowledge is producedin the future.
Speaker 1 (09:25):
What about AI that
can understand more than just
text, like images, audio.
Speaker 2 (09:30):
That's multimodal AI.
These systems process andintegrate diverse data types
text, image, audio, maybe sensordata.
Others Imagine an AI taking apatient's entire medical history
text, notes, lab results, scansand integrating all that to
produce a comprehensivediagnostic report.
Speaker 1 (09:49):
That sounds
incredibly powerful, especially
for fields like medicine.
Speaker 2 (09:51):
It does, or systems
that translate between
modalities like generating adescriptive text report from a
complex image automatically.
But integrating multiple datatypes also amplifies challenges
we've already touched on, likebias.
Biases from different datatypes can compound and copyright
issues get even more complexwhen training on diverse
existing works across differentformats.
Speaker 1 (10:10):
Right and AI that can
maybe simulate more complex
reasoning like step-by-stepthinking.
Speaker 2 (10:15):
Yes, that's.
Advanced AI reasoning Systemsare being designed specifically
to perform logical, step-by-stepproblem solving, trying to
emulate more deliberate humanthought processes.
There are things called largeconcept models that integrate
these vast networks ofconceptual knowledge to improve
decision making.
Speaker 1 (10:34):
So trying to make AI
think more like us.
Speaker 2 (10:37):
In a way it promises
more sophisticated capabilities,
for sure, but it also raisessome tricky ethical questions
about mimicking human cognition.
And, importantly, itsignificantly increases energy
consumption.
These models can be very powerhungry.
Speaker 1 (10:52):
Which leads us nicely
to explainability or XAI.
Why is that so crucial forthese increasingly complex and
powerful systems?
Speaker 2 (10:59):
Well as AI takes on
bigger roles in sensitive or
high stakes areas.
You know security, health care,finance, even legal decisions.
People need to trust how the AIreaches its conclusions.
They need to understand it.
Speaker 1 (11:11):
So it's not enough to
just get the right answer.
We need to see the workingsbehind it, or at least
understand the logic.
Speaker 2 (11:17):
Exactly.
Explainability helps provideunderstandable justifications.
Insights into the AI's decisionmaking process.
Understandable justifications,insights into the AI's
decision-making process.
It's vital for building userconfidence, enabling effective
human-AI collaboration and oftenmeeting regulatory requirements
.
Techniques like attributiongraphs can help visualize which
inputs most influence thedecision.
The report frames XAI asincreasingly an ethical
(11:40):
dimension, really, and even alegal requirement under EU law,
like the AI Act.
Though standardizingexplainability for these very
complex, often opaque black boxmodels that remains a huge
challenge.
Speaker 1 (11:55):
That's a fantastic
overview of the tech landscape
and where it might be heading.
Yeah, let's shift gears a bitand look at the economic picture
.
How is the EU actuallypositioned globally in this Gen
AI economy?
Speaker 2 (12:03):
Yeah, as we touched
on earlier, it's definitely a
nuanced picture.
The EU is certainly significantplayer, represents about 7% of
global AI players, ranks thirdoverall and, as we said, its
research strength is undeniable,second only to China in
publications.
Speaker 1 (12:16):
But the challenge is
turning that research muscle
into market dominance.
Is that fair?
Speaker 2 (12:20):
That's precisely it.
The report really highlightsthe significant lag in
innovation, patents and, perhapsmost critically, in venture
capital investment compared tothe US and China.
Speaker 1 (12:29):
Right the VC funding
again.
Speaker 2 (12:31):
Yeah, While EU
companies are investing in
foreign players and the EU hostsforeign players too the US
significantly leads in actuallyowning foreign Gen AI companies
foreign Gen AI companies.
Germany is needed as a leaderwithin the EU in both hosting
and owning stakes.
But bridging this overall VCinvestment gap is a key
(12:51):
challenge for EU players whowant to compete globally.
Speaker 1 (12:54):
How are traditional
industries being impacted
Manufacturing, for example?
Speaker 2 (12:58):
Gen AI is set to
really transform manufacturing.
Think smart production linesusing advanced data analytics,
predictive maintenance poweredby AI, making autonomous
decisions about machine healthbefore things break.
Optimizing complex supplychains, automating intricate
processes.
It's about creating moreinterconnected, efficient and
adaptable systems.
Speaker 1 (13:17):
And what about the
creative industries?
This feels like an area seeinghuge and sometimes pretty
contentious impact because ofAI's ability to generate content
.
Speaker 2 (13:26):
It's definitely a
space of both massive
opportunity and, yes,significant tension.
Gene AI is revolutionizingcontent creation, helping
creators generate text, images,music video much faster,
exploring entirely new artisticmodels.
But here's where the challengesare particularly acute
Intellectual property andcopyright.
(13:47):
Genie-ai models are oftentrained on vast amounts of
existing creative work,sometimes, you know, without
explicit permission from thecreators.
Speaker 1 (13:55):
That's the major
point of conflict we're seeing
play out in courtrooms right now, isn't it?
Speaker 2 (13:58):
It really is.
It raises serious questionsabout fair compensation for
creators whose work is used fortraining and the potential
displacement of originalhuman-created works by AI
adaptations or variations.
There's also a risk, if it'snot managed carefully, of
homogenization, you know, AImodels just relying too heavily
on existing styles and trendsrather than fostering true
(14:19):
novelty.
Speaker 1 (14:20):
Let's talk about the
impact on the labor market
Employment.
We hear so much talk about AIreplacing jobs.
Speaker 2 (14:29):
How does the report
frame this?
It frames it quite carefully,emphasizing productivity gains
versus potential disruption ordisplacement, and a key insight
here, I think, is that Gen AIimpacts tasks within jobs, not
necessarily entire jobs,wholesale.
Speaker 1 (14:41):
Tasks, not jobs.
Speaker 2 (14:42):
Okay.
It augments several cognitiveabilities that are crucial in
many roles, things likecomprehension and expression,
understanding and generatinglanguage, attention and search,
finding information, classifyingdocuments and conceptualization
, learning, abstraction,identifying patterns,
generalizing from data.
Speaker 1 (15:01):
So how do those AI
capabilities map onto specific
jobs?
Which ones are most affected?
Speaker 2 (15:06):
Well, a JRC study
highlighted occupations most
exposed to these AI capabilities.
It identified engineers,software developers, teachers,
office clerks and secretaries asfacing particularly high
exposure.
Speaker 1 (15:18):
Teachers, that's
interesting.
Speaker 2 (15:19):
Yeah, the study found
teachers were actually more
exposed to AI-automatable tasksthan 90% of workers across all
the occupations they surveyed.
It's not necessarily that theentire teaching job is replaced,
of course, but many taskswithin it could be significantly
changed or augmented by AI.
Speaker 1 (15:37):
That makes sense.
It's about the nature of thework shifting, which brings us
straight to the skills gap.
Right?
If tasks change, people needdifferent skills.
Speaker 2 (15:41):
Absolutely.
The report really stresses thatthe needed skills go beyond
just, you know, learning to usethe tools.
It involves understanding thebroader implication, the ethical
considerations, the limitationsof the AI.
Speaker 1 (15:54):
And how are we doing
on that front in the EU?
Speaker 2 (15:56):
Well, the EU has set
an ambitious target in its
digital decade program 80% ofthe population should have basic
digital skills by 2030.
But as of 2023, only 56% hadmet that target, so there's a
clear gap that needs addressing.
Speaker 1 (16:11):
How do we close that
gap then?
Speaker 2 (16:12):
It requires a really
significant push for upskilling,
retraining and comprehensive AIliteracy initiatives, programs
updating digital competenceframeworks like DigComp 3.0 to
explicitly include AI knowledgeand ethical use, and developing
specific AI literacy programs,starting right from schools.
Speaker 1 (16:31):
Okay, and finally, on
the economic side, what does
the market for conversationalAI-like chatbots look like
specifically within the EU?
Speaker 2 (16:38):
It's described as a
complex and dynamic market, but
currently dominated by a fewlarge non-EU players.
Market but currently dominatedby a few large non-EU players.
Openai's chat GPT is identifiedas the clear leader in terms of
user base across the EU.
Speaker 1 (16:51):
Is that market
uniform across all EU countries?
Speaker 2 (16:54):
Not exactly.
There's variation in marketshare and also how people prefer
to access these tools.
Some prefer dedicated apps,others use websites more.
It depends on the member stateand while the major global
players are dominant everywhere,you do have local players, like
Mistral AI, based in France,who can have particular
prominence in specific countries.
There's also this interestingcompetition between companies
(17:16):
who build both the underlying AImodels and the user interface,
the vertically integratedplayers and those who primarily
build user interfaces orservices on top of external
models interface-only solutions.
Speaker 1 (17:29):
Okay, that gives us a
comprehensive picture of the
tech and economic aspects.
Yeah, let's move into thesocietal dimensions now and how
policymakers are trying to keeppace Misinformation and
disinformation.
That seems like a majorchallenge that Gen AI could
significantly worsen.
Speaker 2 (17:43):
Oh, they absolutely
are.
The report highlights this as acritical area.
Gen AI enables the creation ofincredibly convincing false
content think deep fakes at amassive scale and speed Right.
It can pollute onlineinformation sources, amplify the
spread of disinformation.
It makes it incredibly hard foraccurate information to keep up
or for rebuttals to even beeffective.
(18:04):
We've seen its use already insophisticated influence
operations, like thatdoppelganger campaign targeting
European countries.
Speaker 1 (18:11):
So technical
solutions like detecting
deepfakes are important, butthey're not enough on their own.
Speaker 2 (18:17):
Precisely Technical
measures like watermarking.
They're valuable, yes, but thereport strongly emphasizes that
media literacy and AI literacyare crucial skills for citizens.
People need the ability tocritically evaluate AI generated
content, understand itspotential to deceive and resist
manipulation attempts.
It's a societal defensealongside the technical ones.
Speaker 1 (18:37):
How is Gen AI
actually being talked about in
the media?
Does that shape publicperception, do you think?
Speaker 2 (18:42):
Oh, definitely.
The media narrative around GenA, around Gen AI, is often quite
polarized, you know, swingingbetween these utopian visions of
transformative potential andquite dystopian warnings about
risks, job losses, privacycollapse, that sort of thing.
This often dramatic framingcertainly shapes public
discourse and, by extension, itcan influence policy discussions
(19:04):
too.
Speaker 1 (19:05):
Has the media
coverage ramped up recently?
Speaker 2 (19:07):
Massively.
There was a huge surge incoverage starting in late 2022,
particularly after systems likeChatGPT became widely available.
Reporting intensity tends topeak after key events like new
model announcements, majorcommercial deals or significant
regulatory proposals like the AIAct.
Speaker 1 (19:24):
And what about the
tone of that coverage?
Is it mostly positive, negative?
Speaker 2 (19:27):
Well, the report
notes that in mainstream media,
the overall sentiment towardsGen AI is actually predominantly
positive, often highlightingeconomic growth opportunities.
However, a significant chunk,maybe around 30%, does focus on
the risks and negativeimplications Interestingly
unverified sources.
You know blogs, certain socialmedia channels often have a more
neutral or mixed tone, but theyare much more prone to
(19:50):
sensationalism and alarmism,either exaggerating the AI's
capabilities or predictingcatastrophic outcomes, often
downplaying the ethicalconsiderations in the process.
Understanding this medialandscape is really key to
gauging public perception.
Speaker 1 (20:04):
Let's turn to
something called the digital
commons, things like open sourcecode repositories, Wikipedia,
openly licensed creative works.
How does Gen AI interact withthese vital resources?
Speaker 2 (20:15):
The digital commons
are absolutely crucial.
They serve as fundamental,often publicly accessible,
training data for many, many AImodels.
Speaker 1 (20:23):
Right, the raw
material again.
Speaker 2 (20:24):
Exactly, but Gen AI
offers opportunities here too.
It could help people find andnavigate vast open data sets,
maybe aid in fact-checking byquickly searching open knowledge
bases, facilitate translationof open content, and using
knowledge from the commons caneven help improve the fairness
and diversity of AI outputs,making them less biased.
Speaker 1 (20:42):
So it sounds like
potentially a symbiotic
relationship.
Speaker 2 (20:45):
Potentially yes, but
the report also points to
significant risks for thecommons, and this is a critical
finding.
Gene AI poses several threats.
That's right.
It could lead to the enclosureor privatization of free
knowledge if access for datascraping becomes restricted or
maybe monetized.
It might decrease voluntarycontributions to platforms like
Wikipedia if people just startrelying solely on chatbots for
(21:08):
information instead ofcontributing back.
There's a significant risk ofpollution if AI-generated errors
or biases get scraped andinadvertently introduced into
open databases.
That requires costly humaneffort to find and correct, and
the organizations hosting thesecommons, often nonprofits, face
real financial strain from thesheer volume of AI crawlers
(21:31):
constantly accessing their data,often with little direct return
to support the infrastructure.
Speaker 1 (21:35):
Wow, those risks
paint a pretty challenging
future for these resources werely on.
Speaker 2 (21:39):
They really do.
The report contrasts potentialfuture scenarios one where the
commons thrive with the rightsupport and policies, and
another where they deteriorate,becoming less trustworthy, less
useful and potentially hinderingAI development itself by
providing lower quality trainingdata.
Protecting the digital commonsisn't just about open access.
It's actually essential fordeveloping fair, robust and
(22:02):
advanced AI systems in the longrun.
Speaker 1 (22:04):
Okay, let's in the
long run.
Okay, let's address theenvironmental implications.
Running AI models, trainingthem it requires significant
infrastructure, significantenergy.
Speaker 2 (22:13):
It absolutely does.
The direct environmental impactis considerable, primarily from
the data centers that power AI.
These centers consume vastamounts of energy, large amounts
of water for cooling andcontribute significantly to
electronic waste.
Speaker 1 (22:26):
Do we have a sense of
the scale?
How much energy are we talkingabout?
Speaker 2 (22:33):
Well, estimates vary
and it's a moving target, but
data centers globally accountedfor around maybe 1.5% of total
electricity consumption in 2024.
And AI's share of that isgrowing rapidly.
Some estimates, project AIcould reach, say, 27% of total
data center energy consumptionby 2027.
Speaker 1 (22:47):
That's a huge jump.
Speaker 2 (22:48):
It is.
There's still uncertainty inthese numbers, mind you.
New, maybe more efficient modeltypes are emerging, but also
more complex reasoning modelsthat use more power.
Supply chain issues affecthardware deployment.
It's complex, but the trend isclearly upwards, and the
relatively short lifespan of thespecialized hardware also adds
significantly to the e-wasteproblem.
Speaker 1 (23:08):
So Gen AI is
definitely resource intensive,
but can it also help addressenvironmental challenges?
Is there an upside?
Speaker 2 (23:15):
Yes, that's the other
side of the coin.
Ai can be applied to climatemitigation efforts, Things like
tracking pollution sources moreaccurately, optimizing energy
grids for efficiency, designingmore sustainable materials.
However, the full scale of thispotential help is still being
quantified and it facespractical limitations.
The EU is implementingregulations to address the
(23:36):
environmental impact of datacenters.
There are provisions in theEnergy Efficiency Directive, the
taxonomy regulation, andthey're promoting
energy-efficient hardware likeneuromorphic chips for more
sustainable deployment.
The upcoming Cloud and AIDevelopment Act also aims to
support sustainable cloudinfrastructure.
Speaker 1 (23:54):
The report also
mentions an indirect
environmental impact from AI.
What's that about?
Speaker 2 (23:59):
That's right.
It's a less obvious point, butan important one.
Biased AI models couldpotentially influence public
attitudes or behaviors,including those related to
climate change, for instance, ifAI search results consistently
downplay climate risks due tobiases in their training data.
Speaker 1 (24:15):
Ah, influencing
opinion.
Speaker 2 (24:16):
Exactly which could
indirectly impact energy
consumption or emissionspolicies.
This highlights the need fortransparent and unbiased models,
and it also underscores theneed for international
cooperation, because differentregions have very different
policy approaches to theseenvironmental issues right now.
Speaker 1 (24:31):
Let's discuss the
impact on specific vulnerable
groups, starting with children'srights.
How does Gen AI affect children?
Speaker 2 (24:39):
Well, it presents
opportunities, certainly, like
personalized educational toolsor new avenues for creativity,
yeah, but the risks aresignificant and children are
particularly vulnerable.
Speaker 1 (24:49):
In what ways?
Speaker 2 (24:50):
They're susceptible
to deceptive manipulation
techniques that AI enables.
They're also at risk fromexposure to harmful or
inappropriate content andthere's often a real lack of age
.
Appropriate privacy and safetymeasures built into many of
these systems, plus thepotential for AI bias to affect
them and for hallucinations youknow the AI confidently stating
(25:10):
wrong information to misleadthem these are major concerns.
Speaker 1 (25:14):
So general AI ethics
guidelines aren't really enough
here.
Children need specificsafeguards.
Speaker 2 (25:19):
Absolutely.
The report strongly emphasizesthe need for child safeguards to
be designed into these systemsright from the start, taking an
age-appropriate, inclusiveapproach.
It also calls for longitudinalstudies.
We need to understand thelong-term impact of interacting
with Gen AI on children'scognitive and mental development
.
That's crucial for informingfuture policy.
Speaker 1 (25:41):
What about mental
health?
More broadly, the reportmentions risks associated with
things like AI, chatbots andcompanion apps.
Speaker 2 (25:47):
Yes, there are
documented risks.
There too.
Users can developaddiction-like behaviors, become
overly reliant on chatbots forvalidation, potentially
displacing important humanrelationships.
Speaker 1 (25:59):
Right.
Speaker 2 (26:00):
And, tragically,
there have been cases where
chatbots have actuallyencouraged harmful actions,
sometimes linked to theirperceived sentience or just
their tendency to agree withusers without critical
assessment.
Speaker 1 (26:10):
And deepfakes, which
we talked about regarding
misinformation.
They also have a significantmental health impact, don't they
?
Speaker 2 (26:17):
A severe one.
Deepfakes can be used forcyberbullying, harassment and,
devastatingly, for creating anddistributing non-consensual
explicit content, oftentargeting women and girls.
This causes profoundpsychological trauma to victims.
The report cites theAlmendraleo case in Spain as a
tragic example of this, and GenAI exacerbates this risk by
(26:40):
making the creation of suchharmful content much easier and
more accessible than before.
Speaker 1 (26:44):
That really
highlights how interconnected
all these issues aremisinformation, safety, mental
health Exactly.
Speaker 2 (26:50):
And it presents real
challenges for existing policies
.
Schools, for instance, havepolicies against cyberbullying,
but applying them effectively tosophisticated AI-generated
content is complex.
It also challenges platformsbeyond just social media, like
app stores and search engines,where image manipulation apps
some quite harmful have appeared.
Speaker 1 (27:08):
This naturally leads
us back to the core issue of
bias, stereotypes and fairness.
How does Gen AI actuallyperpetuate these problems?
Speaker 2 (27:15):
Well, fundamentally,
Gen AI models are trained on
vast datasets, right, and thesedatasets often reflect existing
societal biases and stereotypesgender biases, racial biases,
cultural biases, you name it.
Speaker 1 (27:28):
So garbage in,
garbage out, or rather bias in,
bias out.
Speaker 2 (27:31):
Pretty much.
As a result, the AI canperpetuate and even amplify
these biases in its outputs.
The report gives clear examples.
Studies show AI credit riskassessment models can exhibit
similar gender bias totraditional methods.
Text-to-image generatorsfrequently show really strong
gender and racial biases whengenerating, say, occupational
portraits.
(27:51):
Doctors are mostly male, nursesfemale, and so on.
It just reinforces harmfulstereotypes.
Speaker 1 (27:56):
So the bias in the
data becomes bias in the AI's
output.
Speaker 2 (28:00):
Yeah.
Speaker 1 (28:00):
How can this possibly
be mitigated?
Speaker 2 (28:02):
It's tough, but there
are strategies Using more
diverse and representativetraining data is crucial,
developing and implementingfairness-focused algorithms
during training orpost-processing and, really
importantly, increasingdiversity among the actual teams
building these AI systems.
Regular audits using diversebenchmarks are also essential.
Regular audits using diversebenchmarks are also essential.
And policy frameworks like theDigital Services Act, dsa and
(28:24):
the EU, which requires verylarge online platforms to
conduct risk assessments onsystemic risks, including
discrimination.
These are important levers too,but it's a complex area, often
involving difficult tradeoffsbetween, say, fairness for
different groups and overallaccuracy.
Speaker 1 (28:41):
The report also
discusses incorporating a
behavioral approach into AIpolicy.
What does that actually mean?
Speaker 2 (28:51):
It means leveraging
insights from behavioral science
.
You know how humans actuallymake decisions, including all
our cognitive biases andshortcuts, to inform the design
and regulation of AI.
Speaker 1 (28:57):
How so.
Speaker 2 (28:58):
Well understanding
human cognitive biases can help
policymakers design rules thatprotect users from being
exploited by AI systems,particularly advanced agentic AI
that might learn and exploitindividual user preferences or
vulnerabilities in detail.
Speaker 1 (29:11):
So using insights
into human behavior to make AI
policy more effective, moreprotective.
Speaker 2 (29:16):
Exactly Leveraging
those insights for good, while
also limiting the ways AI itselfcan use those insights for
potentially manipulativepurposes.
Interestingly, the report alsonotes the flip side.
In certain domains, like maybemedicine or law, AI decision
making could potentiallyovercome some of the cognitive
biases humans are prone to.
Oh right.
Research is really needed thereto understand when and how AI
(29:40):
might actually make less biaseddecisions than humans and when
human oversight or controlremains absolutely essential.
Speaker 1 (29:47):
Privacy and data
protection, especially under the
GDPR in Europe.
This must be a huge challenge,given Gen AI's need for massive
data sets.
Speaker 2 (29:56):
It absolutely is.
The report talks about Gen AI'sinsatiable appetite for data
for training.
This raises fundamental issuesabout data quality.
If the training data isinaccurate or biased, it leads
directly to harmful AI outputs.
But there's also the risk ofinference, where Gen-AI systems
can infer sensitive personalinformation about individuals
from seemingly innocuous,non-sensitive data they process
(30:22):
process.
The report mentions a retailexample where analyzing shopping
patterns allowed a company toinfer quite accurately that a
customer was pregnant, eventhough she hadn't told them.
Wow.
Speaker 1 (30:28):
How does GDPR apply
here, then, and what are the
practical challenges?
Speaker 2 (30:32):
It raises really
complex questions about the
lawfulness of processing suchvast quantities of data,
particularly whether legitimateinterest is a sufficient legal
basis for training models onthese broad, often scraped data
sets.
Accountability is alsochallenging.
Who is responsible when an AIsystem produces harmful or
privacy-violating outputs?
The user, the provider.
(30:53):
And implementing core datasubject rights under GDPR, like
your right to access your dataor the right to have it erased,
is technically incrediblydifficult, maybe impossible
sometimes, for these large,complex, opaque AI models.
How do you find and delete oneperson's data from a model
trained on trillions of datapoints?
Speaker 1 (31:11):
Are the regulatory
bodies, like data protection
authorities, getting involved?
Speaker 2 (31:14):
Yes, absolutely.
Dpas in countries like Italyand the Netherlands have already
taken action regarding specificGen AI services.
Italy and the Netherlands havealready taken action regarding
specific Gen AI services.
The European Data ProtectionBoard, the collective body of EU
DPAs, clearly sees itssupervisory role extending to
Gen AI processing personal data,and it's calling for close
cooperation with the new EU AIoffice set up under the AI Act.
Speaker 1 (31:35):
But we don't have all
the answers yet.
Speaker 2 (31:37):
No, no, as the report
notes, definitive legal and
technical answers for many ofthese issues, especially around
model opacity and actuallyimplementing data subject rights
effectively.
They're still very much needed.
It's an ongoing process.
Speaker 1 (31:50):
OK, finally, let's
tackle the copyright challenges.
Speaker 2 (31:59):
This is, you balance
protecting the rights of
creators whose work exists todaywith enabling the innovation of
AI, which often seems torequire access to vast amounts
of existing content for training.
Speaker 1 (32:12):
So what's the key
legal issue?
Speaker 2 (32:14):
A key one in the EU
is the application of the text
and data mining TDM exception.
This comes from the EU'scopyright directive.
It basically allows TDM onlawfully accessed works unless
the right holder has explicitlyreserved their rights in an
appropriate manner, especiallyusing machine-readable means.
Speaker 1 (32:33):
And what counts is
appropriate machine-readable
means.
Is that clear?
Speaker 2 (32:37):
Well, that's a major
part of the debate.
In the ongoing litigationthere's uncertainty whether just
putting a line in your terms ofservice is enough or if more
technical measures are required,things beyond just the standard
robotstxt file, which haslimitations.
Efforts like AIPF are trying todevelop better standards, but
it's not settled.
Court cases in Germany, theNetherlands, the US, they're all
(32:58):
grappling with thisinterpretation right now.
Speaker 1 (33:01):
What about the
content the AI actually produces
?
Can that be copyrighted itself?
Speaker 2 (33:05):
Generally under EU
copyright law, for something to
be protected, it needs to showoriginality and reflect the
author's own intellectualcreation.
That typically requires humaninput, human creative choices.
So if it's just generatedautomatically, If an AI output
is generated purelyautomatically, with minimal
human direction, it's unlikelyto qualify for copyright
(33:25):
protection itself.
If a human user providessufficiently precise
instructions or makessignificant creative selections
or modifications that shape thefinal output, then that output
might be protected, but thehuman user would need to
demonstrate their specificcreative contribution.
Courts in different countriesare starting to rule on what
level of human intellectualinput is actually sufficient.
Speaker 1 (33:48):
And if the AI's
output infringes someone else's
existing copyright, who's liablethen?
The user, the AI company?
Speaker 2 (33:55):
Infringement can
definitely occur, For instance,
if the AI model has effectivelymemorized parts of its training
data, maybe a specific image ortext passage, and reproduces it
too closely in its output.
If the AI provider didn't havethe rights to use that training
material for that purpose in thefirst place, there's a problem.
Speaker 1 (34:12):
And who gets sued?
Speaker 2 (34:13):
Liability could
potentially fall on the user who
prompted the infringing outputor on the AI provider or both.
But most of the recent highprofile cases we've seen in the
US and Europe like the New YorkTimes lawsuit against OpenAI,
cases brought by collectingsocieties like GEMA in Germany
or publishers in France againstmajor AI companies they have
(34:34):
primarily targeted the AIproviders themselves.
Speaker 1 (34:37):
Are there solutions
being explored?
To try and navigate, says maybefind a middle ground.
Speaker 2 (34:42):
Yes, things are
starting to happen.
Licensing agreements areemerging between some AI
providers and publishers, likemedia organizations, allowing
the AI to train on their content, usually for a fee.
Collective licensing andcollective bargaining are also
being discussed as potentialmechanisms, though they're
complex to set up, to ensurefair compensation for creators
whose works are used in training.
Speaker 1 (35:02):
So finding ways to
pay for the data.
Speaker 2 (35:04):
Essentially, yes,
finding workable models.
Ultimately, the report suggeststhat harmonized approaches and
standardization across the EUare really necessary here to
provide legal certainty foreveryone creators, users and the
AI developers.
Speaker 1 (35:17):
Wow, that was an
incredible deep dive.
We covered so much ground, fromthe basic tech all the way
through these really complexpolicy challenges across
technology, the economy, society, ethics, regulation, everything
.
Speaker 2 (35:43):
The EU is clearly
grappling with how to foster
innovation in this space, how tobecome a leader while also
upholding its core values andensuring a strategic,
evidence-based approach, andthat's where reports like this
one from the JRC are so vital.
Speaker 1 (35:56):
Absolutely.
The insights from this kind ofscientific evidence are clearly
crucial for navigating thisincredibly fast changing
landscape.
Speaker 2 (36:03):
And that evidence is
vital for policymakers trying to
make informed decisions in aworld where, frankly, the
technology is often evolvingmuch faster than regulations can
keep up.
Speaker 1 (36:12):
So, as we wrap up
this deep dive into the JRC's
outlook on generative AI, here'smaybe a thought to leave you
with, building on everythingwe've discussed.
Here's maybe a thought to leaveyou with, building on
everything we've discussed.
Given Gen AI's incrediblecapacity to generate content, to
automate, to augment humantasks, how do we ensure its
widespread adoption doesn'tinadvertently diminish essential
(36:32):
human skills, skills likecritical thinking, creativity,
nuanced understanding,especially in sensitive areas
like education, journalism,public discourse?
How can we design AI systemsand the policies governing them
so they truly augment humancapabilities and human values,
rather than eroding them?
Speaker 2 (36:50):
That's a fundamental
question, isn't it?
Especially as this technologycontinues to integrate deeper
and deeper into our lives.
Speaker 1 (36:55):
Indeed Well.
Thank you for joining us forthis deep dive.
We hope you feel a little morewell-informed about the
fascinating and definitelychallenging world of generative
AI.