
September 19, 2025 26 mins

Artificial intelligence stands at a crossroads of breathtaking innovation and urgent need for responsible guardrails. Every breakthrough brings questions about safety, fairness, and accountability that can no longer be afterthoughts. The European Union has responded with the AI Act – the world's first comprehensive legal framework for artificial intelligence – and its General Purpose AI Code of Practice has already secured commitments from tech giants like OpenAI, Google, Microsoft, and Anthropic.

We unpack what this means for anyone building, deploying, or investing in AI systems. The EU's risk-based approach categorizes AI into four tiers, from banned practices (social scoring, emotion detection in workplaces) to high-risk applications requiring strict oversight (recruitment, medical devices) to systems needing basic transparency, with minimal-risk systems left largely unregulated. For general purpose AI models, key requirements include detailed documentation using specific templates, energy consumption reporting, comprehensive copyright compliance including respecting robots.txt opt-outs, and robust security measures.

The stakes couldn't be higher – violations can trigger fines up to €35 million or 7% of global annual turnover. This isn't just another compliance exercise; it represents a fundamental shift in how organizations must approach AI governance. We outline a practical roadmap for implementation, from urgent model inventories to establishing cross-functional AI risk councils and integrating these requirements into existing risk management frameworks aligned with standards like NIST AI RMF and ISO 42001.

Whether you're a CFO allocating budget for new compliance measures, a CRO assessing emerging risks, or a developer navigating technical requirements, this deep dive provides actionable insights to transform regulatory challenges into strategic advantages. The tension between rapid innovation and responsible deployment defines our AI future – understanding these new rules provides essential context for shaping that future wisely.



Don't forget to subscribe on your favorite podcast platform—whether it's Apple Podcasts, Spotify, or Amazon Music.

Please contact us directly at info@wheelhouseadvisors.com or feel free to connect with us on LinkedIn and X.com.

Visit www.therisktechjournal.com to learn more about the topics discussed in today's episode.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Sam Jones (00:00):
Welcome back to the Deep Dive.
So imagine you're steering the ship at an organization, right?
You're pushing the boundaries, with AI, innovating like crazy.
But then suddenly you're staring at this maze of new
regulation.

Ori Wellington (00:14):
Yeah, it's a real challenge.

Sam Jones (00:15):
It really feels like every single week there's some amazing new AI breakthrough, but all that excitement brings this really urgent need for guardrails, you know.

Ori Wellington (00:25):
Absolutely.
How do we manage all this power responsibly?
That's the core question.

Sam Jones (00:29):
Exactly, and today that's what we're diving deep into: the EU AI Act and specifically its General Purpose AI Code of Practice, or GPAI code for short.

Ori Wellington (00:41):
The EU's really laid down a significant marker here and, yeah, it's definitely making waves already.

Sam Jones (00:46):
It truly is.
And look, this isn't just about us reading some dry legal text.

Ori Wellington (00:50):
No, not at all.

Sam Jones (00:51):
It's about unpacking what this code actually means for you, listening, whether you're a developer actually building these things, or a business leader deploying AI, or maybe an investor scouting the next big opportunity.

Ori Wellington (01:01):
The implications really do stretch far and wide.

Sam Jones (01:04):
So our mission today is pretty clear: we want to take all the complex details of this EU AI code of practice, based on some solid expert analysis we've looked at, and just boil it down.

Ori Wellington (01:15):
Get to the core insights.

Sam Jones (01:16):
Right, give you the shortcut to being properly informed.
We'll highlight some surprising bits, the really crucial operational stuff you need to know, but without, hopefully, getting everyone bogged down in jargon. Makes sense?
So if you're building, deploying, investing in or honestly even just curious about AI, understanding these new rules isn't just ticking a compliance box.

(01:37):
It's really about having strategic foresight in this whole AI landscape.

Ori Wellington (01:41):
Couldn't agree more.
Shall we start laying the groundwork?

Sam Jones (01:44):
Let's do it.
So, the code. It sits within the bigger EU AI Act.
Where do we start with that?

Ori Wellington (01:50):
Okay, so the EU AI Act itself.
It officially entered into force August 1st 2024.
The goal is full applicability by August 2nd 2026.

Sam Jones (02:00):
Right, two years.

Ori Wellington (02:01):
But, and this is key, it's not just one big deadline way off in the future.
It's actually a very carefully phased rollout.
There are critical milestones, some of which have already passed or are coming up very quickly.

Sam Jones (02:11):
OK, so these aren't just dates to circle on a
calendar.
They're real deadlines with real consequences.

Ori Wellington (02:15):
Certain AI uses are just outright prohibited, and some basic AI literacy duties kicked in.
Okay, then, from August 2nd 2025, which, as you say, is practically upon us, that's when obligations for general purpose AI providers really start, things like new transparency

(02:37):
rules, copyright requirements.
They become active then.

Sam Jones (02:40):
Got it. And then, looking further ahead?

Ori Wellington (02:42):
Fast forward to August 2nd 2026.
That's when the rest of the act becomes fully applicable.
Got it, and a grace period after that? Makes sense. Now, you mentioned

(03:10):
the EU's approach is risk-based.
Yes, and that's really interesting.
Instead of a sort of one-size-fits-all rule, they've tried to tailor the regulations based on the actual potential harm an AI system could cause.
It's tiered.

Sam Jones (03:22):
Tiered how?
What's the top tier?

Ori Wellington (03:24):
Top tier is unacceptable risk.
These are AI practices justflat out banned.
Think things like governmentsocial scoring or untargeted
scraping of facial images tobuild databases, or using AI to
detect emotions in workplaces orschools.
Basically, stuff deemed tooinvasive or dangerous.

Sam Jones (03:42):
Okay, so those are just off the table, completely. Correct.
So what about AI that isn't banned outright but still carries, you know, significant risk?

Ori Wellington (03:50):
That's the high-risk category, and these systems face really strict requirements.
We're talking high data quality standards, very thorough documentation, traceability, mandatory human oversight, robustness, the works.

Sam Jones (04:04):
And what kind of AI falls into that high-risk bucket?

Ori Wellington (04:07):
Think AI used in critical infrastructure, energy grids, transport or medical devices, even things like recruitment software or credit scoring systems, stuff that could seriously impact someone's safety, livelihood or fundamental rights.

Sam Jones (04:19):
Right, makes sense.
Okay, so unacceptable, high risk.
What's next?

Ori Wellington (04:23):
Below high risk you have limited risk systems.

Sam Jones (04:25):
Yeah.

Ori Wellington (04:25):
Here the main thing is transparency.

Sam Jones (04:27):
Transparency meaning?

Ori Wellington (04:29):
Meaning you need to make it clear when someone's interacting with AI.
So a chatbot has to say it's a chatbot.
AI-generated content like deepfakes needs to be labeled.
It's about ensuring people aren't misled.
Gotcha.

Sam Jones (04:43):
And the lowest tier?

Ori Wellington (04:44):
That's minimal risk, and these systems are, for
the most part, unregulated.
The idea is to let innovation happen where the risks are
really negligible.

Sam Jones (04:53):
Okay, that tiered approach seems logical.
So let's zoom in now on the code of practice itself, this voluntary document, right? Published mid-2025.

Ori Wellington (05:00):
Exactly. Published July 10th 2025.
It's a voluntary general purpose AI code of practice, put together by 13 independent experts after a lot of stakeholder discussion.

Sam Jones (05:09):
And what's its main job, this voluntary code?

Ori Wellington (05:12):
Well, its core purpose is to give GPAI model providers a practical way to show they're complying with certain key parts of the AI Act, specifically Articles 53 and 55.
It acts as a sort of bridge until the official harmonized EU standards are fully developed.

Sam Jones (05:28):
So signing up helps companies how?
It basically streamlines things.
If you follow the code, your interactions with the central AI office should be smoother.
It reduces the administrative headache compared to, say, having to submit completely custom documentation every time to prove you're compliant.

Ori Wellington (05:47):
Okay, and this sounds important.
It's not a get-out-of-jail-free card, right?
It's not a legal safe harbor.

Sam Jones (05:52):
Absolutely crucial point.
It is not a legal safe harbor.
It doesn't automatically mean you are compliant or give you immunity.
It's more like a recognized, structured method to demonstrate your compliance efforts.
A guide, not a shield.

Ori Wellington (06:06):
Right, a way to show you're playing by the
expected rules.

Sam Jones (06:09):
Precisely, and to help with that, the commission
also put out some guidelines to clarify what counts as GPAI and, importantly, a mandatory template for summarizing your
training data publicly.

Ori Wellington (06:19):
A mandatory template.
Okay, that sounds pretty concrete.

Sam Jones (06:22):
It is.
It's a big step towards more transparency about what's actually gone into training these models.

Ori Wellington (06:26):
Interesting, and who's actually signed up to
this code so far?

Sam Jones (06:29):
Any big names?

Ori Wellington (06:30):
Oh yeah, quite a few heavy hitters: OpenAI, Google, Microsoft, Mistral, ServiceNow, Anthropic, IBM, Amazon, Cohere. They're all signatories.

Sam Jones (06:42):
Hmm, anyone holding out?

Ori Wellington (06:43):
Well, interestingly, xAI only signed the chapter on safety and security.

Sam Jones (06:48):
Oh, so what does that mean for them?

Ori Wellington (06:50):
It means they'll need to prove their compliance on transparency and copyright using other methods, which might be, you know, more work or less straightforward than just following the code structure for those parts.

Sam Jones (07:02):
That decision kind of hints at some underlying debate, doesn't it?
Is everyone happy with this code?

Ori Wellington (07:07):
Not universally.
No, there has been some pushback.
Groups like CCIA Europe, for example, have raised concerns about the burden, the timing, questioning if it's all proportionate, especially parts of the safety chapter.
They worry it might stifle innovation.

Sam Jones (07:20):
Yeah, I can see that tension.
Is it too much red tape or is it just the necessary price for
building trust and safety in AI?

Ori Wellington (07:26):
That's the million-dollar question, isn't it?
The EU perspective is clear: these guardrails are vital for public trust and preventing harm, which ultimately helps AI adoption.
But the industry concern about balancing compliance speed with innovation speed is also very real.

Sam Jones (07:42):
So the code is trying to sort of thread that needle,
provide a path.

Ori Wellington (07:47):
That's the idea: a clear, voluntary pathway forward in the interim.

Sam Jones (07:51):
Okay, so let's define terms.
What exactly counts as general purpose AI, or GPAI, under this
whole thing?

Ori Wellington (07:58):
Good question.
Basically, it's an AI model that shows significant generality, meaning it's pretty versatile, can be plugged into lots of different downstream systems and adapted for various tasks.
Sam Jones (08:09):
Is there a technical threshold?

Ori Wellington (08:10):
There's a practical indicator the commission suggests, yeah: training compute.
If a model took more than 10^23 FLOPs to train (that's a massive amount of computation), combined with having certain advanced capabilities like complex language understanding or generation, it's likely considered GPAI.
Ten to the twenty-third.

Sam Jones (08:26):
Wow, okay.
And then there's an even higher level: GPAI with systemic risk.

Ori Wellington (08:30):
That's right.
This is for the real frontiermodels.
A model is presumed to havesystemic risk if its training
compute hits 125 FLOPs, so ahundred times more compute than
the GPI indicator, or if thecommission designates it because
it has a similarly huge impact.

Sam Jones (08:48):
And if you build one of those?

Ori Wellington (08:49):
Then you have a strict notification duty.
You must tell the commission immediately (well, within two weeks anyway) when you hit that threshold, or even when you anticipate hitting it.
It's a mandatory heads-up.
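
To make those thresholds concrete, here is a minimal Python sketch of how a provider might flag a model against the 10^23 FLOPs GPAI indicator and the 10^25 FLOPs systemic-risk presumption just described. The 6 x parameters x training-tokens estimate is a common rule of thumb rather than anything mandated by the Act, and the function names and example figures are illustrative assumptions.

```python
# Illustrative sketch: thresholds from the discussion above; the FLOP estimate
# uses the common 6 * params * tokens heuristic (an assumption, not an
# EU-mandated method).

GPAI_INDICATOR_FLOPS = 1e23   # practical indicator for general purpose AI
SYSTEMIC_RISK_FLOPS = 1e25    # presumption of systemic risk; notification duty


def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate via the 6*N*D rule of thumb."""
    return 6.0 * n_parameters * n_training_tokens


def classify_model(n_parameters: float, n_training_tokens: float) -> dict:
    flops = estimate_training_flops(n_parameters, n_training_tokens)
    return {
        "estimated_flops": flops,
        "likely_gpai": flops >= GPAI_INDICATOR_FLOPS,
        "presumed_systemic_risk": flops >= SYSTEMIC_RISK_FLOPS,
        # Per the discussion, notify the commission within two weeks of hitting
        # (or anticipating) the systemic-risk threshold.
        "commission_notification_needed": flops >= SYSTEMIC_RISK_FLOPS,
    }


if __name__ == "__main__":
    # Example: a hypothetical 70B-parameter model trained on 15 trillion tokens.
    print(classify_model(70e9, 15e12))
```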

Sam Jones (08:58):
No wiggle room there.
What about open source AI?
Is there any kind of break for them?

Ori Wellington (09:08):
There is an open source exception.
Yes, it applies to some of the technical documentation duties in Article 53.
If you release your model under a free and open source license and you make the weights, architecture and usage info public, you might be exempt from those specific documentation requirements.

Sam Jones (09:20):
Ah, but there's a catch, I bet.

Ori Wellington (09:21):
There's a big catch too, actually.
This exception does not apply if the model has systemic risk and, crucially, it does not get you off the hook for copyright compliance or potential product liability.
Open source isn't a free pass on everything.

Sam Jones (09:35):
Got it.
So pulling this together, this voluntary code kind of seems like a useful roadmap for navigating the act, helps reduce some uncertainty, maybe.

Ori Wellington (09:45):
Exactly.
It provides a recognized way to approach compliance, which is valuable, but like we stress, understanding its limits, that it's not a legal shield, is absolutely key.
It's about building a defensible, transparent approach.
Sam Jones (09:58):
Okay, let's get down to the brass tacks then.
The code moves from principles to actual practical actions, doesn't it? Like documenting energy use, handling copyright?
Let's break this down, starting with transparency.

Ori Wellington (10:11):
Right. On transparency, providers need to be pretty meticulous.
They have to use this specific model documentation form.

Sam Jones (10:18):
And what goes in that form?

Ori Wellington (10:19):
A lot. Detailed specs of the model, characteristics of the training data used, what the model is intended for and, importantly, what it is not designed for, the out-of-scope uses, plus the compute power consumed during training and, this is quite notable, the energy consumption.
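
As a rough illustration of how a provider might capture those fields internally, here is a small Python sketch of a model documentation record. The structure and field names are our own shorthand for the items just listed, not the official model documentation form, which has its own template.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ModelDocumentationRecord:
    """Illustrative internal record mirroring the kinds of fields discussed:
    model specs, training-data characteristics, intended and out-of-scope uses,
    training compute and energy. Field names are ours, not the official form's."""
    model_name: str
    version: str
    architecture_summary: str
    training_data_characteristics: str
    intended_uses: List[str]
    out_of_scope_uses: List[str]
    training_compute_flops: Optional[float] = None
    energy_consumption_kwh: Optional[float] = None
    energy_estimation_method: Optional[str] = None  # disclose method if estimated
    last_updated: str = ""
    notes: List[str] = field(default_factory=list)


record = ModelDocumentationRecord(
    model_name="example-llm",
    version="1.0",
    architecture_summary="Decoder-only transformer, 7B parameters",
    training_data_characteristics="Licensed corpora plus filtered public web text",
    intended_uses=["text summarisation", "internal chat assistant"],
    out_of_scope_uses=["medical diagnosis", "biometric identification"],
    training_compute_flops=8.4e23,
    energy_consumption_kwh=None,
    energy_estimation_method="GPU-hours x average board power (to be finalised)",
    last_updated="2025-08-01",
)
print(record)
```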

Sam Jones (10:33):
Energy consumption.
That's interesting.
Why mandate that specifically?

Ori Wellington (10:36):
Well, it signals a broader focus beyond just function.
It forces consideration of the environmental footprint and, you know, looking ahead, it could potentially feed into future carbon pricing or green AI incentives.
It makes sustainability part of the performance picture.

Sam Jones (10:54):
Hmm, makes sense, and this documentation isn't a
one-off.

Ori Wellington (10:57):
No, it has to be kept up to date.
Yeah, and you need to be ready to share it with downstream developers who integrate your model and with the AI office, if they ask.
Though there are provisions to protect legitimate trade secrets, of course.

Sam Jones (11:09):
What if you don't know the exact energy figure?
Maybe for an older model?

Ori Wellington (11:13):
Estimations are allowed in that case, but you
have to be transparent about it.
You need to disclose the method you use for the estimate and point out any gaps in your data.
The key word is still transparency.
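
For teams that have to estimate rather than meter, one common approach is GPU-hours times average board power times a data-centre overhead factor (PUE). The sketch below illustrates that idea; the figures and the method itself are assumptions for illustration, and the obligation described above is simply to disclose whichever method you use and its gaps.

```python
def estimate_training_energy_kwh(
    gpu_hours: float,
    avg_gpu_power_watts: float = 700.0,  # assumed average board power draw
    pue: float = 1.2,                    # assumed data-centre overhead (PUE)
) -> dict:
    """Rough training-energy estimate: GPU-hours x average power x PUE.
    This is one illustrative method, not one mandated by the code; the key
    point from the discussion is to disclose the method and its gaps."""
    kwh = gpu_hours * (avg_gpu_power_watts / 1000.0) * pue
    return {
        "estimated_energy_kwh": kwh,
        "method": "GPU-hours x average GPU power x PUE",
        "known_gaps": [
            "excludes CPU, networking and storage energy",
            "assumes a constant average power draw",
        ],
    }


print(estimate_training_energy_kwh(gpu_hours=250_000))
```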

Sam Jones (11:23):
And supporting downstream users.

Ori Wellington (11:25):
Yeah, that's important too.
Providers need to give integrators good info on the model's capabilities, its limitations, how to integrate it safely.
And there's a clear point about fine-tuning: if someone downstream significantly modifies your model, they effectively become the provider for that modified version,

Sam Jones (11:44):
Right, passing the baton responsibly.
Okay, that covers transparency.
Now let's tackle the big one: copyright compliance. Always a thorny issue with AI.

Ori Wellington (11:53):
Indeed.
The code requires providers to have a solid internal copyright policy.
This needs to cover how they lawfully get training data, how they respect opt-outs, how they build safeguards into the model's outputs to try and prevent infringement, and how
they handle complaints.

Sam Jones (12:08):
And respecting opt-outs.
How specific does it get?

Ori Wellington (12:11):
Very specific.
It explicitly mentions respecting machine-readable opt-outs like the standard robots.txt file websites use.
If a site says don't crawl for AI training, you have to honor that when gathering web data.
That's a big operational change for many.
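
As a concrete illustration of honoring machine-readable opt-outs, here is a minimal Python sketch using the standard library's robots.txt parser. The crawler user-agent string is a hypothetical placeholder; in practice you would check whichever agents your crawler actually announces.

```python
from urllib import robotparser


def allowed_for_ai_training(site_root: str, url: str,
                            user_agent: str = "ExampleAITrainingBot") -> bool:
    """Check a site's robots.txt before fetching a page for training data.
    'ExampleAITrainingBot' is a hypothetical crawler name for illustration."""
    parser = robotparser.RobotFileParser()
    parser.set_url(site_root.rstrip("/") + "/robots.txt")
    try:
        parser.read()          # fetch and parse the site's robots.txt
    except OSError:
        return False           # be conservative if robots.txt is unreachable
    return parser.can_fetch(user_agent, url)


if __name__ == "__main__":
    print(allowed_for_ai_training("https://example.com",
                                  "https://example.com/articles/some-page"))
```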

Sam Jones (12:25):
Yeah, that sounds like it requires significant
technical adjustments.
What about summarizing the training data you mentioned?
A mandatory template?

Ori Wellington (12:32):
Yes, the mandatory template from the
commission.
Providers must publish a summary of the content used for training.
It needs to be detailed enough to actually help rights holders understand what might be in there.
Think identifying major datasets used, listing top domain names that were scraped, that kind of thing.
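
A toy sketch of the kind of aggregation that could feed such a summary, counting the most common source domains in a crawl manifest, might look like the following. The manifest and field choices are invented for illustration; the official summary must follow the commission's own template.

```python
from collections import Counter
from urllib.parse import urlparse


def top_source_domains(crawled_urls, n=10):
    """Aggregate crawled URLs into a ranked list of source domains,
    the kind of detail a public training-data summary might draw on."""
    domains = Counter(urlparse(u).netloc.lower() for u in crawled_urls if u)
    return domains.most_common(n)


# Hypothetical crawl manifest for illustration only.
sample_manifest = [
    "https://example.com/articles/1",
    "https://example.com/articles/2",
    "https://docs.example.org/guide",
]
print(top_source_domains(sample_manifest))
```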

Sam Jones (12:51):
Oh, for models trained on, you know, the vastness of the Internet over years, pulling that summary together sounds incredibly challenging.

Ori Wellington (12:59):
It absolutely is , especially for older models
where record keeping might nothave been as rigorous, but it
represents a fundamental shift.
We're moving away from rightsholders having to guess and sue
towards providers having toproactively disclose and justify
their data sources.

Sam Jones (13:14):
Yeah.

Ori Wellington (13:14):
It really empowers rights holders.

Sam Jones (13:16):
And complaint handling?

Ori Wellington (13:17):
Also required.
You need designated contactpoints and clear procedures so
rights holders can actuallyreach out, file a complaint
about potential infringement andget a response.

Sam Jones (13:26):
Okay, transparency, copyright.
What's the third pillar?
Safety and security right,Especially for those high
compute systemic risk models.

Ori Wellington (13:35):
Exactly.
This chapter really zeroes in on those most powerful, potentially riskiest models.
The obligations here are quite demanding.
Such as?
Providers need to conduct thorough model evaluations, including adversarial testing, often called red teaming, basically trying to break the model or find harmful capabilities before release.

(13:56):
They need ongoing processes to assess and mitigate systemic risks post-deployment.

Sam Jones (14:00):
And reporting issues.

Ori Wellington (14:06):
Yes. Mandatory tracking and prompt reporting of any serious incidents to the AI office and relevant national authorities, plus ensuring state-of-the-art cybersecurity, not just for the model itself but for the whole infrastructure it runs on.
And again, those notification duties kick in if you hit the 10^25 FLOPs compute threshold.

Sam Jones (14:19):
Okay, so taking all this in, what's the real "so what" for businesses?
We're talking major changes, right?
It sounds like AI governance is really moving out of the tech basement and into the boardroom.

Ori Wellington (14:27):
That's the absolute bottom line.
This fundamentally shifts AI governance from being just an IT or maybe a legal problem to being a core line-of-business responsibility.
C-suite needs to be involved.

Sam Jones (14:40):
And for, say, the chief risk officer or the CFO, what are the concrete operational impacts?

Ori Wellington (14:46):
Huge impacts.
Think about disclosure and attestation.
You now need repeatable evidence for things like training data origins, compute usage, energy consumption.
So the CFO needs to find budget to actually build the systems to measure and assure this data, potentially aligning it with existing ESG reporting or internal controls, and they need

(15:07):
to be ready for the AI office asking tough questions.

Sam Jones (15:10):
So it's not just reporting, it's funding the
measurement infrastructure itself.

Ori Wellington (15:13):
Precisely, and copyright compliance.
That becomes a real cost center in the controllership function.
You need budget for crawler controls, for that robots.txt compliance, potentially for licensing data sources, for filtering out illegal content, for running those complaint workflows.

Sam Jones (15:27):
And pushing it down the supply chain.

Ori Wellington (15:29):
Yes. Contracts with suppliers, data providers, cloud providers need to be updated to flow these responsibilities down.
You need assurance they're compliant too.

Sam Jones (15:38):
And for companies working with those really big
systemic risk models.
What's the budget hit there?

Ori Wellington (15:43):
They need to brace for significant spending
on independent evaluations, those intensive red teaming exercises, setting up serious incident response teams and playbooks, and seriously hardening the cybersecurity around these critical AI assets.
Coordinating those compute threshold notifications with cloud providers also needs careful planning and process.

Sam Jones (16:03):
Wow, okay.
And if companies, well, if they get it wrong, the penalties we talked about earlier are truly serious.

Ori Wellington (16:09):
We're talking maximum fines up to 35 million euros or 7% of global annual turnover, whichever is higher, for using prohibited AI or breaching certain other core obligations.
That's GDPR-level stuff.
It could be existential for some businesses. 7% of global turnover.
Yeah, and other major breaches, like violating GPAI obligations, can hit 15 million euros or 3%.

(16:30):
Even just providing incorrect information to authorities could cost 7.5 million euros or 1%.
These fines have real teeth.
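
To put those numbers in perspective, the maximum exposure at each tier is whichever is higher of the fixed amount and the percentage of global annual turnover. A quick worked sketch (the tier labels are informal, not legal terms):

```python
# Fine tiers as discussed: fixed cap vs. percentage of global annual turnover,
# whichever is HIGHER. Tier names are informal labels for illustration.
FINE_TIERS = {
    "prohibited_ai_or_core_breach":  (35_000_000, 0.07),
    "gpai_obligation_breach":        (15_000_000, 0.03),
    "incorrect_info_to_authorities": (7_500_000, 0.01),
}


def max_fine_exposure(global_annual_turnover_eur: float, tier: str) -> float:
    fixed_cap, pct = FINE_TIERS[tier]
    return max(fixed_cap, pct * global_annual_turnover_eur)


# Example: a hypothetical company with EUR 2 billion global annual turnover.
for tier in FINE_TIERS:
    print(tier, f"{max_fine_exposure(2_000_000_000, tier):,.0f} EUR")
```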

Sam Jones (16:39):
And just to recap the timing on those fines: the GPAI rules themselves start August 2025, but the commission's power to actually levy fines for GPAI breaches starts August 2026, right?

Ori Wellington (16:50):
Correct, August 2nd 2026 for the fines related to GPAI obligations, and remember those legacy models have until August 2nd 2027 to comply before facing fines.

Sam Jones (17:01):
So, given how high the stakes are financially and
reputationally, businesses really can't just see this code as another compliance checkbox, can they?

Ori Wellington (17:08):
Absolutely not.
It's far beyond that.
It demands a fundamental rethinking of operational strategy, especially for risk and finance leaders.
It's pushing toward a much more proactive, integrated way of managing AI risk across the entire organization.

Sam Jones (17:21):
Right, that integrated approach.
Let's talk about how to actually achieve that, because the code, the guidelines, that mandatory template, they're not just ideas anymore, are they?
They're about creating auditable proof.

Ori Wellington (17:30):
Exactly.
It shifts the whole game from talking about AI principles to demonstrating auditable processes and artifacts.
It makes providers accountable for managing risk throughout the AI lifecycle.
You have to show your work.

Sam Jones (17:42):
And you mentioned integrated risk management, IRM, is the way to do this.
How does that help structure things?
How does that help structurethings?

Ori Wellington (17:48):
Yeah, irm really provides the practical
framework, the operatingbackbone to weave all these new
duties into how a companyalready manages risk.
It connects the dots betweenenterprise risk ERM operational
risk, orm, technology risk, trmand governance risk and
compliance GRC.

Sam Jones (18:06):
Does it align with other standards people might
already be using?
Yes, perfectly.

Ori Wellington (18:09):
It aligns very well with established frameworks
like the NIST AI Risk Management Framework, which is widely respected globally, and also ISO 42001, the international standard specifically for AI management systems.
So you're building on recognized best practices.

Sam Jones (18:23):
Can you give us a concrete example?
How would IRM handle, say, that model documentation form requirement?

Ori Wellington (18:29):
Sure. So within an IRM framework, that model documentation form isn't just some standalone document floating around.
It gets tagged against core IRM objectives like assurance and compliance.
Okay. Then it plugs into specific risk functions.
It becomes an input for technology risk management, helping manage the AI model as a documented asset.
It informs GRC processes, ensuring policies around model

(18:51):
development and use are being followed.
The end result is you build the central, connected register of your models, their compute logs, their energy use, making everything much easier to track, audit and manage.
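
As a sketch of what tagging these artifacts against IRM objectives and risk functions could look like in a simple internal register, consider the following. The objective and function labels come from the discussion above; the structure and field names are illustrative assumptions.

```python
# Illustrative IRM-style register: each entry links a model's documentation,
# compute logs and energy data to IRM objectives and risk functions
# (ERM, ORM, TRM, GRC). Paths and labels are made up for the sketch.
model_register = [
    {
        "model": "example-llm v1.0",
        "artifacts": {
            "model_documentation_form": "docs/model-doc-form-v1.pdf",
            "compute_log": "logs/training-compute-2025.csv",
            "energy_report": "reports/energy-estimate-2025.md",
            "training_data_summary": "public/training-data-summary.html",
        },
        "irm_objectives": ["assurance", "compliance"],
        "risk_functions": ["TRM", "GRC"],
        "review_cycle": "quarterly internal audit",
    },
]


def artifacts_missing(entry, required=("model_documentation_form",
                                       "training_data_summary")):
    """Flag any required artifacts not yet attached to a register entry."""
    return [a for a in required if not entry["artifacts"].get(a)]


for entry in model_register:
    print(entry["model"], "missing:", artifacts_missing(entry))
```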

Sam Jones (19:01):
That makes a lot of sense, connecting it into
existing structures.
So for the CFOs, the CROs listening right now, feeling maybe a bit overwhelmed, what's a practical starting roadmap?
What should they be doing, like, now?

Ori Wellington (19:13):
Okay, let's break it down.
In the first 30 days or so, urgently start inventorying your AI models: which ones touch the EU market? Identify potential GPAI models and flag any candidates for systemic risk.
At the same time, start deploying ways to measure compute and energy use, or at least document your estimation methods clearly, as the code allows, and, if you are working

(19:34):
on frontier models, get that systemic risk documentation, evaluation, planning and notification process sketched out now.
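
A first-pass inventory can be as simple as a spreadsheet or a short script that flags what is in scope. A toy sketch, with made-up model entries and the compute indicators mentioned earlier used as rough flags, might look like this:

```python
# Toy first-30-days inventory: flag EU-market exposure, GPAI candidates and
# systemic-risk candidates. Model entries and figures are invented examples.
models = [
    {"name": "support-chatbot", "eu_market": True,  "training_flops": 5e21},
    {"name": "frontier-llm",    "eu_market": True,  "training_flops": 3e25},
    {"name": "internal-tagger", "eu_market": False, "training_flops": 1e19},
]


def triage(model):
    """Rough triage against the indicators discussed: 1e23 for GPAI,
    1e25 for presumed systemic risk."""
    return {
        "name": model["name"],
        "in_scope_eu": model["eu_market"],
        "gpai_candidate": model["training_flops"] >= 1e23,
        "systemic_risk_candidate": model["training_flops"] >= 1e25,
    }


for m in models:
    print(triage(m))
```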

Sam Jones (19:41):
Okay, that's a busy first month.
What about the next three months?
The first quarter?

Ori Wellington (19:44):
In the next 90 days, the focus shifts to
governance and policy.
Stand up a cross-functional AI risk council.
Get finance, tech, risk, legal, product and ops leaders in the room together.
Give them ownership of overseeing the model documentation form process and that public training data summary.
Critically, publish your internal copyright policy.
Get your web crawlers configured to respect those

(20:05):
machine-readable opt-outs like robots.txt, and set up your intake mechanism for rights holder complaints.

Sam Jones (20:11):
Right, getting the core policies and processes in place.
And looking further out, six months, a year?

Ori Wellington (20:17):
Over the next six to 12 months, it's about embedding this.
Integrate the model documentation and the training data summaries into your regular internal audit cycles and your board reporting packs.
Start formally aligning your internal IRM controls with the NIST AI RMF structure, and maybe begin looking at ISO 42001 readiness to show maturity.
And beyond the first year, revisit contracts with suppliers, cloud providers, etc.,

(20:52):
to explicitly flow down these transparency and copyright requirements.
Make sure your partners are aligned.

Sam Jones (20:58):
That's a really clear, step-by-step approach. Excellent.
So if we had to boil this entire deep dive down to just a few key takeaways, the absolute must-do actions, what would they be?

Ori Wellington (21:08):
Okay, four key recommendations.
One: fund the basics now.
Seriously, allocate budget for model inventories, getting those model documentation forms filled out, setting up compute and energy metering, and establishing that public summary process for training data.
Don't wait.

Sam Jones (21:22):
Okay, number two.

Ori Wellington (21:23):
Two: institutionalize copyright compliance.
Make respecting machine-readable opt-outs standard practice.
Set up clear channels for rights holders to contact you.
Implement output filtering.
Make sure someone is clearly accountable for this across the
organization.

Sam Jones (21:38):
Got it. Third?

Ori Wellington (21:39):
Three: plan for systemic risk, even if you think it doesn't apply to you today.
Design your model evaluation processes, your adversarial testing plans, your incident response runbooks now, so they're ready to scale if your models, or models you rely on from suppliers, cross those compute thresholds later.
Be prepared.
Makes sense.

Sam Jones (21:58):
And the final recommendation?

Ori Wellington (21:59):
Four: adopt IRM as your operating backbone.
Don't treat this as a separate silo.
Map these new code requirements directly onto your existing integrated risk management objectives: performance, resilience, assurance, compliance.
Integrate them properly across ERM, ORM, TRM and GRC.
Make it part of how you already manage risk.

Sam Jones (22:24):
Fantastic.
So there we have it.
We've really journeyed through the weeds of the EU AI Act and its code of practice today, from the rollout phases and risk levels right down to the nitty-gritty of transparency, copyright and safety rules.

Ori Wellington (22:31):
Yeah, and the key point is, this isn't just more regulation for the sake of it.
It's driving a fundamental shift.
It demands that integrated risk management approach and, yes, some significant operational and financial adjustments.

Sam Jones (22:44):
But getting ahead of the curve, adopting that proactive IRM approach early, that could actually turn these compliance hurdles into a real strategic advantage, couldn't it?
Building trust, making operations more efficient.

Ori Wellington (22:56):
I definitely think so, and you know, this
whole EU effort really throws a spotlight on a massive global tension, doesn't it?
How do we keep AI innovation moving at this incredible pace
while also making sure it's safe, accountable and stays within
ethical lines?

Sam Jones (23:12):
That's the core challenge.

Ori Wellington (23:18):
The EU has drawn its line in the sand here, and
you can bet the ripples from this will influence AI development everywhere.
What does that mean for companies trying to navigate different rules in different countries?
How will it shape the design of AI systems themselves going
forward?
Lots to think about.

Sam Jones (23:28):
Absolutely. Well, hopefully this deep dive has given you, our listeners, the knowledge you need to start having those vital conversations inside your own organizations.
We really encourage you to think about your own AI practices and how these insights might shape your strategy.

Ori Wellington (23:45):
Definitely food for thought.

Sam Jones (23:46):
Thanks so much for joining us for the deep dive
today.
We'll catch you next time for another essential exploration.