Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Sam (00:00):
Welcome to the deep dive.
You know, almost every major enterprise today is rapidly integrating artificial intelligence into its core operations.
It's happening everywhere.
It really is.
But I think the biggest shift isn't just, uh, using AI, it's actually empowering it.
Empowering it.
How so?
Well, we're seeing the emergence of what some call autonomous integrated risk management, or autonomous IRM.
(00:23):
These are like self-learning agents that are independently monitoring risks, flagging issues, and sometimes even making decisions.
Right.
And that idea, where the system itself makes consequential decisions, that opens up a huge governance challenge, doesn't it?
A massive one.
If these AI agents are operating independently, then executives, boards, they fundamentally need some kind of
(00:46):
structural guarantee.
A guarantee that these
systems are trustworthy and
compliant.
Exactly.
Trustworthy, compliant, and really aligned with the company's strategic goals.
That's the core problem.
Okay.
So providing that guarantee, but without drowning the C-suite in complex regulatory details, that's our mission for this deep dive.
We've looked at sources covering the three really
(01:07):
critical global AI governance frameworks.
Ori (01:10):
That's right.
We're going to do a comparative analysis of ISO/IEC 42001, the EU AI Act, and the NIST AI Risk Management Framework, the RMF.
And we want to cut straight to what matters for you listening.
What does executive leadership absolutely need to prioritize?
And then how can your operational teams actually implement these rules across all sorts of different systems?
(01:31):
And crucially, how does each framework specifically tackle that really unique accountability challenge posed by an autonomous AI agent?
That's key.
Definitely.
Okay, let's maybe start unpacking this with the strategic view, kind of the top-down perspective.
If you're in the C-suite or perhaps on a board, these frameworks probably look quite different, at least on paper.
They do.
Let's begin with ISO/IEC 42001.
(01:54):
Now, this is described as a voluntary standard that immediately sounds, well, softer than a mandatory law.
So why should executives really invest serious time and, let's be honest, capital in adopting it if it's just voluntary?
Yeah, that's absolutely the
right question to ask.
So ISO 42001, it's the AI management system standard, or
(02:16):
AIMS.
And yes, it's voluntary, but the key thing is it is
certifiable.
Sam (02:20):
Okay, certifiable.
Ori (02:22):
And the strategic payoff, I
think, is really twofold.
First, it actually requires top management involvement.
They have to formally integrate AI governance right into the existing business processes.
It's about building that necessary internal scaffolding.
Sam (02:35):
Yeah, I see.
So it's less a compliance burden and maybe more an organizational enabler, helps structure things internally.
Ori (02:41):
Precisely.
And the second payoff is all about demonstrability.
Adopting it sends a clear signal to your stakeholders, customers, investors, regulators, too.
It shows you're committed to responsible AI.
And critically, if you're planning to comply with, say,
the EU AI Act down the line.
Sam (02:56):
Which many global companies
will have to.
Ori (02:58):
Right.
Then 42001 gives you a
ready-made auditable management
system.
You can use that to map your processes and actually prove
compliance.
It significantly lowers that future friction.
Okay, that makes sense.
Now let's contrast that with the EU AI Act.
You mentioned it, and it sounds anything but soft.
No, definitely not.
Sam (03:14):
This is a binding law.
It's coming soon, based on risk tiers.
What's the absolute non-negotiable strategic demand
from this legislation?
Ori (03:23):
Well, the board simply must
treat AI risk as a core
enterprise-level risk.
Full stop.
The strategic imperative here, honestly, feels very similar to the introduction of GDPR a few years back for privacy.
Sam (03:35):
GDPR, right.
If you use or you provide what the Act defines as high-risk AI systems, and it's quite specific about what those are, you absolutely must ensure comprehensive compliance from day one.
There's no grace period, really.
Why the comparison to GDPR?
Was it the potential impact, the fines?
Exactly that.
The financial penalties are structured specifically to command board-level attention.
(03:56):
Noncompliance could trigger fines up to €35 million or, and this is the kicker, 7% of global annual turnover, whichever figure is higher.
7%?
Wow.
For a large global company, that's not trivial.
It's absolutely not a rounding error.
For many, it could be a catastrophic risk.
So executives really have to establish proper AI oversight
(04:18):
governance now, primarily to protect the firm's bottom line
and of course its reputation.
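To make that penalty math concrete, here is a minimal Python sketch of the exposure calculation described above, using only the €35 million / 7%-of-global-annual-turnover figures quoted in this episode; the turnover number in the example is purely illustrative.

```python
def max_fine_exposure_eur(global_annual_turnover_eur: float) -> float:
    """Worst-case EU AI Act penalty as described above:
    up to EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# Illustrative example: a firm with EUR 40 billion in global annual turnover.
if __name__ == "__main__":
    turnover = 40_000_000_000
    print(f"Maximum exposure: EUR {max_fine_exposure_eur(turnover):,.0f}")  # EUR 2,800,000,000
```

At that scale the 7% branch dominates the fixed €35 million floor, which is exactly why the GDPR comparison keeps coming up at board level.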
Okay, understood.
And the third one, the NIST AI RMF 1.0.
This comes from the U.S., also voluntary, like ISO, but you mentioned it's not certifiable.
So what's its strategic value then?
Yeah, NIST RMF plays a really crucial role, particularly in places, you know, without binding AI laws yet. It's
(04:39):
rapidly becoming a kind of global benchmark for best
practice.
For boards, adopting the NIST framework demonstrates a proactive stance.
It shows they're fulfilling their fiduciary responsibility
around managing emerging risks.
So it helps guide them.
It guides leadership to ask the right, quite specific questions about their AI risk exposure.
And what are those right questions generally centered
around?
What's the focus?
(05:00):
Fundamentally, trustworthiness.
The NIST framework is really built around ensuring AI systems are fair, secure, transparent, and accountable.
Leadership can use it to develop a common language, a shared understanding around AI risk within the organization.
It essentially provides a blueprint for self-regulation, helping evaluate potential harms even before specific rules
(05:23):
exist.
So it structures the
conversation at the highest
level.
Ori (05:26):
Precisely.
It gets everyone on the same page about what good looks like.
Sam (05:29):
Okay, that lays out the
strategic foundations well.
Now let's shift gears a bit.
Let's talk about where the rubber meets the road: implementation readiness.
What about the operational teams, the risk, compliance, legal, IT folks?
How do they take these strategic mandates and turn them into, like, an actual operational checklist?
Ori (05:47):
Good question.
For teams implementing ISO 42001, it's mostly about process integration.
The standard gives you a structured AI management system, the AIMS, using that familiar Plan-Do-Check-Act model that allows for continuous improvement.
Sam (06:01):
PDCA, right?
Many teams know that.
Ori (06:04):
Exactly.
And the real efficiency boost comes if your teams are already following, say, ISO 27001 for security or maybe 27701 for privacy.
Sam (06:14):
Ah, so they're not starting completely from scratch.
They can sort of layer in the AI-specific controls onto existing systems.
That's the idea.
The teams get 38 specific controls and a set of required AI policies.
That provides a really clear, objective checklist for auditing and making improvements.
And if the organization does decide to go for certification, that certificate acts as a powerful, objective benchmark
(06:37):
that's recognized globally.
It proves you've done the work.
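As a rough illustration of that layering idea, here is a minimal Python sketch of an internal control register that extends an existing ISO 27001-style register with AI-specific entries and flags coverage gaps. The control IDs, descriptions, and required policy areas are invented for illustration; they are not the standard's actual Annex A text.

```python
# Hypothetical control register: existing ISMS controls plus AI-specific additions.
existing_controls = {
    "ISMS-ACCESS-01": "Access control for production systems",
    "ISMS-LOG-02": "Centralised security logging",
}

ai_controls = {
    "AIMS-POL-01": "AI policy approved by top management",
    "AIMS-IMPACT-02": "AI system impact assessment",
    "AIMS-DATA-03": "Training data provenance and quality checks",
}

required_ai_policy_areas = ["AI policy", "impact assessment", "data provenance", "incident response"]

def find_gaps(controls: dict, required: list) -> list:
    """Return required policy areas with no matching control description (naive keyword check)."""
    text = " ".join(controls.values()).lower()
    return [area for area in required if area.lower() not in text]

combined = {**existing_controls, **ai_controls}
print("Uncovered areas:", find_gaps(combined, required_ai_policy_areas))
# -> ['incident response'] in this illustrative register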
Okay.
Now, preparing for the EU AI Act implementation sounds, well, significantly more demanding.
Ori (06:45):
Uh-huh.
Sam (06:46):
What are the immediate
actions for operational teams
who know they're facing mandatory compliance soon?
Yeah, the workload is definitely heavy there.
For any system deemed high-risk, teams have to implement a continuous AI risk management system.
And that's across the entire AI lifecycle.
Plus, they need a full quality management system wrapped around it.
This means really robust data governance.
(07:07):
What does that entail
practically?
Ori (07:09):
Things like auditing your data sets for bias, making sure your data quality pipelines are sound, detailed record keeping, logging all system activity, and defining very rigid human oversight protocols.
Sam (07:21):
Okay, so if you're on a
risk or legal team listening
right now, what should you be doing, like, today to prepare?
Ori (07:29):
Honestly, conduct rigorous
gap assessments immediately,
like right now.
You absolutely must identify all the systems that are likely to meet the high-risk criteria defined in the Act.
Think about systems impacting credit scoring, hiring decisions, insurance eligibility.
Sam (07:46):
High stakes areas.
Ori (07:47):
Exactly.
Once you've identified them, the teams need to start designing those human-in-the-loop or human-on-the-loop mechanisms and ensure there's comprehensive cross-functional training involving legal, IT, risk, everyone.
Enforcement is getting closer, and this requires serious resource allocation starting now.
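A minimal sketch of what that first triage pass might look like, assuming a simple internal inventory of AI systems; the high-risk categories used here are only the examples mentioned in this episode (credit scoring, hiring, insurance eligibility), not the Act's full list.

```python
# Illustrative inventory of AI systems and the use-case category each one touches.
inventory = [
    {"name": "loan-scoring-v3", "category": "credit scoring"},
    {"name": "cv-screening-bot", "category": "hiring decisions"},
    {"name": "marketing-copy-llm", "category": "content generation"},
    {"name": "claims-eligibility-ai", "category": "insurance eligibility"},
]

# Example high-stakes categories mentioned above (not the Act's complete list).
likely_high_risk_categories = {"credit scoring", "hiring decisions", "insurance eligibility"}

def gap_assessment(systems):
    """Flag systems likely to meet high-risk criteria and therefore needing
    human oversight design, data governance checks, and lifecycle risk management."""
    return [s["name"] for s in systems if s["category"] in likely_high_risk_categories]

print("Likely high-risk, prioritise now:", gap_assessment(inventory))
# -> ['loan-scoring-v3', 'cv-screening-bot', 'claims-eligibility-ai']
```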
Sam (08:03):
Got it.
And what about the operational side for teams adopting the NIST AI RMF?
You mentioned a playbook, which sounds quite practical.
It really is the most flexibleof the three, I'd say.
NIST is designed to be immediately usable and highly tailorable.
You can adapt it to pretty much any organizational context, doesn't matter the industry or size.
That adaptability is key.
Ori (08:22):
And it deliberately comes with practical resources, like that playbook you mentioned, and also crosswalks showing how NIST maps to other standards, like ISO or even principles in the EU AI Act.
This is hugely helpful for operational teams who need, you know, usable guidance, not just dense legislative text.
But how easily can, say, my existing risk management teams
(08:45):
who might be more used to traditional frameworks actually pick up and use the NIST RMF?
Do they need specialized AI engineering training?
That's actually one of the strengths of its design.
It's structured around four high-level, sort of process-oriented core functions
(08:56):
govern, map, measure, and manage.
Teams usually start by mapping their AI systems to these functions.
This allows for a staged rollout.
You can pilot the framework on maybe one or two use cases first.
Sam (09:09):
Learn as you go.
Ori (09:10):
Exactly.
Refine your internal processes for things like bias testing or model validation on a smaller scale before you try to apply it everywhere.
It helps build that readiness organically.
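To illustrate that staged rollout, a minimal Python sketch that maps a couple of pilot use cases onto the four core functions; the task lists are illustrative shorthand, not the framework's actual categories and subcategories.

```python
# Illustrative checklist keyed to the four NIST AI RMF core functions.
RMF_FUNCTIONS = {
    "govern": ["name an accountable owner", "define risk tolerance"],
    "map": ["document context and intended use", "identify affected users"],
    "measure": ["run bias tests", "validate model performance"],
    "manage": ["set monitoring thresholds", "define incident response"],
}

pilot_use_cases = ["fraud-flagging-model", "chat-support-assistant"]

def rollout_plan(use_cases, functions=RMF_FUNCTIONS):
    """Build a per-use-case task list, so the framework is piloted on a few systems first."""
    return {uc: [f"{fn}: {task}" for fn, tasks in functions.items() for task in tasks]
            for uc in use_cases}

for use_case, tasks in rollout_plan(pilot_use_cases).items():
    print(use_case, "->", len(tasks), "tasks, starting with:", tasks[0])
```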
Sam (09:20):
Right, builds the muscle
memory.
Okay, that makes perfect sense.
Now let's move to what feels like the real cutting edge here.
Section three: governing the autonomous AI agent.
When systems are making these self-learning risk decisions, potentially without direct human input moment to moment, how do these frameworks possibly ensure accountability?
Ori (09:40):
Yeah, this is where the philosophies of the three frameworks really start to diverge, I think, quite significantly.
ISO 42001 tends to address autonomy through its focus on continuous lifecycle management.
It mandates ongoing monitoring, auditing, and improvement processes, specifically because those self-learning models adapt over time.
Sam (09:59):
So it tracks the evolution.
Ori (10:00):
Right.
And for an autonomous agent under ISO, it requires that basically every decision it makes must be explainable, fully auditable, and demonstrably free from prohibited biases.
That's needed to satisfy both internal reviews and potentially external auditors.
Sam (10:15):
So ISO demands that even if the decision is autonomous, the process has to be reconstructible and justifiable after the fact.
That's a good way to put it.
Okay.
What about the EU AI Act's stance on unchecked autonomy?
It sounds like they might be warier.
Ori (10:30):
Much warier.
Sam (10:31):
Yeah.
The EU AI Act puts a fundamental check on autonomy, especially for anything classified as high risk.
For any high-stakes AI decisions in those categories, it explicitly mandates human oversight protocols.
Human oversight meaning.
Ori (10:47):
Meaning either human-in-the-loop, where a person has to actively approve the decision before it's executed, or human-on-the-loop, where a human retains the ability to step in to intervene and stop or override the system's decision.
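A minimal Python sketch of that distinction, assuming a toy decision pipeline: human-in-the-loop blocks until a person approves, while human-on-the-loop executes but keeps a person able to intervene and reverse the action. The function names and reviewers are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    subject: str
    action: str

def human_in_the_loop(decision: Decision, approve: Callable[[Decision], bool]) -> bool:
    """Nothing executes until a person explicitly approves the AI's proposed decision."""
    if approve(decision):
        print(f"EXECUTED after approval: {decision.action} for {decision.subject}")
        return True
    print(f"BLOCKED by reviewer: {decision.action} for {decision.subject}")
    return False

def human_on_the_loop(decision: Decision, override: Callable[[Decision], bool]) -> bool:
    """The system acts, but a person monitors and can step in to reverse it."""
    print(f"EXECUTED autonomously: {decision.action} for {decision.subject}")
    if override(decision):
        print(f"OVERRIDDEN by supervisor: reversing {decision.action}")
        return False
    return True

# Illustrative usage with stand-in reviewers.
d = Decision(subject="claim-1042", action="deny claim")
human_in_the_loop(d, approve=lambda dec: False)   # reviewer rejects the AI's proposal
human_on_the_loop(d, override=lambda dec: True)   # supervisor intervenes after the fact
```

The design point is simply where the human sits relative to execution: before it (in the loop) or alongside it with an override (on the loop).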
So it sets very clear, non-negotiable boundaries on just how much autonomy is actually permitted in sensitive areas.
Absolutely.
And beyond just oversight, the Act goes further.
(11:07):
It outright prohibits certain autonomous use cases entirely, things deemed just too dangerous to societal norms.
Sam (11:13):
Like what?
Ori (11:14):
Things like autonomous
social scoring by governments,
or systems designed specifically to manipulate human behavior in ways that could cause psychological or physical harm.
Those are banned.
Wow.
And furthermore, because autonomous systems can, by their nature, evolve unpredictably, the Act requires providers of high-risk AI to implement really rigorous post-market
(11:36):
monitoring.
They have to watch how it performs in the real world and report any serious incidents immediately.
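A minimal Python sketch of that post-market monitoring idea, assuming you track one live error-rate metric against a validated baseline and escalate serious deviations for prompt incident reporting; the thresholds and field names are illustrative, not anything the Act prescribes.

```python
from datetime import datetime, timezone

BASELINE_ERROR_RATE = 0.02   # illustrative rate observed during validation
SERIOUS_DRIFT_FACTOR = 3.0   # illustrative escalation threshold

def check_post_market(live_error_rate: float) -> dict:
    """Compare real-world performance to the validated baseline and flag serious incidents."""
    serious = live_error_rate > BASELINE_ERROR_RATE * SERIOUS_DRIFT_FACTOR
    report = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "live_error_rate": live_error_rate,
        "serious_incident": serious,
    }
    if serious:
        print("Serious incident: escalate to the provider's incident-reporting process.")
    return report

print(check_post_market(0.09))  # 0.09 > 0.06 baseline*factor, so this would be escalated
```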
Sam (11:42):
Constant vigilance
required.
Okay.
And how does the NIST framework approach governing these autonomous risk agents?
Ori (11:49):
NIST definitely acknowledges the unique risks here, particularly unpredictability and the sort of black-box problem, where you don't always know why the AI did what it did.
It strongly urges organizations to view the AI agent not just as a piece of software, but as a socio-technical system.
Sam (12:04):
Socio-technical.
Meaning it involves people and processes around the tech.
Ori (12:08):
Precisely.
This holistic view means the organization has to implement strong governance guardrails around the agent.
Things like assigning clear accountability to specific human owners, implementing mandatory security measures, setting up rigorous bias evaluation pipelines before deployment, and having strong contingency plans ready in case the agent fails
(12:28):
or, you know, drifts unexpectedly off course.
Rigorous testing is critical.
Sam (12:34):
Testing for fairness,
robustness, explainability, all
those trustworthiness elements again.
Ori (12:39):
Exactly, before you let it
run autonomously.
Sam (12:41):
Okay, that frames the
governance challenge really
well.
Let's try to bring it all together now by looking at maybe three quick operational scenarios.
I think this will help illustrate how you might need a hybrid approach in practice.
Ori (12:53):
Sounds good.
Let's start with, say, technology risk.
Imagine a global bank using generative AI and maybe large language models to help flag sophisticated cyber threats in real time.
This system obviously introduces significant risks if it's wrong.
Sam (13:06):
Okay, how would they govern that kind of system using these frameworks?
frameworks?
Ori (13:10):
Well, they'd likely use the
NIST AI RMF as their
foundational blueprint.
That helps them establish rigorous controls for robustness and accuracy.
They'd specifically need to test the AI with adversarial scenarios, trying to fool it, to minimize both false positives, which could shut systems down unnecessarily, and false negatives, which miss real threats.
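A minimal Python sketch of that evaluation step, assuming you already have labelled adversarial test cases and the model's flags; it simply computes the false-positive and false-negative rates discussed above.

```python
def fp_fn_rates(labels, predictions):
    """labels/predictions are booleans: True = real threat / flagged as threat."""
    fp = sum(1 for y, p in zip(labels, predictions) if not y and p)  # benign but flagged
    fn = sum(1 for y, p in zip(labels, predictions) if y and not p)  # threat but missed
    negatives = sum(1 for y in labels if not y) or 1
    positives = sum(1 for y in labels if y) or 1
    return fp / negatives, fn / positives

# Illustrative adversarial test set: 6 cases, 3 real threats.
labels      = [True, True, True, False, False, False]
predictions = [True, False, True, True, False, False]

fp_rate, fn_rate = fp_fn_rates(labels, predictions)
print(f"False positive rate: {fp_rate:.0%}, false negative rate: {fn_rate:.0%}")
# -> False positive rate: 33%, false negative rate: 33%
```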
Sam (13:31):
So NIST for the technical
rigor.
Ori (13:33):
Right.
Then they'd probably layer ISO 42001 on top.
That helps structure the continuous monitoring processes and ensures the internal feedback loops for improvement are correctly set up and audited.
Sam (13:45):
And the EU AI Act?
Ori (13:46):
Well, even if they aren't legally required to comply yet in a specific jurisdiction, their compliance team would likely ensure alignment with EU AI Act principles.
That means meticulously documenting the AI's decision logic and making sure they maintain defined human intervention capabilities for any really critical security decisions.
It's about future-proofing and best practice.
Sam (14:07):
Okay, that's a very
practical layering approach.
Let's take scenario two.
Operational risk.
Maybe an insurance company automating parts of its claims assessment process, perhaps using AI to check for potential fraud.
Given the high impact on people's finances, that sounds like it would almost certainly be classified as high-risk under the EU AI Act, right?
Ori (14:24):
Oh, definitely.
That's a classic example.
So here, the EU AI Act controls become non-negotiable and mandatory.
They'd need that formal risk management system we talked about, continuous data quality checks, specifically looking for biases that could unfairly deny claims, and absolutely mandatory human oversight for any contested claim decisions.
The AI can't have the final say if disputed.
Sam (14:47):
So the law dictates the
core requirements.
How do the others help?
Ori (14:51):
To actually make this
operational, they'd probably
lean on ISO 42001.
It helps structure the required AI impact assessment and guides the implementation of specific bias mitigation measures within their workflow.
And finally, the NIST RMF would provide valuable guidance to their technical teams on how to conduct validation testing, making sure the fraud flags are genuinely reliable and that the
(15:12):
system is technically robust and secure.
Sam (15:15):
Interesting.
So the binding law, the EU Act, sets the "what must be done."
And the voluntary standards, ISO and NIST, help define the auditable "how" for the teams executing it.
Ori (15:25):
That's a great way to
summarize it.
And yes, it means managing potentially three overlapping sets of requirements simultaneously.
That's just the reality for many global enterprises now.
Sam (15:34):
Okay.
Final scenario.
Let's look at the GRC space itself: governance, risk, and compliance.
Imagine an autonomous IRM assistant, an AI agent that's autonomously scanning, say, employee communications to flag potential policy violations.
Ori (15:50):
Right.
This involves both
operational risk like data
handling and also HR compliance considerations.
Very sensitive.
Sam (15:56):
So how do you govern that?
Seems tricky.
Ori (15:58):
It requires extreme clarity
on accountability.
Here, the NIST AI RMF would likely form the bedrock foundation.
It ensures clear accountability is assigned up front, meaning a specific compliance officer, a human, must be named as the ultimate owner and oversight mechanism.
Sam (16:13):
So a human is always
responsible.
Ori (16:14):
Always.
That human reviews any high-stakes alerts generated by the AI and maybe overrides decisions, ensuring the AI's outputs remain subject to human judgment, especially for disciplinary actions.
Then the organization's ISO 42001 management system would require that the agent's underlying algorithms are fully auditable and transparent.
(16:36):
This allows internal audit teams, for example, to confirm the system is only looking for what it's explicitly authorized to find and not overreaching.
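A minimal Python sketch of that accountability pattern, assuming a named compliance officer is registered up front as the owner and every high-stakes alert routes to them before any action is taken; the names, severity scale, and threshold are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    employee_id: str
    policy: str
    severity: int  # illustrative 1-5 scale

# A specific human is registered up front as the accountable owner.
ACCOUNTABLE_OWNER = "compliance.officer@example.com"
HIGH_STAKES_SEVERITY = 4

def route_alert(alert: Alert) -> str:
    """High-stakes alerts go to the named human owner; the agent never acts on them itself."""
    if alert.severity >= HIGH_STAKES_SEVERITY:
        return f"Queued for human review by {ACCOUNTABLE_OWNER}; no automated action taken."
    return "Logged for periodic audit; low-stakes, no immediate action."

print(route_alert(Alert("E-7781", "data-handling", severity=5)))
print(route_alert(Alert("E-2204", "expense-policy", severity=2)))
```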
Sam (16:44):
What about the regulatory
angle, like the EU AI Act
principles?
Ori (16:48):
Well, even for an internal tool like this, the company might choose to voluntarily follow key EU AI Act principles.
Things like maintaining detailed documentation of how the system works and ensuring transparency to affected employees about the monitoring process itself.
Sam (17:01):
Why do that voluntarily?
Ori (17:02):
It provides essential
ethical reassurance both to the
board and to employees.
And frankly, it also prepares the firm for potential future regulations expanding into these sensitive internal use cases.
It's prudent.
Sam (17:13):
Okay, that makes a lot of
sense.
So synthesizing all this information from our sources, for the executive listener, what's the single most important takeaway message here?
Ori (17:23):
Yeah, I think the common, really critical theme across all these frameworks and scenarios is the absolute necessity of building governance structures from the top down, but combining that with genuine cross-functional readiness on the ground.
You simply cannot delegate AI governance entirely to the tech department or assume it'll just happen.
It needs deliberate structure.
Sam (17:43):
So leadership has to drive
it.
Ori (17:45):
Absolutely.
Executives must prioritize establishing some kind of foundational framework, whether that's going for ISO 42001 certification, building comprehensive compliance programs for the EU AI Act, or maybe adopting the NIST RMF as the core internal guideline.
And crucially, this means defining crystal clear roles: who on the board has oversight, who are the designated risk
(18:06):
owners, which technical teams handle compliance, and mandating continuous monitoring and reporting.
Sam (18:12):
That clarity seems like the
only way you can gain the
confidence needed to actually harness the power of autonomous AI effectively and safely.
Ori (18:18):
I believe so.
Sam (18:19):
But, you know, it does
raise a final really
thought-provoking question for you, the listener, to perhaps
mull over this week.
If organizations are increasingly relying on these sophisticated AI agents for autonomous risk management, tools that, as we've heard, are specifically required to be auditable and transparent, what new ethical frameworks and new
(18:39):
accountability protocols do we need?
Specifically, what do human compliance officers need to develop to truly manage a system that isn't static, but is constantly learning and potentially adjusting its own
definitions of risk?
Ori (18:53):
That's the deep question,
isn't it?
The human role seems to shift.
It's less about just assessing the risk directly and more about auditing the machine that audits the risk.
And that machine is constantly moving the goalposts through its learning.
It really becomes a challenge of managing the risk of the AI managing itself.
Sam (19:08):
A fascinating challenge
indeed.
A perfect closing thought for this deep dive.
Thank you so much for guiding us through these essential and complex frameworks today.