Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Ori Wellington (00:00):
Welcome to the Deep Dive. Today we're really digging into something that feels like it's jumped straight out of science fiction and into the, well, the corporate reality.
Sam Jones (00:09):
Right, AI agents in the enterprise.
Ori Wellington (00:11):
Exactly. What happens when they start to outnumber the humans and, maybe more importantly, how do you manage that kind of risk, especially, you know, if one goes rogue?
Sam Jones (00:20):
That's the million-dollar question, isn't it? Or maybe billion-dollar, given the stakes?
Ori Wellington (00:24):
Could be. So for this Deep Dive, we've got some really sharp insights. We're starting with a, frankly, pretty stark warning from Nikesh Arora, the CEO of Palo Alto Networks.
Sam Jones (00:35):
Yeah, he doesn't
mince words on this topic.
Ori Wellington (00:45):
No, he doesn't, and we're also going to bring in a broader perspective looking at integrated risk management, or IRM.
Sam Jones (00:48):
We're drawing on an article there by John A. Wheeler, which is crucial for connecting the dots between the tech risk and the overall business.
Ori Wellington (00:51):
Precisely. Our mission here is to really understand the huge shift these AI agents represent for, well, for your organization's risk landscape, and why getting a holistic grip on managing them isn't just, you know, a nice-to-have.
Sam Jones (01:05):
It's urgent,
absolutely imperative.
Ori Wellington (01:07):
Right.
So the hook is really this: what does it mean when AI agents are, as Arora puts it, running around trying to help you manage your enterprise, and there are more of them than people?
Sam Jones (01:20):
And what happens if
they go off script?
That's the core fear.
Ori Wellington (01:23):
Okay, let's unpack this. Nikesh Arora's warning, I think it was on CNBC, was incredibly direct. He predicted, and this is the quote that really jumps out: there's going to be more agents than humans running around trying to help you manage your enterprise.
Sam Jones (01:37):
Wow, just pause on
that for a second.
More agents than humans.
Ori Wellington (01:41):
Yeah, and when you really think about it, this isn't just like another software update. It's a fundamental, it's a massive change in the entire risk surface for big companies.
Sam Jones (01:52):
Absolutely. And what's really critical there is the access they'll need. These agents aren't just, you know, handling simple website chats. They're going to need privileged access, deep access into your critical systems, your infrastructure.
Ori Wellington (02:04):
Right, the crown jewels.
Sam Jones (02:05):
Exactly, and if you don't have really solid guardrails, proper controls, the threats are immense. We're talking agents getting hijacked for ransomware.
Ori Wellington (02:15):
Which we already see happening in other contexts.
Sam Jones (02:17):
For sure. Or, you know, systemic sabotage across your operations, or just outright business disruption, stopping everything.
Ori Wellington (02:25):
And Arora's bottom line on this really hits home. He said the whole new art of securing these agents, this art of securing AI, is going to become the next bastion in cybersecurity.
Sam Jones (02:36):
It's a whole new
battlefield, essentially.
Ori Wellington (02:38):
Feels like it.
Sam Jones (02:38):
It really does. And, connecting this up, Arora really zeroed in on identity as the, well, the central point, the control plane for AI risk.
Ori Wellington (02:47):
Okay, identity,
how so?
Sam Jones (02:48):
Well, think about it. Just like your human employees, these AI agents need unique identities. They need clear sponsors, someone responsible for them, and they need very specific permissions or entitlements, saying exactly what they can access and what they can do.
Ori Wellington (03:02):
Right, like a
job description, but for code.
Sam Jones (03:04):
Kind of, yeah. Without that basic identity framework, you've got no real way to contain an agent that goes off the rails, or even just to quickly revoke its access if something looks fishy. It's foundational.
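To make that identity framework concrete, here is a minimal sketch in Python of what an agent identity record could look like. The class, field names, and the entitlement check are illustrative assumptions for this discussion, not any vendor's actual schema or API.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """Hypothetical identity record for one AI agent (illustrative only)."""
    agent_id: str                 # unique identity, like an employee ID
    sponsor: str                  # the human accountable for this agent
    entitlements: set[str] = field(default_factory=set)  # exactly what it may do
    revoked: bool = False         # flip this the moment something looks fishy

    def is_allowed(self, action: str) -> bool:
        # Deny everything once revoked; otherwise check specific entitlements.
        return not self.revoked and action in self.entitlements

# Containment means revoking the identity, not hunting down the agent.
agent = AgentIdentity("agent-0042", sponsor="jane.doe",
                      entitlements={"read:invoices", "write:audit-log"})
assert agent.is_allowed("read:invoices")
agent.revoked = True
assert not agent.is_allowed("read:invoices")
```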
Ori Wellington (03:17):
And Palo Alto Networks putting billions into buying CyberArk, an identity company, certainly backs that up.
Sam Jones (03:22):
Absolutely underscores the point, big time. Identity is becoming central.
Ori Wellington (03:27):
So we're talking about potentially thousands, millions of digital workers with keys to various parts of the kingdom.
Sam Jones (03:34):
Yeah.
Ori Wellington (03:34):
Yeah, that's a bit terrifying. It keeps CISOs awake at night for sure, I bet. And Arora used a really good analogy to make this feel more real: he compared these agents to self-driving cars.
Sam Jones (03:44):
The Waymo example.
Ori Wellington (03:45):
Exactly. He pointed to Waymo as, like, a functioning agent out in the real world. It makes decisions in real time: speed up, slow down, turn here, stop there. All on its own.
Sam Jones (03:56):
That analogy is spot on because, think about it: if a self-driving car gets hijacked, disaster, immediate physical disaster. Right, catastrophic. And that directly mirrors the potential impact if one of these enterprise AI agents gets compromised, if it's operating autonomously inside your core systems.
Ori Wellington (04:14):
Making decisions, taking action, without a human in the loop.
Sam Jones (04:17):
The consequences of a breach could be instant and devastating for the whole business.
Ori Wellington (04:21):
OK, so identity is step one. Containment, policies, that's baseline. But you mentioned something else: IRM.
Sam Jones (04:28):
Exactly, integrated risk management. This is the crucial next step because, while identity and basic controls are essential, they need to live within an IRM model.
Ori Wellington (04:38):
Why is that so
important?
Sam Jones (04:39):
Because IRM ensures that those agent guardrails aren't just technical rules in a vacuum. They're directly tied to your overall enterprise goals: performance, resilience, assurance, compliance.
Ori Wellington (04:54):
It makes the security effective across the whole organization. Got it, so it connects the tech security to the business strategy.
Sam Jones (04:56):
Precisely. We can actually use that car analogy again to see how IRM itself has evolved. Think of risk management in the past.
Ori Wellington (05:03):
Spreadsheets and
SharePoint.
Sam Jones (05:04):
Yeah, the car of yesterday: basic, slow, kind of clunky for managing compliance. Lots of manual work, error-prone.
Ori Wellington (05:12):
Okay.
Sam Jones (05:13):
Then today we have, maybe, driver-assist IRM. It's smarter, more integrated platforms, but still heavily reliant on humans making the key decisions.
Ori Wellington (05:21):
Some adaptive cruise control, maybe.
Sam Jones (05:23):
Good analogy, but the key insight for AI agents is the car of tomorrow: autonomous IRM. This is where AI agents themselves can actually take risk management actions at scale, at speed.
Ori Wellington (05:36):
Because they're governed by those IRM guardrails.
Sam Jones (05:38):
Exactly, that's the linchpin. Your security, your risk management, it starts to look like this almost self-driving system, itself governed by IRM principles. That's the big takeaway here.
Ori Wellington (05:48):
That makes a lot of sense. It paints a picture of much more robust, integrated control. Yeah, but why the urgency? Why is this shift to IRM so critical right now?
Sam Jones (05:56):
Great question. There are basically three big external forces really pushing this. First, regulation is accelerating fast.
Ori Wellington (06:03):
Okay, like what?
Sam Jones (06:04):
Well, the big one is the EU AI Act. It's already starting to phase in. You've got prohibitions and AI literacy requirements hitting in 2025, full obligations by 2026. This isn't optional.
Ori Wellington (06:16):
So companies
need to get ready now.
Sam Jones (06:17):
Definitely. Plus, you've got standards emerging like ISO/IEC 42001. That sets up an auditable AI management system. Think of it like a blueprint for proving you're governing AI responsibly. And then there's the NIST AI Risk Management Framework, the AI RMF. That gives you a lifecycle structure: how you govern, map, measure and manage AI risk from start to finish. These are becoming the global benchmarks.
Ori Wellington (06:44):
So regulation is
driver number one.
What's next?
Sam Jones (06:46):
Second, the big consulting firms are jumping in feet first. Companies like KPMG, EY, Deloitte, they're already launching multi-agent platforms, meaning they're building services that embed these AI agents directly into their clients' operations. So adoption isn't just going to be driven by tech vendors, it's being pushed hard by professional services too. These agents are coming, and probably faster than many realize.
Ori Wellington (07:08):
Okay, so the deployment is accelerating because the consultants are pushing it.
Sam Jones (07:12):
Right. And third, and this loops back to Arora's point: breach velocity. Attacks are getting incredibly fast.
Ori Wellington (07:19):
The 25-minute
stat.
Sam Jones (07:20):
Exactly. Attack to data exfiltration in just 25 minutes. That's terrifyingly quick.
Ori Wellington (07:25):
Yeah, no time
for a committee meeting there.
Sam Jones (07:27):
None. It means security controls absolutely cannot be an afterthought. You can't bolt them on later. They must be integrated from day one. How you build the agent, how you deploy it, how you monitor it, it has to be baked in.
Ori Wellington (07:40):
So regulation, consulting pushing adoption, and lightning-fast attacks. That paints a pretty urgent picture.
Sam Jones (07:47):
It does, and that's where a model like the IRM Navigator comes in handy. It helps structure how you integrate these agents safely.
Ori Wellington (07:52):
Okay, the IRM Navigator. Break that down for us.
Sam Jones (07:55):
Sure. It basically looks at integrating agents through four main objectives, or risk domains. First is performance, which falls under enterprise risk management, or ERM.
Ori Wellington (08:04):
So business
value.
Sam Jones (08:05):
Right. Deciding where autonomy actually creates measurable value. Like, can an agent speed up supplier onboarding? Can it automate collecting evidence for audits? And, critically, tying that value back to your overall business goals and your appetite for risk.
Ori Wellington (08:21):
Makes sense.
What's second?
Sam Jones (08:22):
Second is resilience. This is under operational risk management, or ORM. Think of it as building the fail-safes, defining clear triggers for when things go wrong. What are the escalation paths? What are the degraded modes? How does it operate if partially failing? And, crucially, what are the criteria for a human to step in and override?
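As a rough illustration of those fail-safes, the sketch below encodes escalation triggers, a degraded mode, and human-override criteria as a simple policy function. The signals and thresholds are invented for the example, not recommendations.

```python
from enum import Enum

class Mode(Enum):
    NORMAL = "normal"
    DEGRADED = "degraded"        # reduced autonomy, not a hard stop
    HUMAN_OVERRIDE = "override"  # a person must approve every action

def next_mode(error_rate: float, anomaly_flagged: bool) -> Mode:
    """Map runtime signals to an operating mode -- the escalation path.
    The thresholds here are placeholders for illustration."""
    if anomaly_flagged or error_rate > 0.20:
        return Mode.HUMAN_OVERRIDE   # clear criteria for a human to step in
    if error_rate > 0.05:
        return Mode.DEGRADED         # defined degraded mode when partially failing
    return Mode.NORMAL

print(next_mode(0.02, False))  # Mode.NORMAL
print(next_mode(0.10, False))  # Mode.DEGRADED
print(next_mode(0.01, True))   # Mode.HUMAN_OVERRIDE
```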
Ori Wellington (08:42):
Planning for
when things don't go perfectly.
Sam Jones (08:43):
Exactly. Third is assurance, under technology risk management, or TRM. This is about treating the agents themselves as managed assets.
Ori Wellington (08:52):
Like servers or
laptops.
Sam Jones (08:53):
Sort of, yeah. They need to be instrumented for continuous monitoring. You need to be able to revoke their access quickly, and their telemetry, their operational data, needs to feed into your security tools.
Ori Wellington (09:04):
Like your XDR
and your SOC workflows.
Sam Jones (09:05):
Precisely, so your security teams can actually see what these agents are doing.
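As a sketch of what that instrumentation might look like, the snippet below emits one structured event per agent action so it can be shipped to a SIEM or XDR pipeline. The event schema is an assumption for illustration, not any product's actual format.

```python
import json
import time

def telemetry_event(agent_id: str, action: str, resource: str) -> str:
    """Build one structured event per agent action for the SOC to ingest.
    The field names are illustrative, not a real XDR schema."""
    return json.dumps({
        "ts": time.time(),      # when the agent acted
        "agent_id": agent_id,   # which identity acted
        "action": action,       # what it did
        "resource": resource,   # what it touched
    })

# In practice this would be shipped to your SIEM/XDR, not printed.
print(telemetry_event("agent-0042", "read", "erp://suppliers"))
```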
Ori Wellington (09:14):
OK: performance, resilience, assurance. What's the fourth?
Sam Jones (09:14):
The fourth is compliance. This falls under governance, risk and compliance, or GRC. This is about translating those standards we talked about, ISO 42001, the NIST AI RMF, into actual enforceable policies.
Ori Wellington (09:29):
And proof.
Sam Jones (09:30):
And auditable evidence that you're following them. It also means systematically mapping your systems to the EU AI Act obligations based on their risk class. It's about proving you're doing the right thing according to the rules.
Ori Wellington (09:42):
So, putting it all together, this IRM Navigator framework really takes Arora's idea of guardrails and builds it out into a comprehensive management model. It's not just stopping bad stuff.
Sam Jones (09:54):
No, it's about proactively integrating AI to achieve business goals, but doing it within a structure that systematically manages the inherent risks. Performance, resilience, assurance, compliance, all connected.
Ori Wellington (10:03):
Okay, I think I'm getting the picture. It's moving from just security to integrated risk management.
Sam Jones (10:08):
You got it. That's the core shift. So for the leaders listening right now, maybe feeling a bit overwhelmed, what are some practical things, some actionable steps they should be thinking about, say, in the next 90 days? Okay, yeah, let's get practical. Based on everything we've discussed, here are a few concrete steps.
Ori Wellington (10:26):
First, stand up an AI council.
Sam Jones (10:26):
Okay. Put it under your existing enterprise risk management program. This council's job is to set your organization's tolerance for autonomy. How much automated decision-making are you comfortable with? They approve specific use cases for AI agents and they define the metrics the board will use to track performance and risk.
Ori Wellington (10:45):
So a central
steering committee for AI.
Sam Jones (10:47):
Essentially, yes. Second, define your EU AI Act posture. Start classifying your AI systems, and even your suppliers, now. Figure out what obligations you'll face between 2025 and 2027. Don't wait.
Ori Wellington (10:59):
Get ahead of the regulation. Makes sense.
Sam Jones (11:01):
Third, build an agent registry. Seriously, document every agent. Who is its human sponsor? What are its exact entitlements, what can it access, what can it do? And, critically, does it have a readily accessible kill switch?
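One minimal way such a registry could be sketched, with the kill switch as a first-class field; the structure and names are illustrative assumptions only.

```python
# Hypothetical agent registry: sponsor, entitlements, and a kill switch per agent.
registry: dict[str, dict] = {}

def register(agent_id: str, sponsor: str, entitlements: list[str]) -> None:
    registry[agent_id] = {
        "sponsor": sponsor,            # the human who answers for this agent
        "entitlements": entitlements,  # exactly what it can access and do
        "enabled": True,               # the kill switch state
    }

def kill(agent_id: str) -> None:
    """The readily accessible off button: disable first, investigate after."""
    registry[agent_id]["enabled"] = False

register("agent-0042", "jane.doe", ["read:invoices"])
kill("agent-0042")
assert registry["agent-0042"]["enabled"] is False
```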
Ori Wellington (11:13):
An off button.
Sam Jones (11:14):
An immediate off button, just in case. Fourth, pilot ISO/IEC 42001. Don't try to boil the ocean. Pick two or three specific AI use cases and scope the ISO standard for them. Learn from those pilots, then expand.
Ori Wellington (11:30):
Start small,
learn fast.
Sam Jones (11:31):
Exactly. And finally, number five: choose your delivery partners very carefully if you're bringing in consulting firms with their own multi-agent platforms.
Ori Wellington (11:40):
Which you said is happening fast, right?
Sam Jones (11:42):
Make absolutely sure their platforms integrate into your IRM model, not the other way around. Your risk framework needs to govern their tools, not be dictated by them.
Ori Wellington (11:50):
Maintain control
of your own risk posture.
Sam Jones (11:52):
Precisely.
Ori Wellington (11:53):
Okay, so wrapping this up, Nikesh Arora's vision seems spot on. Securing AI agents really does feel like the next big frontier in cybersecurity. It's a huge challenge.
Sam Jones (12:03):
It is, and hopefully what we've unpacked in this Deep Dive shows how integrated risk management, that IRM piece, provides the essential enterprise-wide view. It's what makes those security guardrails actually work effectively at scale.
Ori Wellington (12:14):
So it connects
the security tech to the whole
business.
Sam Jones (12:16):
Yeah, the future of AI agent security isn't just about buying the right security tools. It's fundamentally an integrated management challenge for the whole organization. IRM helps align everything: performance, resilience, assurance and compliance.
Ori Wellington (12:31):
So, as we finish up, here's something for you, our listeners, to think about as these AI agents multiply and weave themselves deeper into your daily operations: what's the single most critical question you need to ask about your organization's readiness to manage this emerging autonomous workforce?
Sam Jones (12:47):
That's the key
question to take away.
Ori Wellington (12:49):
Think about that
and we'll see you next time on
the Deep Dive.