Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Sam Jones (00:00):
In our digital world,
everything just seems to be
moving faster and faster, doesn't it?
It's almost dizzying.
Ori Wellington (00:05):
It really is.
The pace is incredible.
Sam Jones (00:07):
And you know, with that speed comes this really critical, maybe even unsettling question: how do we actually manage risk when things are happening faster than any person can possibly react?
Ori Wellington (00:19):
Yeah, it's like
trying to catch smoke sometimes.
Sam Jones (00:22):
Exactly.
It feels like trying to, I don't know, manage a flood with a teacup. The sheer speed just overwhelms the old ways of doing things.
Ori Wellington (00:29):
Uh-huh, manual
processes just can't keep pace.
Sam Jones (00:32):
So today we're going to dive deep into this really profound shift happening in risk management. We're talking about moving away from, let's say, human speed reactions.
Ori Wellington (00:42):
Which are often
too slow.
Sam Jones (00:43):
Right, too slow, and moving towards machine speed foresight and, crucially, response. This whole area is being called autonomous integrated risk management, or autonomous IRM for short.
Ori Wellington (00:55):
And for this deep dive we're drawing heavily on some fantastic insights from a key source. It's called Autonomous IRM: Orchestrating Risk at Machine Speed, put together by Wheelhouse Advisors, and it really digs into how companies like CrowdStrike, for instance, are, well, pioneering this new era using something called agentic AI.
Sam Jones (01:16):
Agentic AI. Okay, we'll definitely need to unpack that. So our mission for you listening in is pretty clear. Today, we're going to unpack what this autonomous IRM thing really means.
Ori Wellington (01:26):
We'll look at
the new capabilities it unlocks.
Sam Jones (01:28):
Yeah, and we'll lay out this architectural blueprint that organizations apparently need if they want to adopt it.
Ori Wellington (01:34):
And, crucially, touch on the big challenge for companies trying to make this leap.
Sam Jones (01:39):
Right, so get ready. You're gonna get a real shortcut to understanding a super cutting-edge topic, one that honestly demands attention right now.
Ori Wellington (01:46):
Absolutely, it's
moving fast.
Sam Jones (01:47):
OK, so let's start unpacking this paradigm shift. The core problem, as that Wheelhouse source points out, seems simple but profound. The speed of risk has just flown past human decision-making.
Ori Wellington (01:59):
Completely. We're talking about these agentic systems. They can spot an incident and react in literally seconds. Seconds, wow, okay.
Sam Jones (02:08):
So for listeners maybe hearing this term for the first time, these agentic systems, what exactly are they? How are they different from, say, the AI alerts we've maybe gotten used to?
Ori Wellington (02:19):
Right, good question. So agentic AI systems? They don't just send you an alert like, hey, look at this. They autonomously assess the situation, they can actually act on it and even learn from it. So they do things, yes, within predefined rules and parameters, of course, but they make decisions and take actions. That autonomy, operating at machine speed, is really the
(02:41):
catalyst driving this whole shift.
Sam Jones (02:43):
And CrowdStrike. You mentioned them. They're right out front with something called Charlotte AI.
Ori Wellington (02:47):
That's right. Charlotte AI is what they call their agentic AI architecture. It's now built into their Falcon platform, and it offers this triad of capabilities, as they put it: agentic detection triage, agentic response and agentic workflows.
Sam Jones (03:00):
Okay, that triad sounds comprehensive. Let's take one, say agentic detection triage. How does that really change things for a security team on the ground, beyond just getting alerts faster? What's the, you know, the machine speed insight here?
Ori Wellington (03:14):
Well, think about it. Instead of a human analyst, maybe overwhelmed, sifting through thousands of alerts, trying to connect the dots, deciding what's important.
Sam Jones (03:21):
Yeah, that sounds
exhausting.
Ori Wellington (03:22):
It is. The agentic AI system does that triage instantly. It prioritizes, it adds context, it might even kick off some initial containment actions automatically. True, no human needed for that first step. Wow. So the core insight: it shifts the heavy lifting of analysis and initial action from human to machine. The human role becomes more about oversight, setting the
(03:46):
strategy, defining the policies, not being in the weeds of every single alert, second by second.
Sam Jones (03:49):
Okay, that makes sense. So that triad, detection, response, workflows, it sounds like it covers the whole automated response cycle. Now, how does integrating that into the bigger picture, the whole integrated risk management, or IRM, framework, how does that change things for the entire business?
Ori Wellington (04:04):
Ah, see, that's where it gets really profound, because we're moving way beyond just smarter security alerts or faster post-incident digging.
Sam Jones (04:13):
Right.
Ori Wellington (04:13):
These are decisions made by machines, decisions that have immediate consequences for governance, for compliance, for day-to-day operations. Think about it: a machine action could trigger a business continuity plan or impact a third-party relationship instantly.
Sam Jones (04:30):
Okay, so the ripple
effect is potentially huge and
immediate.
Ori Wellington (04:33):
Exactly, and that absolutely demands a completely new way for enterprises to oversee risk. You can't manage machine speed decisions with monthly committee meetings.
Sam Jones (04:42):
Yeah, that seems
obvious.
Now you say it.
Ori Wellington (04:45):
So CrowdStrike, they've essentially built, as the source says, the signal and execution layers. That sounds like a huge step.
Sam Jones (04:52):
It is, technologically.
Ori Wellington (04:53):
But if they've built that, what's the really hard part now for the, let's call it, the broader IRM ecosystem? Translating those super-fast, autonomous security events into joined-up, auditable business responses. Where are the headaches going to be?
That's precisely it. It's the orchestration challenge. CrowdStrike provides the lightning-fast signal and response at the security level, the sort of nervous system
(05:15):
impulse.
Okay. But the rest of the IRM world, the business processes, the compliance checks, the operational adjustments, needs to be able to receive that signal, understand its business meaning and then coordinate the right responses across all the different functions: legal, finance, operations, vendor management, everyone.
Sam Jones (05:34):
So connecting the
security action to everything
else it touches.
Ori Wellington (05:37):
Yes, and doing it seamlessly, auditably and at that same machine speed. The headache is bridging that gap between the isolated security action and the fully integrated, enterprise-wide risk management response. It's about weaving those autonomous decisions into the fabric of existing business policies and controls without slowing things down.
Sam Jones (05:57):
Okay, that orchestration challenge sounds significant. Now, to help us sort of visualize how to tackle that, the 2025 IRM Navigator Viewpoint Report introduces this idea of five functional layers of autonomous IRM. You can think of this, maybe, as the architectural blueprint risk leaders need. It maps everything from the high-level strategy down to the real-time controls.
(06:18):
It's designed to ensure these autonomous actions don't just happen in a vacuum.
Ori Wellington (06:23):
Exactly, it
provides structure.
Should we walk through themquickly?
Sam Jones (06:26):
Yeah, let's do that.
Start at the top.
Ori Wellington (06:28):
Okay, layer one, strategic oversight. This is the highest level. Its job is to make sure everything aligns: risk appetite, where the money goes, business priorities. It all needs to line up with the overall company strategy.
Sam Jones (06:40):
And this is squarely in the realm of ERM, enterprise risk management.
Ori Wellington (06:46):
Correct, primarily focused on performance and resilience. Think of it as setting the strategic guardrails, the big-picture rules for any autonomous systems operating below.
Sam Jones (06:55):
Got it. Okay, moving down, layer two.
Ori Wellington (06:57):
Layer two is business orchestration. Now, this is where that coordination piece we just talked about really happens. It's about taking those risk signals, maybe from layer three or four, and routing them across the right business functions.
Sam Jones (07:09):
Ah, so making sure
the right teams get notified and
act together.
Ori Wellington (07:12):
Precisely. Driving coordinated mitigation, making sure operational execution happens smoothly. This is operational risk management, ORM, territory, and again the goals are resilience and performance, getting that synchronized business response.
Sam Jones (07:26):
Okay, makes sense.
Ori Wellington (07:27):
Layer three: threat intelligence and validation. This is fascinating. Here you're using AI, real-time data feeds, threat modeling, basically simulating attacks and stress-testing your systems constantly.
Sam Jones (07:40):
So proactively poking and prodding to find weaknesses.
Ori Wellington (07:44):
Exactly
Dynamically validating your
actual exposure.
This sits mainly in technology,risk management, trm, and the
objectives are resilience,obviously, but also assurance,
knowing your defenses areworking.
And here's a critical point.
The source notes thatCrowdStrike's Charlotte AI
performs vital functions righthere in this layer, providing
that real-time intel andvalidation needed before an
(08:06):
autonomous action is taken.
Sam Jones (08:08):
Ah, okay, so it's not just reacting, it's informing, the validation before the reaction. That clarifies layer three. So if a threat gets validated there, what happens at layer four?
Ori Wellington (08:16):
Layer four is remediation and response. This is where the autonomous action kicks in, based on those predefined policies and thresholds set higher up.
Sam Jones (08:24):
Okay, so what does autonomous mitigation actually look like here? What kind of actions are we talking about?
Ori Wellington (08:29):
Could be a range of things. Maybe isolating a user account that seems compromised, that's identity isolation. Or perhaps automatically triggering business continuity protocols if a critical system seems under attack.
Sam Jones (08:41):
Okay.
Ori Wellington (08:41):
It could even involve escalating alerts or actions to third-party vendors if the risk originates with them. This layer involves both TRM and ORM, technology and operational risk, because the actions have both tech and process implications.
Sam Jones (08:56):
And the goals are
resilience and compliance,
presumably.
Ori Wellington (08:59):
Yes, resilience and compliance are key. And crucially, just like layer three, Charlotte AI is highlighted as performing critical functions within this layer too, executing those rapid, approved responses.
Sam Jones (09:10):
Got it, executing the plan. Which brings us to the final layer, layer five, verification and audit. What's the main job here, especially when machines are doing the acting?
Ori Wellington (09:21):
Layer five is all about accountability and proof. Its purpose is capturing the evidence of what happened, what the machine did, why it did it, based on what policy.
Sam Jones (09:31):
Okay, the digital
paper trail.
Ori Wellington (09:33):
Exactly. Aligning those actions back to specific controls and providing real-time assurance, not just for internal managers but potentially for auditors, regulators, other external stakeholders too. This is the GRC, governance, risk and compliance, domain.
Sam Jones (09:48):
And the objectives are assurance and compliance. Makes sense.
Ori Wellington (09:52):
Assurance and compliance, yes. Making sure everything is documented and verifiable.
Sam Jones (09:56):
So if we kind of tie a bow on these five layers, the whole point of autonomous IRM, structured this way, is to make sure these super-fast, machine-executed decisions aren't just happening randomly.
Ori Wellington (10:07):
Right. They need to be authorized based on strategy.
Sam Jones (10:09):
Absorbed into the
whole risk picture.
Ori Wellington (10:11):
Scored for
impact, escalated if needed.
Sam Jones (10:14):
Documented for audit and, importantly, used to learn and improve the system over time. Because without that coordinated system across all five layers, these powerful autonomous actions, like from Charlotte AI, could just end up being isolated incidents.
Ori Wellington (10:28):
Exactly. They'd be unmanaged events, potentially causing new risks, which totally defeats the purpose of integrated risk management. You need the whole structure.
Sam Jones (10:37):
Okay, that framework is clear, but now we hit this almost paradoxical point you mentioned earlier. The technology, like Charlotte AI, seems ready. It's capable of operating in those crucial layers three and four.
Ori Wellington (10:51):
Yeah, the tech
is moving incredibly fast.
Sam Jones (10:54):
But organizational readiness, that's often a completely different story, isn't it? What's the biggest disconnect you see there?
Ori Wellington (11:01):
That really is the crux of the matter now. The tech capability is leaping ahead, but organizations are struggling to keep up internally.
Sam Jones (11:08):
And the source mentions the IRM Navigator maturity curve as a way to sort of diagnose this.
Ori Wellington (11:14):
It's a useful lens. Think of it as stages, moving from basic, reactive risk management on one end towards fully integrated, predictive, autonomous systems on the other. And the key finding, or maybe the warning from the source, is that while the technology, like Charlotte AI, represents a catalyst pushing towards those higher stages, operating layers three and four, most IRM programs today are actually, and
(11:35):
this is the quote, stalled between coordinated and embedded stages on that maturity curve.
Sam Jones (11:41):
Stalled. Okay, for our listeners, what does that stall really mean in practical terms? Is it just a slight lag, or is it a major problem preventing them from actually using these new tools effectively? What do those stages, coordinated and embedded, even look like?
Ori Wellington (11:57):
It's a critical choke point, I'd say. Being stalled there means maybe organizations are still just trying to get their different risk and compliance functions to talk to each other consistently. That's the coordinated stage.
That's the coordinated stage.
Sam Jones (12:10):
Silos are still a
problem.
Ori Wellington (12:11):
Big time. They might have some automated tools in pockets, maybe in security, maybe in compliance, but they lack that overarching orchestration framework, the embedded stage, to connect a machine-speed event in one area to the necessary business impact analysis and coordinated response across the whole enterprise.
Sam Jones (12:28):
So they can't
translate the signal properly.
Ori Wellington (12:31):
Exactly. It prevents them from moving beyond reacting in fragments, often at human speed, to operating as a truly integrated, resilient and increasingly autonomous organization. The stall means they can't fully leverage the power of tools like Charlotte AI.
Sam Jones (12:47):
So if the tech's ready, why the stall? Is it just about needing bigger budgets for new IRM platforms, or is it something deeper?
Ori Wellington (12:55):
Budget is always a factor, of course, but the source makes it clear: it's often much deeper than just technology or money. We're talking about significant structural barriers, like org charts, yes, and cultural barriers too, how people think about risk, and leadership buy-in, or lack thereof. In many places, risk management is still treated primarily as an audit function.
Sam Jones (13:16):
A check-the-box
exercise after the fact.
Ori Wellington (13:18):
Pretty much. A compliance thing done periodically. It's not seen or run as a dynamic operational system that needs to be woven into the fabric of everyday business decisions and, now, machine-speed actions.
Sam Jones (13:29):
So it's a fundamental
mindset shift that's needed.
Ori Wellington (13:31):
Absolutely. From the very top of the organization down.
Sam Jones (13:34):
Which means to really get to autonomous IRM, you have to shift focus. It's less about just documenting risks after they happen.
Ori Wellington (13:42):
And much more
about orchestrating responses in
real time.
Sam Jones (13:45):
And moving from
looking at compliance as
snapshots in time.
Ori Wellington (13:49):
To ensuring compliance is built into the real-time execution. It's a completely different way of operating, really. Essential to keep pace.
Sam Jones (13:57):
And this isn't some far-off future scenario, is it?
Ori Wellington (13:59):
No, not at all. The source is really clear on this. It says, and I think this is worth quoting again: this is live, production-level activity initiated by AI, executed within security platforms and demanding immediate reconciliation across policy, continuity, third-party and assurance domains.
Sam Jones (14:17):
It's happening now.
Ori Wellington (14:18):
It's happening now. The autonomous actions are real, and the need to integrate them into the broader risk picture is immediate. Organizations have to adapt their IRM programs to handle this reality today.
Sam Jones (14:28):
Okay, so let's get practical then. For an IRM program wanting to operate at this new machine speed, what are the concrete steps they need to take? The source lays them out, right?
Ori Wellington (14:36):
It does. First, they absolutely have to be able to ingest this new kind of data, the agentic telemetry, coming from systems like Charlotte AI.
Sam Jones (14:45):
Okay, get the data in. Step one.
Ori Wellington (14:47):
Step two: translate that raw signal into meaningful risk context. What does this alert mean in terms of our risk thresholds? Which controls are relevant? Which business units or user personas are impacted?
Sam Jones (15:00):
Add the business meaning. Makes sense.
Ori Wellington (15:02):
Third, based on that context, trigger the right real-time workflows. And, crucially, these workflows need to cut across different IRM domains, security, IT risk, operational risk, compliance, and potentially across different software platforms too.
Sam Jones (15:16):
Okay, coordinate the
action.
Ori Wellington (15:17):
Fourth, capture the evidence. Those machine-driven actions need to be logged automatically as formal evidence for audit trails and compliance reporting. Can't lose track of what the machine did.
Sam Jones (15:25):
The verification piece we talked about in layer five.
Ori Wellington (15:27):
Exactly. And finally, fifth, learn and adapt. Use the outcomes of these autonomous actions, the feedback, to continuously adjust the policies, the risk models, the thresholds. It's a closed-loop system.
Sam Jones (15:40):
Ingest, translate, trigger, capture, adjust. Sounds like a cycle.
Ori Wellington (15:45):
It has to be. That's how you build a resilient, adaptive system.
Sam Jones (15:48):
And tying this back to the bigger picture, the organizations that can actually build these capabilities across those five functional layers we discussed.
Ori Wellington (15:56):
And manage to climb up that IRM maturity curve, getting past that stall point into stage five, the truly autonomous stage.
Sam Jones (16:04):
They're the ones who
are going to have a serious
advantage.
Ori Wellington (16:07):
A decisive advantage, I'd say. Not just in handling today's crazy-fast threats, but really in designing genuine resilience for whatever comes next. They'll be architecting for the future.
Sam Jones (16:17):
Okay, so let's try and bring this all together. CrowdStrike, with tech like Charlotte AI, has effectively built the nervous system, you called it.
Ori Wellington (16:25):
Yeah, providing that incredibly fast signal and execution capability, the detection and the immediate response.
Sam Jones (16:32):
But that's not the whole story. The enterprise, the organization itself, now has the job of building the, what did the source call it? The musculature and memory.
Ori Wellington (16:41):
Exactly. That's the critical next step: building the architecture, the processes, the connective tissue that takes those rapid nerve signals and turns them into coordinated, effective strength in action, embedding that capability deep within the organization's operations.
Sam Jones (16:57):
So autonomous IRM, it's not really just about plugging in a new piece of AI tech.
Ori Wellington (17:03):
Not at all. It's fundamentally about building that connective tissue, making sure risk intelligence doesn't just sit in one place but actually moves, drives action and helps the entire enterprise learn and improve constantly.
Sam Jones (17:15):
And the five
functional layers give you the
blueprint for that structure.
Ori Wellington (17:18):
And the IRM Navigator maturity curve kind of gives you the map, showing you the journey you need to take.
Sam Jones (17:23):
So what's missing for
most organizations right now?
Ori Wellington (17:25):
Well, as the source puts it so bluntly, what's missing is execution, orchestration, integration. And that execution is no longer optional, it just isn't, because the risk environment? It's definitely not waiting for anyone to catch up.