
June 23, 2025 · 26 mins

Artificial intelligence has firmly established itself at the forefront of the cybersecurity agenda, creating both unprecedented opportunities and complex challenges for security leaders. In this eye-opening conversation with cybersecurity veteran Tim Sewell, we dive deep into the realities of implementing effective AI governance and security practices in today's rapidly evolving threat landscape.

Tim shares invaluable insights on how AI has fundamentally transformed the cybersecurity domain, comparing this shift to the rise of desktop computing or cloud adoption. He cautions against the "wild west" approach to AI governance that many organizations have inadvertently embraced, where tools are deployed without proper oversight or awareness. Most concerning is his observation that AI is increasingly being integrated into existing business processes by vendors or partners without explicit notification, creating dangerous blind spots in security programs.

The discussion reveals surprising developments in third-party risk management, where AI tools now handle everything from vendor questionnaires to SOC 2 report analysis. We explore the troubling reality of "AI sending questionnaires to AI that is responding to questionnaires," raising critical questions about trust and verification in our increasingly automated security ecosystem. Tim provides practical guidance for security teams on transparency in AI usage, particularly when making decisions that may later require justification in legal proceedings.

Despite the focus on advanced AI capabilities, Tim emphasizes the continued importance of security fundamentals. He notes that sophisticated nation-state actors are increasingly targeting basic vulnerabilities like buffer overflows and cross-site scripting, especially in critical infrastructure with legacy technologies. For new security leaders, his advice is refreshingly straightforward: identify what you're protecting, assess existing controls, and practice your incident response.

Listen now for essential insights on navigating the AI security landscape, from governance frameworks to practical implementation strategies that balance innovation with risk management. Whether you're a CISO looking to update your program or a security professional wanting to stay ahead of emerging threats, this episode delivers actionable knowledge for securing your organization in the age of artificial intelligence.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
All right, thanks for tuning in to Simplifying Cyber.
I'm Aaron Pritz, I'm Cody Rivers and I'm Todd Wilkinson, and today we're joined by Tim Sewell, who I've known for probably over 10 years now. Longtime cybersecurity practitioner and leader, really great depth of

(00:25):
knowledge and experience in aerospace and defense, healthcare, pharma and consulting.
And we're excited to have three hosts today. With Tim, it's the Dynamic Four, and, yeah, excited to have a great conversation on kind of some of the future of cybersecurity and some of Tim's insights and some recent

(00:46):
evolution of thinking from RSA.
So, Tim, welcome to the show.

Speaker 2 (00:50):
Thanks, it's great to be here, nice to be back.

Speaker 1 (00:53):
Awesome.
So let's start out with kind of a big and broad question.
You know, thinking about CISOs and kind of some of the insights that you learned and discussed at RSA, what are some of the top cyber program opportunities that you see right now for leaders, and what's top of mind for you?

Speaker 2 (01:14):
Yeah, so definitely AI.
The explosion of generative models, artificial intelligence tools and their increasing use in the environment has to be top of mind for pretty much everybody, cybersecurity or not.
A few additional topics: I think we have a rapidly changing

(01:35):
regulatory environment in cybersecurity. That's somewhat unprecedented and should require a little bit more focus than even we've historically given it.
And then I think we have some interesting challenges on the technical side in terms of the rise of quantum cryptography, some of the use of deepfake and other kinds of AI attack technologies.

(01:55):
And then how do we protect this infrastructure that we're building to handle AI and kind of the future of computing?

Speaker 1 (02:03):
That's a lot of great topics.
Let's start, maybe, with AI and governance.
I know everyone does. Well, if you're sleeping under a rock, maybe you don't know of AI or your company is not working on it. But when you say AI governance, how do you set that up? What is that for you? And maybe what are some of the gaps that warrant having governance to kind of moderate the progress?

Speaker 2 (02:27):
Yeah, I think, governance in AI. If you'd asked me a couple of years ago, when, you know, really, the large language models started, we talked to a lot of folks. They said, yeah, we've got an AI policy, we've got governance, we're good to go.
And what we've learned since then is that a policy saying don't put sensitive data in AI is not particularly practical,

(02:52):
nor is it really solving the underlying issue of how we can use these tools holistically.
AI has such broad reach into the enterprise, so many different use cases, that it really requires the organization to come together in kind of a new way. It's almost a new wave of compute, similar to desktop

(03:14):
computers, similar to the rise of cloud, and now we have the rise of AI. It's truly that level of transformation for the business.
So you've got to have all the stakeholders at the table, and it takes time. That takes effort, and it's not necessarily the fun work that people want to do, but it does enable the fun that we can all

(03:35):
have with AI.

Speaker 1 (03:36):
Yeah, so maybe balancing your points on kind of the progress, and then minimizing the mistakes that can defeat progress or slow progress. What are some of the challenges that you've seen in maybe AI not being used right or not being governed well, to kind of justify that kind of governance layer in place?

Speaker 2 (03:56):
Yeah.
So I think in some organizations AI governance is kind of wild west and people are just using it willy-nilly. The organizations that are trying to get their arms around governance, I think there are some really good efforts going on out there, but for a lot of them, one of the common pitfalls is they get stuck in this idea of use cases for AI.

(04:19):
I think it's really important to understand what use case you're trying to bring AI in for.
What that model doesn't cover for most organizations is where you've got a tool, you've got a process that's already in place, but now AI has been introduced into that process, either by the existing technical solution or by one of the partners involved

(04:43):
in that process. And now you've got AI in this business process or in this flow that has been working just fine for however long it's been in place. And if you haven't put governance around that kind of change, if you're not reviewing your processes and your tech stack on a very regular basis to find those

(05:04):
changes, you're using AI in processes where you don't know you're using it. So you've got AI use cases you're unaware of.

Speaker 4 (05:13):
Good stuff.
So a lot of AI talk, a lot of stuff here. I'm a newer CISO, or I've been a CISO for a while, and AI or those aren't my big topics or my deep knowledge. How high on the risk register am I putting AI? What are some kind of foundational things I can start putting in place, and, like, when to start addressing them?

Speaker 2 (05:35):
I would say AI needs to be very near the top, if not
the top, for most organizations today that have a cybersecurity program.
I think there are some cases where there are organizations that don't have a program, and there are some foundational pieces that I might prioritize higher in those cases. But for organizations that have a security program, the way

(05:59):
that this is transforming the attack surface, the threat landscape, the business, it's got to be top of mind, top of list, okay.

Speaker 4 (06:09):
And then kind of going further on that thought, and this is kind of a question for you on certain companies: when is it relevant to have, like, a dedicated AI security person, or when can my existing security team absorb those responsibilities? Maybe, like, the idea of both and pros and cons of each one, but probably a common question people have these days.

Speaker 2 (06:31):
Yeah, I think it varies for the organization.
I would say certainly any organization that is putting out a product that contains AI, or is using AI to deliver their core business function, has a strong argument to have some dedicated AI cybersecurity resource.
I think large enterprises also have a strong case for that

(06:52):
because their exposure to AI is so enormous just because of their size and the number of folks they've got in the organization using these tools on a regular basis.
Of course, it gets a little harder for smaller organizations where resources are constrained. People have to wear multiple hats. It's hard to get a dedicated anything in those conditions.

(07:13):
But I think another challenge with this is that the skill set for how to deal with security and AI is not broadly dispersed or distributed through cyber practitioners. A lot of us in the field are still learning. There's a fairly steep curve at some points for how to deal with the security.
If you've got an organization where that's the case, where you

(07:37):
don't have a lot of cybersecurity expertise for AI in-house already, again, I think that builds a strong case, because it's a significant addition to the estate that you have to deal with. Yeah, you may need those resources sooner than you think.

Speaker 3 (07:54):
Yeah, Tim, you made a comment there that there's tools and processes that the companies have been using, and now AI just may be introduced to them just as part of using that product.
So yesterday we may have had a decision tree where the output to the consumer was an approval or a rejection or something along that line.

(08:14):
That was based on, we made this decision tree. We understand the forks in the road because we were part of making that calculation. But now, all of a sudden, AI is in the middle of that, making decisions.
So there's a clear path where companies are going to have to,

(08:34):
I think, decide or interject, when to say, are we going to trust the decision that AI is going to make, and how do they address that? I think that's a big question that companies are going to have to wrangle with, or are right now.
But let's pivot that inside. Those same things are happening with security tools and security teams, and so they're starting to get that exposure. Do you have any advice on maybe a couple of key steps the

(08:54):
inward-facing teams may need to take to either adopt, or how to start to dip their toes in it a little bit more aggressively?

Speaker 2 (09:02):
Yes, I do, and I think you're absolutely right. The internal use of AI by the security team creates additional challenges.
First thing I would recommend is know where you're using AI in any of your information security workflows, for a couple of different reasons. One, as you said, you've got to be able to justify the decisions that you're making, and from an InfoSec perspective,

(09:25):
that's really important because you're dealing with very precise, very binary truths or fictions. There are cases that have gone to litigation where solid forensic evidence has been kind of tossed out, because at some point in that process AI was used but it was not necessarily

(09:47):
known to have been used or called out by the process.
So if you as a security team can't say, I used AI here for this reason, and here's how I justify that decision, there are instances where things are being tossed out. And if that's happening, you're going to see that ripple further into legislation and regulation and other places.

(10:10):
You've got to be transparent about where you're using the AI.
Yeah, the second thing I would say is you've got to make sure you're dealing with AI that's going to deliver some value versus become a distraction. There are lots of really fun ways you could use AI in cybersecurity. But you've got to go back to the foundation of, how is this

(10:32):
helping me reduce my organization's risk? Can I go back to my stakeholders, my leaders, my board of directors with a clear return on investment for, hey, this is why I'm using this AI tool in this way?
One of the risks with that is it's really easy to go back to cost: I'm saving two FTE of security analyst time by using AI here.

(10:56):
Well, that might be true this year. It's because AI is very early. We're still seeing the early pricing. If you remember Uber in 2015, your ride across town was like 10 bucks. That same ride is probably 70 or 80 now, because we're used to the convenience. I think we've got to anticipate a similar shift in AI pricing.

(11:22):
Maybe right now you're replacing a $100,000 analyst by using a $20 AI subscription. That's not going to stay the case forever, and as a CISO, as a security leader, you need to be anticipating that shift.
So your value for using AI needs to go beyond, I'm reducing headcount or I'm reducing OPEX. It needs to go to, I'm enabling new use cases, I'm blocking

(11:44):
better threats, I'm doing more with the resources I have, versus the simple headcount reduction.

Speaker 1 (11:52):
On that note, are there any compelling areas within cyber where you're seeing more true, valid traction, versus just buzz and agents applied to everything from a tooling standpoint? But where are the hotspots for you with AI in cyber programs?

Speaker 2 (12:09):
Yeah, it kind of depends on how you describe a hotspot. One area I know that there's a ton of AI being used today in cyber programs is the whole area of third-party risk management. So this is an area in cybersecurity. We've helped tons of people over the years build programs in this space.

(12:29):
We understand it pretty well.
There's a lot of information that's generated in this process, both by the vendor and by the consumer. We do all these surveys, we do SOC 2 reports, we do external scans. We collect all this data. We ended up with a data problem. It sounded like a great use case for AI.

(12:51):
So now we have AI tools that will go and summarize SOC 2 reports and will help us analyze these questionnaires and help send them out.
And then on the other side, we now have the situation where the organizations that receive all of those questionnaires are saying, I can't keep up with these questionnaires. I'm going to use AI to fill them out. So we've got tools that will go through your trust center

(13:14):
resources and try to auto-complete the questionnaires you're getting coming in. So now you've got AI sending questionnaires to AI that is responding to questionnaires.
So I think there's a tremendous amount of traction in third-party risk. The value question has to be determined.

Speaker 1 (13:31):
I think there's a smart way. I'm curious your thoughts on this. A few weeks ago, I was at an event where a CEO was kind of humble-bragging about using AI to complete questionnaires for their cyber program that didn't exist. And I think the prompt, again, I don't know why he was bragging to me, but the prompt that he used was like, complete this for having a modest cyber program that was

(13:55):
passable but not super sophisticated, and he was getting responses that were kind of a passing grade, but it was all fabricated.
And I mean, the ex-auditor in me is kind of freaking out saying, okay, that's borderline fraud. But secondly, that's not accurate, and we're just creating AIs talking to AIs that are not actually doing anything meaningful.

(14:17):
So is it fair to assume that the companies that win in this space, and let's stay on third-party risk, are those that use it but don't push it too far? If they free up more time for the smart humans that are probably overqualified to do third-party risk assessment program question matching, could they be pulling other threads that they've never had time to pull to get to the real essence of the risk, or where the

(14:41):
data is within the third party? Where should you care? Things like that. What are your thoughts on that, and have you seen anybody getting it right?

Speaker 2 (14:50):
I think that's a great way to think about it.
Ultimately, I think it comes down to the trust. So in your example with this CEO, right, there's no trust there. You can't trust the reports that he's giving back to the folks asking him questions, because he's just using AI to make up answers and fill it out.
So you've got a trust deficit.

(15:12):
I think it's the tools that can figure out how to close that trust deficit, either by pulling on threads you couldn't otherwise pull, or having some kind of validation or verification that these answers are accurate.
So there are some vendors out in the space that are trying to

(15:38):
do this almost in real time, using a combination of technical controls and posture and validation techniques. But it's all very early startup, unclear how much of it is slideware versus real. At least, I haven't seen it real personally. I hear some promising stories and some good approaches, but to me, with the AI and particularly the third-party space, it comes down to the trust.
How do you trust the answers that you're getting back from

(16:00):
the questionnaires, or how do you trust what you're seeing?

Speaker 4 (16:05):
Provocative question here.
So, like, in a world focused on, like, advanced persistent threats and nation-state actors, how important is a strong basic hygiene program in defending against, like, sophisticated threats? So I'm almost like, back to the basics, right? We get down the road, and, like, the road we're going, is it still the right road? Or is there a kind of a pause and go back to the fundamentals

(16:26):
that are pretty effective against AI. But we were kind of that lost leader, where we're going so advanced that now we're not even on the same orientation anymore.

Speaker 2 (16:37):
I love that question.
You know, I'm kind of an old-school cyber guy from way back, so I really appreciate the basics, the fundamentals, the tackling, the blocking. And I think for a long time we've been focusing as an industry on the more advanced side of things, on more advanced controls, more advanced technology, and what

(17:00):
we're starting to see, particularly as more critical infrastructure gets connected and becomes more aware, more smart. We are seeing attackers, specifically these sophisticated nation-states, that are going back to the pure technical exploit, that are looking for the latest buffer overflow, the cross-site scripting, the malformed packet that is a

(17:21):
technical means to penetrate into an organization, into a control system. And again, especially for these critical infrastructure areas that have a lot of legacy technology that may not necessarily be easy to update. So I think, if you've got any exposure at that level, those basics, those fundamentals become much more important

(17:45):
again, because we are seeing the investment by very sophisticated adversaries in targeting those systems.

Speaker 3 (17:53):
So I got this. It may be a bit of a hot take question, so I'll set it up with that, but a hot take question.
So when I've filled out a lot of these questionnaires, I've had to ask a lot of these questions, and I get down to there's two things I really want to know: where is my data going and how are you protecting it? That's what all these questions we ask, to me, boil down to: those two

(18:16):
pieces, to oversimplify it. But to be more provocative, Todd, how many times do 300-question questionnaires not actually answer either of those questions?

Speaker 1 (18:28):
Yeah, so if I go back to those two questions.

Speaker 3 (18:30):
Do you think AI, at some point, is going to help us
answer those two questions?
Where did my data go and how are you protecting it? Because that's what I want to know.

Speaker 2 (18:42):
And I love the way you asked that question. At some point, absolutely. When that point is, that's where my crystal ball gets a little bit fuzzy. Do we need the cyber Laffer curve?

Speaker 4 (18:56):
Yeah, there you go.
We've got to ask Sarah Connor, man. Maybe she can give us some insight.

Speaker 2 (19:02):
Yeah, I do get worried when I read about the models that fight being shut down and then blackmail the engineers that are trying to do so. That's a little scary. Who gives AI root on the container in which it's running?

Speaker 4 (19:21):
Tim, question here too, and this is not so much AI-focused, but, like, the stakes for CISOs are getting higher and higher, and there's been talk of, like, CISOs considering purchasing professional liability out of their own pockets. What are your thoughts on that, and what does that really say about the current risk landscape for security leaders?

Speaker 2 (19:39):
Yeah, I mentioned that one of the top things I would focus on as a CISO is that changing legal landscape. It's moving more rapidly than I think it ever has, particularly with the rise of AI. You're seeing regulations and laws being passed in all kinds of different jurisdictions that are creating conflicting requirements. It's creating new requirements that you may not be aware of, and

(20:03):
I think, as a CISO, if you don't have a good relationship with a lawyer, now is an excellent time to invest in one. And I think you need to have that relationship both with your internal legal resources. But I think there is some value in having a trusted outside perspective that is independent of the organization that you're

(20:26):
working with.
I also think that, with some of the rise in liability that CISOs are being asked to take on, either by law, by regulation or just the kind of industry stereotype that after a breach the CISO gets fired, you

(20:50):
might want to look at a personal policy around professional liability or umbrella insurance or something like that. Because I think the role of the CISO is misunderstood in a lot of ways, and with that kind of misunderstanding, when there are millions of dollars in fines or regulatory losses or reputational harm at stake, it's good to have a little bit of personal protection beyond

(21:11):
what you might have professionally.

Speaker 3 (21:15):
I think I heard, make sure there's a line item in the budget that says I've got some legal expenses. Make sure I've got that covered.

Speaker 4 (21:22):
Yeah, well, I thought about, like, to your point earlier about, like, data and where's it going and who has access to it, there's someone on both sides of those questions, and there's a lot of financial outcomes based on those two directions and stuff.
So my thought is, like, similar to doctors, like anesthesiologists, right, they've got very high malpractice insurance, because if they make a mistake, it's more of a life

(21:43):
in that scenario.
On the business side, it's like, you know, we see a lot of companies that do or don't get contracts based on the third-party risk questionnaires they put forth. They get to say yes or no, and that could be due to a CISO that may have altered those things. And so, just kind of not directly aligned but kind of adjacent to our conversation today, these decisions become more and more financially incentivized, and the outcome.

(22:06):
So I just want to get your opinion.

Speaker 2 (22:08):
Yeah, I think cybersecurity has some really disorganized incentive models, and what I mean by that is that the people that reap the rewards of taking a lot of cyber risks are not the people that are impacted when that risk is materialized. And I

(22:29):
think, because of that, you've got to be aware that that's the
case.
There's an asymmetry.
People take a lot of risk, they get a lot of reward, but if that risk happens, it's not their data that was lost, it's yours, it's mine, it's everybody else's. And because of that, it takes a lot of preparation and planning.

Speaker 4 (22:48):
Yeah, kind of wrapping up here, like, thoughts here: newer CISO, I get to do three things. Right, you know, I should make it two. I get to do two things this year around getting my cybersecurity program built around AI. What are those two things I'm doing this year?

Speaker 1 (23:07):
Can he wish for more wishes?
No, he can't.

Speaker 2 (23:09):
He can't. Absolutely not. Absolutely not. Wish for less wishes, and then the negative integer wraparound will get you lots of wishes that way.
Oh, if I could only do two things as a new CISO, and of course it always does depend on where you start, right?

(23:29):
Absolutely, I would say the investment on the legal side: understanding the landscape, the risk, the exposure, both for my organization as well as me personally. I think that legal investment would be pretty near the top for me.
If I could only do one other thing, I would probably practice

(23:55):
against an AI attack.
By that I mean a tabletop or a simulation of what's going to hit me. If I've only got those two things, let's be ready to get
hit in the face.

Speaker 4 (24:11):
I support that.
I like that.
You know, like I think if you got on a cruise, you didn't hit it in the face.

Speaker 1 (24:15):
I'll keep that in mind, Cody.

Speaker 4 (24:18):
Like a cruise ship. It's like, you know, the first thing when you get on, and I hope to never have a cruise go down or a fire, but the first thing is a fire drill. Right, we'd hope it doesn't happen, but we can't assume it's never going to happen. So the first thing we do is let's just do an incident response drill and make sure it doesn't happen, or if it does, we know what to do. So good answer.
I like that.

Speaker 1 (24:36):
Awesome. Tim, maybe one last wrap-up question: what would be your advice to a new CISO or CIO that just inherited cyber, or, let's be provocative, a CEO that inherited cyber because the prior reporting mechanism didn't work? What's your advice to them, maybe those that are not as close to it, with all your years of experience, of how you'd start

(25:00):
that conversation with them, or have that coaching conversation of, what do I even do here, Tim?

Speaker 2 (25:07):
That's a big question. You've got to start with the basics. What am I protecting? Whether that's business process, whether it's assets, whether it's information, whether it's people. You've got to start out with, what am I protecting?

Speaker 2 (25:22):
And I'm not saying you have to have 100% clarity and a perfect asset inventory, but you've got to have at least a business understanding: this is what I'm here to protect.
The second part of that story then becomes, what do I have in place to protect this today? Do I have anything in place for all of these critical assets?

(25:43):
And then I've got to use my business acumen now to prioritize where I've got gaps. And then the third thing I would do again is practice. You can get hit in the face at any time. So if you've never been through that, if you've never thought about that, that's what I would do. Next, I'm going to understand, what am I protecting? What have

(26:05):
I got in place to protect it?
And then, what do I do if something really bad happens to the things that I'm trying to protect?

Speaker 1 (26:12):
Awesome. Tim, thank you for joining the show. This was a great conversation. I learned a few things and always appreciate time with you.

Speaker 4 (26:18):
Awesome.
Thank you, Tim. It's been fun, guys. Thanks. Bye.