Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Chris Romeo (00:09):
Welcome to the Security Table. This is Chris Romeo. I'm also joined by my co-hosts and good friends, Matt Coles and Izar Tarandach, and we've got a special guest today. Someone who most people in AppSec will already know, but Jim Manico is joining us today. And so, Jim, in case there are a few folks out there who maybe don't know who you are, just do a quick intro.
Jim Manico (00:31):
My name's Jim Manico. I've been a software developer since the nineties, and I'm a software and application security educator. Chris, I love application security. It's my job and it's a lot of fun. That's me in a nutshell.
Chris Romeo (00:46):
And I can attest, I've seen you have a lot of fun doing application security at various events over the years. But let me set up how we got to this conversation. Okay, so the year, I think, is 2022, and Izar and I happened to be in Austin, Texas for LASCON. A certain speaker named Jim Manico is up delivering the
(01:09):
keynote.
And we had heard and seen things in the past that Jim had said about threat modeling that made us think, ah, maybe he wasn't a giant fan of it. But then a slide pops up on the screen where Jim references threat modeling and the Threat Modeling Manifesto, and he says, oh, you know, I've kind of changed my mind some on threat modeling.
(01:29):
I see the value proposition in it. And Izar turns to me, he's literally sitting next to me, we're at a round table, and he says, did he just say what I think he said? I looked back at Izar and I said, I think he did. I think he just said he's supporting threat modeling.
(01:50):
And so for us it was like, we were thinking you were in a different kind of mindset when it came to threat modeling and didn't see the value in it. And so we said, we've got to get Jim on this show, we've got to unpack this. So we've got to get in the Wayback Machine here, for those people that liked cartoons in the seventies and eighties, or the DeLorean or whatever. We're going to get in here.
(02:11):
But let's travel back in time, Jim, and let's start out by setting the stage with this story. When you first started doing application security, how did threat modeling intermingle into it? What was your opinion of it over time?
Jim Manico (02:24):
I started my career in the nineties and started doing security services and application security type work about 10 years later, right in the early-to-mid two thousands. And I saw a lot of consultants beginning to ramp up and do threat modeling, and the companies were very interested in it and were spending a great deal of money on it. So I want to start by going back and defining threat modeling as
(02:47):
it was 20 years ago. So I go ask ChatGPT, give me a snarky version of threat modeling, and it came back, and I'm like, ChatGPT, you see me. Ah, threat modeling: the art of gathering a bunch of highly overpaid consultants in a room, armed with fancy whiteboards and colorful markers, to brainstorm all the ways the
(03:09):
system could be compromised. It's like a brainstorming session for the paranoid, but with more jargon, less common sense. And it's extremely expensive. So real quick, we'll start with a fancy diagram of your system. Don't worry if it's not accurate; the point is to make it look really complicated. The more boxes and arrows the better. This way we can charge you more for each box we analyze. I'm going to keep going.
(03:31):
Identifying threats: this is where we list all the possible threats, from the highly possible to the utterly ridiculous. Alien invasion compromising your data center? Let's add that into the threat model. It's all billable hours anyway, right? Risk assessment: we're going to rank threats based on how scary they sound, not necessarily their actual risk. Remember, the goal is to make everything urgent, so you'll
(03:53):
keep us around longer. Mitigation strategies: this is where we recommend a bunch of expensive tools and solutions. Whether they fit doesn't matter; what's important is that they come with hefty licensing fees and require extensive training, which, by the way, we offer. Review: almost done. After all the hard work, we're going to present our findings in a thick report filled with
(04:14):
jargon and complex diagrams, and most of it is total fluff you don't need to read, but it's going to look really impressive on your office shelf. And last, the follow-up. Of course, threat modeling is an ongoing process. That means we need to schedule regular follow-up sessions at our premium hourly rate to update the model with new, even
(04:35):
more far-fetched threats. That's the threat modeling that I've been railing against for decades, and believe me, Chris, that's a joke, but I've seen it in the real world. So when you want me to talk about threat modeling, the most important thing for me is to start with: what does that mean for you as a company? What does that mean for you as a consultant or a security team,
(04:57):
and how do you go about it? And the reason I've been changing my tune is because the tools I see for threat modeling, the tools to analyze software and give me diagrams, the tools to track risk in this area, have gotten extraordinarily better in just the last 20 years. And so that's why I'm changing my tune.
(05:18):
So when you hear me snark, that snark story is what I've seen in the real world many times, which is why I was a naysayer for many years. I'm less so now, Chris, because...
Chris Romeo (05:31):
No, I
Jim Manico (05:31):
I'm a student. It's not about religion. This is about science, and as I get new information and see new data and see new processes, like any good scientist I should change my mind, and change it quickly as I see other evidence. And I've certainly seen that.
Chris Romeo (05:50):
Yeah, so it's primarily the impact of new tools and technology that really drove this. What were you seeing in these tools and technologies that didn't exist before that caused you to make this change?
Jim Manico (06:07):
I won't quote names of tools, to protect the innocent, right? But I've seen some tools with really extensive threat classification built in. So I can describe the kind of application and business I have, and it'll give me a really good threat classification to be concerned with for that kind of business. I used to have to pull that out of the air to figure that out.
(06:29):
Now there's really good threat classification in various tools. Number two, I'll mention a few companies I like: Blast, Octo, and Levo.ai. These are all next-generation API companies. They're not even threat modeling companies, but they're installing agents and services, and I can click a button, let them run for a bit, and get a dramatically accurate and
(06:51):
impressive understanding of that microservice architecture, the exact data flows between them. That used to take me 20-plus hours in a threat modeling session. And I have tools that make generation of diagrams a lot easier and far more accurate. And the other thing is just process. Threat modeling
(07:12):
used to be Cigital, Adam Shostack, and a couple of others, like Jim DelGrosso, and that's about it. And now there are hundreds of thousands of professionals that participate in and do threat modeling, and the more we do it, the better we get at it as an industry. This isn't your first threat model, Chris, you've done a few. And if I go back, if I sit in on your
(07:34):
first threat modeling session ever, Chris, I'd have probably thrown you out the window and fired you. And if I watched you do threat modeling today, my guess is I'd be extraordinarily impressed with your work, 'cause you've been doing this for... you're old and gray, you old man. You've learned. You've learned. So am I, buddy. So am I. You've learned a lot.
(07:54):
So process has gotten better, the tools have gotten better, threat classification's gotten better. And using all these more modern things, we can make threat modeling extraordinarily more valuable than it was even 10 or 20 years ago.
Chris Romeo (08:09):
Yeah, and I'm hogging the microphone here, so Matt and Izar, jump in at will, but I'll hog the microphone as long as I can, you
Izar Tarandach (08:16):
No.
Chris Romeo (08:16):
know.
So, um, the Threat Modeling Manifesto. I know you mentioned that on your slide. That was another thing Izar and I saw up there, and we were like, oh, that's cool, 'cause the three of us here are all co-authors, with a number of other folks, of the Threat Modeling Manifesto. So what was the impact of the Threat Modeling Manifesto when you took a look at that, and
(08:37):
how did that kind of get into your thinking about threat modeling?
Jim Manico (08:40):
It was really good to see so many professionals that were competitors in the world of threat modeling coming together to talk about why this is important, right? They defined threat modeling for the first time in a way that I thought was reasonable, right? What are the four key questions?
(09:01):
What are we working on? What can go wrong? What are we going to do about it? Did we do a good enough job? They talk about why to threat model. They talk about who should threat model. They talk about the values of a good threat modeling team and consultancy, and what the ethics of doing threat modeling are. They talk about the core principles of threat modeling. For the first time, threat modeling in my world went from a
(09:24):
bunch of overpriced consultants billing me $350 an hour to do fluff to... Matthew's like, where do you get that rate from? Talk to me.
Matt Coles (09:33):
I don't remember getting that much for...
Jim Manico (09:40):
They took threat modeling from a real fluffy hourly-rate thing. This is the first time, I think, we're really moving: we see competitors in the industry agreeing on what a more ethical, cost-effective, and effective set of processes is going to be. They also talk about anti-patterns.
(10:00):
Like, get rid of the hero threat modeler. Focus on practical solutions. Be careful about over-focusing. I mean, the things that I was concerned about with threat modeling, they addressed without any bullshit, pardon my language, without any BS, in the manifesto itself. And I thought that was really impressive.
(10:21):
They addressed my concerns about the waste that threat modeling could be. That's why I was impressed with it.
Matt Coles (10:27):
So, Jim, can I just jump in here, Chris? What do you think about bringing threat modeling to developers, making it part of everyone's everyday life, as opposed to something you hire a consultant for, to come in and, as Izar likes to say, parachute in and solve world hunger, for
(10:49):
obviously a lot of money.
Jim Manico (10:52):
All developers do not need to be a part of threat modeling. I want my developers writing code and building solutions. I believe threat modeling is most effective when I'm about to build a new project: I want the architect and the lead there, not every developer, right? Or when I'm about to make a very large architecture change, I want some of the leads. I don't want the whole team, right?
(11:13):
Traditional threat modeling, they parade every single developer in and ask a bunch of questions. I think that's a huge waste of developer time. So I want my developers writing code, and I'll have my architects or the lead do threat modeling with the security folks. Typically, that's my take on it.
Izar Tarandach (11:30):
Time for me to jump in.
Jim Manico (11:34):
Got what you got.
Matt Coles (11:35):
you the hand grenade.
Jim Manico (11:36):
Developers threat modeling, that's great. That's...
Izar Tarandach (11:38):
So, Jim, look.
Jim Manico (11:40):
Write code while you guys are all sitting in threat modeling meetings for a day. Go.
Izar Tarandach (11:43):
So listen, man. Long-time fan, first-time arguer. So basically what I'm hearing from you is that you are, in a way, disagreeing with yourself. There's that famous tweet of yours
(12:03):
that I use, that Avi Douglen uses, and that a lot of other people I know use to justify a lot of effort by developers, where you point out that nowadays every developer is basically responsible for whatever security is in the company, because their code is on the front line, right? And I agree with that wholeheartedly. The thing where I go head to head with you is that I think
(12:27):
that once you put that forward, saying that every developer is writing stuff that's on the front line, to give an exception to certain developers, or to elevate some people because they are architects, doesn't really agree with what we see in the field, where the architect gives a direction and the
(12:49):
developer, when they are developing, is forced to make a number of architectural decisions that might, and do, influence the threat model overall.
Jim Manico (12:59):
That's a good point. I mean, what I'm trying to say is it depends on a lot of factors. Some organizations are going to pick the architect, 'cause part of threat modeling is establishing good security architecture patterns that developers follow. So if your company's mature, has architects, and sets a security
(13:20):
architecture document that's valuable, then I would deliver that to the developers. I believe in developer training. I want all of my developers to go through security training; we all do, and we do a good job. So I want all developers to be educated about security basics. But I don't need to spend four hours with every developer threat modeling the authorization code flow and
(13:42):
PKCE for OAuth 2. But I do need to do that with the identity architect and some of the devs working on that. So I just don't want the entire team to do threat modeling. I want the entire team to be educated on security and go through regular developer training of some kind. There are many options, but again, I don't want to take my user interface React developer
(14:04):
and have them spend four hours on OAuth 2 architecture threat modeling. That's not their world. I want them to understand how to do React security: dangerouslySetInnerHTML, using a sanitizer, using my types and prop variables properly, making sure I validate URLs that enter an active context. That's not threat modeling.
(14:24):
That's technical education.
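Jim's React-side list can be made concrete with one small sketch of the URL-validation point; the function name, scheme allow-list, and base origin below are illustrative assumptions, not anything from the episode:

```typescript
// Sketch of "validate URLs before they enter an href" for a React-style UI.
// Blocks schemes like javascript: that turn a link into an XSS payload.
// The function name, allow-list, and base origin are assumptions.
const ALLOWED_SCHEMES = new Set(["http:", "https:", "mailto:"]);

function safeHref(untrusted: string): string | null {
  let parsed: URL;
  try {
    // Resolve relative paths against a fixed origin so "/docs" stays usable.
    parsed = new URL(untrusted, "https://example.com");
  } catch {
    return null; // unparseable input is rejected outright
  }
  return ALLOWED_SCHEMES.has(parsed.protocol) ? parsed.href : null;
}
```

A component would then render something like `<a href={safeHref(userUrl) ?? "#"}>` instead of passing the raw string through, which is exactly the kind of technical education, rather than threat modeling, being described here.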
So I like role-based security, where the different members of my team are going to do threat modeling only if it's appropriate for the role. Otherwise, I'm being extremely wasteful. That React guy who's busting it trying to get me a good UI, sitting in a long threat modeling meeting, is a waste of time, and it's bad to waste engineering
Izar Tarandach (14:49):
Right,
Jim Manico (14:50):
time.
Izar Tarandach (14:50):
Right.
Jim Manico (14:50):
So yes, I believe in threat modeling. Yes, developers should do threat modeling, but my big problem is how wasteful it's been. I want to bring that in so I'm doing it cost-effectively, especially as the economy changes, especially as things begin to dip. Efficiency in security is going to be the big
(15:11):
buzzword for the next two or three years.
Izar Tarandach (15:13):
And I think that basically, when you put it like that, we agree. The thing is...
Jim Manico (15:17):
I like that. Wait, what did you say, Izar? Can you say that louder? I didn't hear you. What was that?
Izar Tarandach (15:22):
It's not the first time that I agree with you. It's just that you never got to hear it. But my point here, my spiel, is threat modeling every story. I want that developer threat modeling at the scope that they work, exactly as you said, right? So you have a baseline that covers the whole architecture. I want the mindset of the developer, when
(15:46):
they start doing their story, to be at a threat modeling point, asking: what choices am I making here that influence the security of the architecture? And is there something I have to add to the overall threat model?
Jim Manico (15:58):
I agree.
Chris Romeo (15:58):
I want to take a question back. I want to ask you this question a little bit further back, Jim, 'cause I think this might inform Izar's threat-model-every-user-story idea, and also your point about limiting the waste that's happening here. What do you recommend as far as design
Izar Tarandach (16:14):
Hmm Hmm.
Chris Romeo (16:15):
for developers? What should they be doing, in your mind? And I'm not talking about just the architects that are putting together the big view of the whole thing. Let's just say a senior software engineer whose normal job is to write feature code: what is the role of design in their world, and how should they do design?
Jim Manico (16:32):
If I'm doing something that I've done many times before, like building a UI with database interaction, or I'm doing more of the common features of the application, I don't need to threat model, because it's something I've done before, over and over and over again. But if I'm about to do something that I don't have guidance on, or something a lot more complicated, like suppose I want
(16:53):
to do file upload in a queue with really dangerous file types, and I haven't built that out in my system before. File upload is a majorly difficult thing to write securely. It is extremely complicated, with pieces like file name validation, magic byte validation, content introspection, and the persistence strategy. That developer should
(17:15):
stop and do threat modeling before they build out that feature, because it's new. We don't have a reference architecture for it. It's mammothly complicated, interacting with the database, the file system, and more. We need to stop and threat model before we build that out.
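As a rough illustration of the magic-byte validation piece in that list, here's a minimal sketch; the signature table, alias map, and function names are assumptions for illustration, not from the episode:

```typescript
// Sketch of magic-byte validation for uploads: trust the file's leading
// bytes, not its extension. Signature table and names are illustrative.
const SIGNATURES: Record<string, number[]> = {
  png: [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a],
  jpeg: [0xff, 0xd8, 0xff],
  pdf: [0x25, 0x50, 0x44, 0x46], // "%PDF"
};

function sniffType(bytes: Uint8Array): string | null {
  for (const [type, sig] of Object.entries(SIGNATURES)) {
    if (sig.length <= bytes.length && sig.every((b, i) => bytes[i] === b)) {
      return type;
    }
  }
  return null; // unknown signature: reject rather than guess
}

// An upload passes only when the declared extension matches the sniffed bytes.
function uploadAllowed(filename: string, bytes: Uint8Array): boolean {
  const ext = filename.toLowerCase().split(".").pop() ?? "";
  const aliases: Record<string, string> = { jpg: "jpeg" };
  const sniffed = sniffType(bytes);
  return sniffed !== null && (aliases[ext] ?? ext) === sniffed;
}
```

A real upload service layers much more on top (filename sanitization, size limits, content introspection, the persistence strategy), which is exactly why a first-time build of this feature calls for a threat model.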
And I usually see this, by the way, just in normal software development: when a developer is faced with a really complicated
(17:38):
feature, any good developer is going to talk it over with their peers or technical boss and the customer to get more insight into what it is. Anytime we have a complicated feature that we haven't built out before, I'm often given requirements as a developer that I have to go back and ask a dozen questions about. And even though this might not be threat modeling, now that we're adding security into our
(18:00):
process, those complicated questions absolutely should involve an architect and a security person if it's new, if I don't have a reference architecture or an example of that done securely already.
Matt Coles (18:12):
So what you're saying is, for exceptional conditions...
Jim Manico (18:18):
I'm with you. If you're doing something new and complicated and risky, stop and design it out with your team before you start writing code. Absolutely.
Chris Romeo (18:30):
Yeah. It's a world where you've got a library of patterns and you've got threat modeling. And when you work on a threat model to the point where it can grow into a pattern, basically, in what you're describing here, you could grow a file upload pattern, where you're like, okay, here's all the things that we consider
(18:51):
in the threat model, which translated into requirements. So if you're going to do another file upload with dangerous files, here are the things you have to do. Now, you'd probably want people in that scenario to refresh that, especially if it's six or 12 months later when they're using it, because things change, right? A pattern is not good forever. Some patterns, like input validation,
(19:13):
I'm never going to believe that's not good. But for some patterns, like a file upload, things are going to change. New file types are going to come out. Someone's going to, I don't know, attach Bitcoin to it somehow. And, you know, there are just going to be new derivatives that are going to...
Jim Manico (19:26):
We're singing the same song. If I'm doing file upload development for my team for the first time: reference architecture, threat model that out. But if I'm on the other end of that maturity scale, where I have a well-configurable, reusable service for file upload that all my developers can leverage, and it's gone through massive assessment and multiple threat modeling rounds, I'm just going to whip it out and use it.
(19:47):
I'm not going to go back and threat model. So, Izar, it all depends on the context of what's available and what's been done in the past on that team.
Izar Tarandach (19:56):
And it's interesting that that meets the bit where you operate on the secure coding part. Once you build that library of reference architectures, then you already have reference implementations, then you already have secure libraries, things you already know have been vetted and tested and whatnot. And then you get your developers building with the right Lego
(20:16):
pieces.
Jim Manico (20:18):
Exactly. And then we've already done our work in threat modeling. It's the change, the big changes, or the newness, or the "I'm going to mess with the architecture," something we haven't done before, that cries out for threat modeling. Or even better, at the beginning of a project, you know, the earlier
(20:38):
the better. Before we're writing code, when we have an idea and we're budgeting, I am not opposed to having a large number of developers in those initial pre-product meetings, and then they can go off and do their work and have smaller threat modeling sessions as new things come up. But to your point, Izar, especially at product conception, having one day of talking about security risks
(21:02):
and security duties and what we're about to build, that's a really good idea. Doing it every week on a cycle, every Friday is threat modeling day? Big no. Big no. But at the beginning of a project, I'm more inclined to have more developers participate. To your point, Izar, do you feel the love in the room right now? Do you feel it?
(21:22):
I feel it.
Izar Tarandach (21:30):
I'm still hearing some vibes of, okay, we've got these solutions, and we are very logical and experienced people, and if somebody were to turn off their podcast listening device right now, they'd think, oh my God, this guy's just solved the whole problem, right? So Jim, why is it that we keep having CVEs? Why is it that we keep having problems? Where are we going wrong?
Jim Manico (21:51):
Because security's hard. Sometimes it's not threat modeling.
Izar Tarandach (21:55):
Nothing.
Jim Manico (21:56):
A SQL injection, and you didn't scan your code. Oh, believe me, there are a lot of teams that are still asking, what's SAST? I said this in a manager training once. I went to a bunch of senior managers of a real big company and said, if you're managing software projects and you're not using a static analysis tool, that's negligence. And next thing I know I'm in front of HR, explaining
(22:18):
why I'm condemning all their managers. But I stand by that statement in 2023. If you're doing a software project, you're writing code, and you're not doing static analysis, that is bleeping negligence at this point. Like, what are you doing? You're not doing security. And I'm not trying to sell a certain tool. I'm just saying something like keeping your third-party libraries up to date
(22:38):
and scanning your code for security, this is the cost of doing business. And if you're not doing it, you're way behind the eight ball. And that's why we still find vulnerabilities today. That's why we still have CVEs: because a lot of software teams are still not doing the basics of security analysis. The basics. That's still...
Izar Tarandach (22:58):
Yeah, it's basically...
Matt Coles (22:59):
I've got to ask. I have to ask, because this came up in a previous episode of ours: how do you feel about DAST tools?
Chris Romeo (23:06):
Yes, yes, yes.
Jim Manico (23:13):
In the age of microservices, DAST is a dead technology. At OWASP, we just lost ZAP; ZAP left the foundation. I love Simon Bennetts, he's a great volunteer. But I think watching DAST walk away from OWASP is a sign of the times, because in a DevOps lifecycle, I like IAST, I like SAST. Hey, Jeff Williams,
(23:33):
I like IAST. I said it, Jeff, you can quote me on that. I like SAST. I like software composition analysis. But DAST does not work well for a DevOps cycle, and it especially doesn't work well for APIs, and a lot of vendors are going to beat me up for this. But yeah, I've given up on DAST. I don't use DAST.
Matt Coles (23:53):
Well, just to level set: ZAP went over to a new initiative called the Software Security Project in the Linux Foundation. Yep.
Jim Manico (24:01):
It's a great tool, and for an old-school web application, absolutely, I would use it. I just don't see a lot of old-school web apps anymore. I see microservice meshes and React and all kinds of different things where DAST is just not nearly as effective as other tools. And it's slow. It's super slow. I want DevOps lifecycles.
(24:22):
I want to be able to have a developer issue a PR and run a whole bunch of security tooling in like three minutes so they can merge. I don't like DAST running for hours, and none of that incremental scanning. But just in general, and I'm not talking about me, a large number of my customers who've made their own decisions about tooling have given up on DAST for a lot of reasons. There you go, Chris. I like it when we agree.
Chris Romeo (24:45):
Well, I said it. I mean, I said it a couple of months ago: DAST
Jim Manico (24:49):
is dead.
Chris Romeo (24:50):
And I stepped into it, because some of the DAST vendors came after me and wanted to argue with me
Jim Manico (24:56):
Good.
Chris Romeo (24:56):
about the viability of the technology. And I'm like, I don't even have to argue with you. I have evidence in my own career of trying to use these tools, like you said, against modern applications. They just don't provide any value. Like, I don't need to know what the DNS records were for the thing you scanned; that's the top finding coming out of these things. So yeah, it's definitely a dead technology,
(25:18):
but I'm with you. Jeff has brought me over to the IAST thinking as well. I've kind of come on to that as a thing that I think adds a lot of value to the world. And so, yeah, it's good.
Jim Manico (25:31):
It's last on my list, though. I'm going to ramp up SAST first. I'm going to ramp up SCA second. Maybe I'll do IAST third, if there's a good use case for it. Hey Chris, I'll go one further. Can I talk some more smack? You ready?
Chris Romeo (25:44):
Let's
Jim Manico (25:45):
Software composition analysis tools. You ready? They're all going away. Snyk is dead as a company, and all that software composition analysis is, is just a feature of SAST. That's where it's going to go. And the whole software composition analysis industry, the whole SBOM industry, the whole tooling industry in that
(26:05):
world is all going away, and it's just going to be a feature of static analysis. So we're going to end up with no SCA, it's just a SAST feature. No DAST, it's too slow. SAST is going to rule security assessment, and it's going to get a lot better. We see vendors like Semgrep, ShiftLeft, and others making a lot of
(26:26):
innovation there, and we're going to be scanning code to do the majority of our security assessment, then doing some container scanning and other secondary things. But SAST is going to rule the world when it comes to application security.
Izar Tarandach (26:38):
So, Jim,
Matt Coles (26:38):
Static code analysis is a slow operation today.
Jim Manico (26:42):
No
Izar Tarandach (26:42):
No, no, no, no, no.
Jim Manico (26:43):
Wrong, Matthew. Look at next-generation tools. With the old tools, like Checkmarx, and I'm a big fan of those folks, you do the initial scan and it's extremely slow, but then you do the incremental scan and it's lightning fast. You take a tool like Semgrep from r2c: it's a semantic grep engine, and I can scan millions of lines of code in under two minutes.
(27:06):
So you've got to look at the modern tooling in the SAST world and pick the right vendor. You do have DevOps speed: CodeQL, built into GitHub, is lightning fast. The old tools in their initial scan mode are slow, but new tools, or incremental mode, do work at DevOps speed. And that's, Matthew, that's why I love SAST so much, 'cause of that
(27:29):
speed and the increased fidelity that I've seen over the years.
Izar Tarandach (27:33):
So Jim, two things. First of all, big parenthesis here: I will not say the name of the company, because I work for it. But the way you're talking, I have some tools to show you, so catch me outside. But no, it's not...
Jim Manico (27:48):
No, they're bleeding cash.
Izar Tarandach (27:51):
I'm
Jim Manico (27:52):
In their practices for doing sales. They're building a tool that I can rewrite in two days, and they're an $8 billion company hanging off an easy-to-build tool. SAST is hard. SCA is not. They're all going to be features of SAST, I predict it. Watch.
Izar Tarandach (28:10):
I'm just going to tell you that I have some integrations that you're going to love. When it comes to data, I am like a dog. Okay.
Jim Manico (28:17):
Convince me.
Izar Tarandach (28:19):
The next thing. So, you've been talking about tools, you've been talking about SAST, how important it is, and all that stuff. And of course, nowadays one cannot go for too long without saying something about AI and all that good stuff. But now developers are working with all kinds of over-the-shoulder coding assistants and whatnot,
(28:43):
which are fed as well by AI, trained and whatnot. And I'm starting to see a cycle here where AI writes code and AI checks code, and there is a developer somewhere running around asking: what did I write? What did the AI write? Who's checking this? And how much can I trust this? How much do you trust code that comes from an AI system these
(29:06):
days?
Jim Manico (29:08):
I need to go back. I'm not done with my Snyk rant. I want to say one more thing about Snyk. If I had to buy a tool today, that's where I would go. Their enterprise glue is the best of any tool out there. The people that work for the company are exceptional, great, brilliant professionals, and I would buy them today. I'm just talking about the future, and that's
(29:29):
why Snyk put out a SAST engine that's really innovative. So I have a lot of respect for the company, a great deal. I would buy them in a heartbeat. I recommend 'em all the time. I'm just predicting that out in the future it's going to be a hard industry to stay in. It's going to merge into SAST. That's all I'm trying to say. Alright, I'm done with that. I'm done.
Izar Tarandach (29:47):
Now go to the AI
part.
Jim Manico (29:49):
AI? All of a sudden, tools like Black Duck are more important, right?
Black Duck is an old-school licensing engine, an old-school software composition analysis tool back in the day.
So the tools that have the ability to look at segments of code and check for licensing are suddenly really important in the
(30:11):
advent of AI, because as a developer, I will use AI all day long for a million purposes.
I have a paid ChatGPT with multiple unique plugins, some that I built myself, that I use for my work to do research and similar.
And to be a developer and not use AI... excuse me, I got the corona from the conference.
(30:33):
I'm sorry, got that corona.
But, um, to be a developer and not leverage AI, you're gonna be way behind the eight ball real fast.
It's tripled to quadrupled my productivity.
And I use it for a lot of reasons.
Hey, gimme the initial stubbed-out code for this need. Done.
Um, hey, here's some code I'm working on.
(30:54):
I hope my boss doesn't mind this.
Please analyze this.
What can I do better?
And the answers I get... and I'm not agreeing with everything AI says.
I'm not just blindly using it.
But it's like a copilot.
And I use my own discernment and review before I push this stuff live.
To use AI as a developer is necessary.
And if you don't, your productivity is gonna be one
(31:15):
third to a fourth of your peers, and you're out.
So not only is AI important, it's now mandatory for development, if you want to be efficient.
We have to be discerning, and you need to have tools in place to make sure that you're not busting out licensing issues.
You're not breaking the licensing by stealing code from other teams.
And there are tools to assist with that, that should be in
(31:37):
place.
How's that, Izar?
Izar Tarandach (31:39):
I'm coming here from a point where, like, the paranoid in the room, I'm thinking that something here sounds fishy.
And you, as THE code security expert, explain to me how this loop of AI talking to AI, AI checking AI, works, and the fact that those AIs are trained on code
(32:00):
that is not necessarily known to be safe and secure.
So where do I get some assurance in here that what's coming out is good enough?
Especially if developers start leaning so much on the AI that, as with anything else in life that gets automated and assisted, their own tolerance goes down, because they start trusting the
(32:21):
thing more.
Jim Manico (32:23):
First and foremost, I take a big chunk of code, throw it into AI and say, do a static security check of this code, and I get results comparable to professional tools.
So AI has security awareness, if you ask the right questions, right?
If you say, stub me out some code, it'll stub you out some code.
But if you say, stub me out some code securely, it will do that,
(32:44):
and you need to use discernment.
First of all, I'm reviewing any code that I generate with AI, and second, before I push it live, Izar, I'm doing static analysis with multiple tools.
By the way, I'm doing software composition with great tools like Snyk. This message is brought to you by Snyk marketing.
You know, I also have some custom
(33:04):
rules that I'm doing.
I'm doing container scanning.
I'm using other cloud security services, so I'm not just generating code in AI and pushing it live.
That's nonsense.
I have multiple layers of licensing and security checking along the way.
But if I have that in place, the whole DevOps security lifecycle, Izar, I can whip out some code fast, like Greased Lightning,
(33:24):
right?
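The layered pipeline Jim describes — AI-generated code passing human review, static analysis, and license checks before anything ships — can be sketched as a simple release gate. Everything below is illustrative: the check names and placeholder predicates are hypothetical stand-ins, not any vendor's API.

```python
# A minimal sketch of the layered gating Jim describes: AI-generated code
# only ships after every configured security check passes.

from typing import Callable, List, Tuple

Check = Tuple[str, Callable[[str], bool]]

def release_gate(code: str, checks: List[Check]) -> Tuple[bool, List[str]]:
    """Run each security check; collect failures; ship only if all pass."""
    failures = [name for name, check in checks if not check(code)]
    return (not failures, failures)

# Stand-in checks -- a real pipeline would call SAST, SCA, and container
# scanners here; these placeholders just illustrate the gating logic.
checks: List[Check] = [
    ("human review",    lambda code: "# reviewed" in code),
    ("static analysis", lambda code: "eval(" not in code),
    ("license scan",    lambda code: "GPL-only-snippet" not in code),
]

ok, failed = release_gate("print('hi')  # reviewed", checks)
```

The point is the shape, not the checks: each layer is independent, and a failure in any one blocks the release.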
Chris Romeo (33:27):
Here's the other side of the coin.
I guess when I think about AI, I'm in the same boat as you.
I think about it as enhancing.
It's not replacing, I don't care what anybody says.
Like, maybe 10 or 20 years from now, I'll just be able to tell it, write me a full web application that does this, this, and that, and I can just deploy it in five seconds or
(33:48):
whatever.
But right now it's an enhancement.
It makes you a 10% to 50% better developer.
But then I think about training data.
You know, Jim, you've taught secure coding for a long time.
The Stack Overflow problem has always been fun, because people would go to Stack Overflow, they would copy a
(34:08):
snippet of code, they would paste it in.
And scientists went and studied Stack Overflow code, figured out that there was a certain number of vulnerabilities per example or whatever.
Like, Stack Overflow code was found to be not very secure.
Now, if somebody's training an AI based on Stack Overflow... which, you could argue, there is probably not a better source of total number of lines
(34:29):
of code on Earth than what exists in Stack Overflow's database.
But if you're training the AI to code insecurely with Stack Overflow code, how does that not get reflected right back into... and I get your discernment, but, you know, not everybody's got your discernment, though.
Not everybody's got your knowledge and experience to be able to wrestle and understand that something bad's coming outta the AI.
It's
Jim Manico (34:48):
not that big of a deal.
I get it outta the AI, I get it working, and then I do security scanning and fix my vulnerabilities if they're still there.
And if you're using AI without security review, you're screwed.
In a bad way.
So the answer is, do proper DevOps-style security review with proper tooling.
(35:09):
Review the code you get out of AI before you push it live, and make sure when you're asking AI questions, you ask for security.
Stub me out this and that with really good security in mind, and it will use that part of the model, if it's even in the model, to answer the question.
Seriously, just try it. Say, give me a basic Ruby on Rails web app that does chat, and then
(35:31):
ask the same question and say, do it with extremely rigorous security, and you'll get different answers.
So it's about asking your AI engine the right questions.
The marketplace, Chris, will eventually become the different models.
So my prediction is, I'll be able to buy a model of really good application security fixes across all different languages
(35:51):
and ask that engine a lot more accurate questions eventually.
Okay.
Here's an even bigger prediction.
Static analysis is going away.
I can just use AI to do it.
So static analysis as we've known it the past 10 years will be gone.
We'll just ask AI. AI will be looking over my shoulder saying, Jim, you did that wrong.
You know, or fix it for me, or whatnot.
(36:13):
So, I mean, AI already is really good static analysis as is, if you ask the right questions.
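The prompting pattern Jim describes — the same request, with and without an explicit security clause appended — can be sketched as a tiny helper. The wording of the security clause here is our own illustration, not from any model vendor's guidance.

```python
# A sketch of "ask for security" prompting: identical task, optionally
# hardened with an explicit security requirement appended to the prompt.

SECURITY_CLAUSE = (
    "Apply rigorous security: parameterized queries, output encoding, "
    "strict input validation, and least-privilege access control."
)

def build_prompt(task: str, secure: bool = False) -> str:
    """Return a code-generation prompt, optionally with the security clause."""
    return f"{task} {SECURITY_CLAUSE}" if secure else task

plain = build_prompt("Stub me out a Ruby on Rails chat app.")
hard  = build_prompt("Stub me out a Ruby on Rails chat app.", secure=True)
```

Same task, different prompt, and — as Jim notes — materially different answers from the model.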
Izar Tarandach (36:20):
So, unbelievably, we do have people watching us on LinkedIn, and that's only because of Jim.
And if any of you guys watching would like to ask any questions, please feel free.
Just put it in there and we'll try and sneak that in somehow.
Matt Coles (36:35):
While we're waiting, uh, for that, I do have a question for you.
I'm gonna switch gears a little bit here if I can.
Uh, as you know, there's been a flood of activity coming out of, uh, the US White House and CISA and NIST and others.
And there's a big push recently around memory safe languages, switching to memory safety in languages, uh, as part of secure
(36:57):
by design, secure by default.
What are your thoughts on that?
Is it worth it?
Should we move there?
Is that, is that the place togo?
Jim Manico (37:05):
I've been a Java programmer since the nineties, so, for the most part, I only use memory safe languages.
Right?
They prevent common vulnerabilities.
They reduce exploitation.
It's actually a simpler development process.
I'm not doing memory management.
It's way more cost efficient.
I don't have to do memory fixes.
Um, you know, they're usually part of modern, um,
(37:29):
software ecosystems.
They're more reliable overall.
So this is a great thing.
But when I hear this being said, I'm like, yeah, I made that call about 25 years ago, so I'm not sure why they're talking about it now.
They're probably thinking of the world of more like thick client development and, like, C/C++-type development.
Matt Coles (37:48):
IoT.
IoT device development,
Jim Manico (37:51):
I'm sorry.
Matt Coles (37:52):
IoT device development, embedded development,
Jim Manico (37:54):
Embedded. But this is why, although it's not my world, the people that teach for me, that it is their world, they tell me to push Rust.
They say, get away from C/C++.
Move that kind of development into the Rust world, and a lot of the memory problems you see in C and C++ largely go away.
I don't know if that's true, but I think it's extremely
(38:16):
reasonable to push towards Rust, to use memory safe languages and stop doing manual memory management.
And I made that call about 25 years ago.
One more thing,
One more thing,
Chris Romeo (38:28):
All right, we got
somebody.
Jim Manico (38:29):
back to
Chris Romeo (38:30):
So
Jim Manico (38:30):
AI,
Chris Romeo (38:30):
go
Jim Manico (38:30):
Real quick.
You know why I like AI, Chris? It's because it lets me build, um, gangster rap about my friends.
That's what I use AI for mostly.
Verse one.
You ready, Chris?
Real quick.
Chris Romeo on the mic, security pro, dropping knowledge everywhere that he go.
From the boardroom to the streets, he's the one to beat.
When it comes to AppSec,
(38:51):
he brings the heat.
Now, Chris Romeo, Romeo, security's Romeo.
He's guarding the gates, never moving slow.
From the east to the west, he's the best, no contest.
Chris Romeo, Romeo, he's Romeo.
One more.
With the hacker's mind and the teacher's soul, he's patching up systems and making them whole.
From SQL injection to buffer overflow, Chris is the name that the hackers know.
So that's my favorite use of
(39:12):
AI.
Chris Romeo (39:16):
there we go.
I'm honored.
I'm honored.
So, Tony Quadros.
Tony Quadros had a good questionhere that, uh, I wanna get your
take on.
'cause we talked a lot aboutSAST and AI and how these things
are gonna come together.
So Tony says, uh, what aboutauto remediation for SAST
findings?
Is that legit?
Like, what are your thoughts onthat?
Jim Manico (39:33):
Yeah, but you need the right model.
So here... I had this conversation at DEF CON with a few vendors.
There are a couple vendors out there who've been doing scanning and security testing at giant scale.
Think, like, Edgescan, think like the WhiteHats of the world, those who do security as a service, and they have a decade of this.
Edgescan's my favorite right now.
(39:53):
But basically, think about the millions of vulnerabilities they discovered.
Developers tried to fix them, and then they went and reassessed that the fix was proper.
So if I can pull, like, a 10-year model of all the software security fixes that worked, I'm gonna be able to have a really good remediation engine,
(40:15):
but I have to be very careful how I train that engine.
I have to look at each vulnerability and each fix and be really clear that that's a proper fix to add to the model.
The data is out there.
Human beings, I think, will need to sort through the data to fill a model.
But once that's done, I'm predicting a year or two out, remediation engines are gonna light up the industry.
(40:38):
And the things we're gonna be doing: it'll be SAST and SCA, which are gonna merge, I believe, and remediation engine suggestions are gonna be a really big thing in, like, about a year or two.
So I'm with you, Tony.
I think remediation and AI, and proper data sets and manual curation of those data sets... remediation will change the
(41:00):
industry when it comes to auto remediation.
Go build it.
Someone's gotta build it.
Chris Romeo (41:06):
I just saw a good question pop in about... uh, let me read it.
Um, this might be... I'll let Jim, we'll let you take it first, but this might be a Matt and Izar question.
What considerations should we have when performing threat modeling for applications that use artificial intelligence?
Jim Manico (41:22):
That's a really good question.
Um, I'm actually just building my AI security class now, so... gimme that question one more time.
I'm gonna light this up.
Chris Romeo (41:35):
So what
considerations should we have
when performing threat modelingfor applications that use
artificial intelligence?
Jim Manico (41:46):
That's a really good question.
I'm gonna get help on this question, because I want to be really detailed in my answer, and so...
Chris Romeo (41:52):
So wait, so hold
on.
So you're, so Skynet isbasically providing you the
answer to this
Jim Manico (41:58):
no,
Chris Romeo (41:58):
about
Jim Manico (41:59):
No, it's just ChatGPT-4's data model with the web crawling plugin and a few other technical plugins that I wrote, to gimme good answers.
Data poisoning, model inversion, adversarial attacks, model stealing, model explainability, data privacy, infrastructure security.
Um, supply chain threats to all the different tools are relatively
(42:20):
new.
Bias and fairness in how you're training your model.
Robustness and generalization.
Feedback loops, where AI models influence the data they later consume; consider potentially dangerous feedback loops.
Resource exhaustion: AI takes a lot of horsepower.
Reproducibility.
Model drift, as it learns over time in the wrong direction.
(42:41):
Simple access control.
Regulatory and compliance, that's coming up.
So, there you go.
So when it comes to, like, threat modeling AI, there's a lot of good information already out on this topic, specific to the different AI engines.
I'll pick one.
Like, model stealing, right?
A threat actor might be able to replicate a proprietary model by
(43:03):
using the public API, asking a lot of questions and extracting parts of the model out for their own use.
Gotta be careful about that.
Or model inversion: attackers might attempt to reconstruct the training data by querying the AI model, revealing sensitive data they shouldn't be revealing.
Data poisoning: we saw this in some of Microsoft's early AI.
A bunch of racists began to, like, train the AI with a lot of
(43:27):
really horrible ideas, and the AI engine itself became extraordinarily racist, and they shut that thing down.
ChatGPT is dealing with that.
And I've seen other AI engines where someone plugged in, my life is difficult, here are the challenges I'm facing, and the AI engine said, you should kill yourself.
That's dangerous to human health, and there's
(43:47):
liability for that company.
So there's a lot of really rich information out there already on how to threat model AI-based software.
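The model-stealing attack Jim picks out can be shown with a toy example: an attacker who can only query a "proprietary" model through its public API fits a surrogate from the query/answer pairs. The victim here is deliberately a trivial linear scorer, purely to demonstrate the mechanics; real extraction attacks work against far more complex models, just less exactly.

```python
# Toy model-stealing sketch: the attacker never sees the victim's weights,
# only its answers to chosen queries, yet recovers an equivalent model.

import numpy as np

def proprietary_model(x: np.ndarray) -> np.ndarray:
    """The victim's model, visible to the attacker only via queries."""
    return 3.0 * x[:, 0] - 2.0 * x[:, 1] + 0.5

# Attacker step 1: query the public API on chosen inputs.
rng = np.random.default_rng(0)
queries = rng.uniform(-1, 1, size=(200, 2))
answers = proprietary_model(queries)

# Attacker step 2: fit a surrogate by least squares on (query, answer) pairs.
design = np.hstack([queries, np.ones((len(queries), 1))])  # add bias column
coef, *_ = np.linalg.lstsq(design, answers, rcond=None)

# The stolen surrogate now reproduces the proprietary model's behavior.
test_points = rng.uniform(-1, 1, size=(50, 2))
stolen = np.hstack([test_points, np.ones((50, 1))]) @ coef
```

Because the victim is exactly linear, the recovered coefficients match it to floating-point precision; the defensive takeaway is rate limits, query monitoring, and treating the API itself as an attack surface in the threat model.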
Chris Romeo (44:02):
Yeah.
And before Izar and Matt give us their thoughts on this, I just want to put a plug out there for the OWASP Top 10 for LLM project.
Uh, yeah, Steve Wilson and, uh, the rest of the team.
That is a case study in how to build an OWASP project at top speed with 500 volunteers.
Like, there's a conference talk to be found in that.
(44:24):
But that's something I look to as a source of threats as well when I'm understanding what the issues are in dealing with LLMs.
There's a good example there.
Matt and Izar, you got anything you want to add on the threat modeling AI side?
Izar Tarandach (44:37):
So first of all,
yeah, thanks.
Matt Coles (44:40):
I was just gonna say briefly, uh, actually you've covered everything I was gonna cover.
Uh oh.
The OWASP project is definitely a go-to reference.
I put them in chat, folks; if they're not familiar, they can go ahead and take a look at those links.
Uh, you know, this is, um, it's the same activity, right?
Threat modeling is threat modeling.
You're just looking at a different set of threats.
(45:02):
Uh, and so, uh, continue as you were.
Sorry, Izar.
Go ahead.
Izar Tarandach (45:07):
So I just wanna second everything that Chris said about the, uh, the OWASP Top 10 for LLMs.
But the one thing where I think that I step away from what a lot of people are talking about, in terms of threat modeling of, uh, LLMs and AI, is I'm not focusing so much on the system that's doing the LLM and the AI.
I'm more worried about how that thing is interfacing with a lot
(45:28):
of other things.
So how those are becoming interfaces, and in some cases they are getting command authority over a number of other systems, and we are getting those things that are not completely well understood, that are not completely predictable, and we are giving them privileges and powers beyond what might be
(45:50):
advisable to have right now.
So I'm putting my focus more on how those things connect with the real world, and with whatever comes down the pipeline from the results that they generate.
Well, actually, in this specific case, prompt injection would be a very, uh, important thing to look at.
But I'm not as worried about all the poisoning of
(46:15):
models or stealing models; like, I'm not focusing on the model itself.
Basically because I can't understand the math.
It's way over my head.
The only math that I try to understand is calc, and even that's hard sometimes.
So, uh, I'm just trying to say these are, again, parts of a big continuum, and I'm putting my
(46:35):
eyes on what comes down on that pipeline, not on the other.
Chris Romeo (46:40):
So, almost the trust boundary.
You're thinking about the trust boundary around the AI, the artifacts
Jim Manico (46:45):
The LLM.
Chris Romeo (46:47):
that are coming into it.
So yeah, that's a good thing to consider.
So we're almost out of time here.
We've cleared any questions, I...
Matt Coles (46:54):
there was
Jim Manico (46:54):
Quick note,
Matt Coles (46:55):
uh, around
Jim Manico (46:56):
Just a really quick note.
The OWASP Top 10 for LLM, I think it's good, but it's really basic.
There's not a lot of details there.
I wanna recommend a resource, and this is something that Gary McGraw has put out.
He got a team of PhDs together.
It's called the Berryville Institute of Machine Learning.
They have their own research, and this is, like, PhD-backed research, PhD-level articles.
(47:17):
Some of 'em are hard to read, so I think the OWASP Top 10 for LLM is a good place to start.
But like any top 10, you read it once and let it go.
For deeper research, I think you'll find the Berryville Institute of Machine Learning and a couple other think tanks that are diving deep.
The OWASP Top 10 for LLM is surface.
It's a good way to start.
And this is, like, berryvilleiml.com; it's run by
(47:39):
Gary McGraw, who is a PhD, and a team of PhDs.
I like that.
And I needed the OWASP Top 10 to get me started, and now I'm reading all of his articles and getting a way, way deeper perspective to help me be a better professor.
You know?
So if I'm gonna be in front of students, I need more than the top 10 to be legit.
Izar Tarandach (47:57):
Right.
But, Jim, one call-out: the amazing work that Gary and
Jim Manico (48:02):
mind boggling.
Izar Tarandach (48:02):
team are doing.
It goes deeper into the model side: building models and protecting models.
The OWASP one, I think, focuses a bit more on how you get to use this thing securely.
So while I agree with you that the difference in depth is amazing, I think that they work side by side and not...
Jim Manico (48:22):
No, I don't think so.
Side by side... I think OWASP Top Ten first, 'cause it's very little detail, and then put it aside, and then focus on the more PhD-level papers.
'Cause the OWASP Top 10 for LLM, it's really basic, and I need details to be a good professor.
So not side by side; one, then the other, is my take.
Izar Tarandach (48:38):
Oh, sorry.
As a professor.
Okay.
I missed that qualifier.
Okay, cool.
Matt Coles (48:45):
So there were... actually, there are now two questions, uh, on the list for us.
I think the first one, uh, is probably a Jim-targeted question.
Uh, it's a two-parter.
Uh, this is from Max.
He was asking about, part A, showing the ROI of threat modeling, and part B, correlating results of threat modeling with, say, SAST
(49:10):
and other activities throughout the lifecycle.
So maybe the first part: could you tackle the ROI of threat modeling?
Like, how do you demonstrate that?
And I know we got just a few minutes
Jim Manico (49:22):
left here... I mean, the ROI of threat modeling... I don't do threat modeling consulting, and that's not even an interesting question to me, but I know that for those of you who do threat modeling as part of your job, it's a lot more important.
So again, I'm getting assistance here: define the costs, ongoing costs.
Quantify the benefits.
Does it actually
(49:44):
prevent security incidents, reduce remediation cost, improve security posture?
And how do you even study that?
I bet the cost to study if your threat modeling was useful is, like, the cost of threat modeling itself.
I don't have a good answer to that, but Matthew, Izar, and Chris, I bet you do have a better answer than I'd have
(50:04):
about proving the ROI of threat modeling.
Any thoughts from you three?
Matt Coles (50:10):
Well in the
Izar Tarandach (50:10):
should have an
episode on that.
Matt Coles (50:13):
What was that, sir?
Izar Tarandach (50:14):
We should have
an episode just
Matt Coles (50:15):
We should.
Uh, I'll just call out that in the manifesto we do call out, of course, the ROI.
The fundamental ROI of threat modeling is that you get meaningful and valuable results out of the activity, right?
So we wanna focus on the results that we get for the level of effort put in.
And as we talked at the very beginning of this episode, uh, you know, you highlighted, rightly so, that you don't want your entire
(50:35):
development staff, uh, part of that activity; that doesn't maximize value.
What maximizes value is focusing on the hard things, the things that are unique to the system or things you're innovating on.
(50:57):
And, or, getting everyone together at the beginning of a project, so everyone's on the same page as to what they're gonna do moving forward, and then focusing on the differences beyond that.
And that's where you maximize that shared knowledge.
You maximize your value outcome, uh, from doing threat modeling, without necessarily directly measuring that value.
Jim Manico (51:16):
Matthew, really well said.
I'm gonna steal that.
I'm really impressed.
ROI, though: it's the total benefits minus the total costs, divided by the total costs.
So the problem is, to measure the ROI of a threat modeling session is itself mammothly expensive.
So, other than what Matthew said, I don't have
(51:37):
a good answer.
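The formula Jim states can be written down directly; the dollar figures below are made up purely to illustrate the arithmetic.

```python
# Jim's ROI formula: (total benefits - total costs) / total costs.

def roi(total_benefits: float, total_costs: float) -> float:
    """Return on investment as a fraction (1.5 == a 150% return)."""
    return (total_benefits - total_costs) / total_costs

# e.g. a threat modeling program costing $20k that prevents $50k of rework:
example = roi(50_000, 20_000)  # 1.5, i.e. a 150% return
```

The hard part, as Jim notes, isn't the division — it's measuring `total_benefits` (incidents that didn't happen, rework that was avoided) without spending as much on the measurement as on the threat modeling itself.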
Chris Romeo (51:37):
Yeah, I mean, I think you gotta look at the number of issues that come out of it.
That's something that you can measure, and I've seen that be successful before.
Let's measure the number of things that were detected from threat modeling sessions, because then it gives us some type of... we can all agree, it doesn't matter what formula you use,
(52:00):
and I don't care whether IBM came up with it or not.
It's gonna cost more to fix something in production
Jim Manico (52:05):
I agree.
Chris Romeo (52:06):
Than it does early in the process, right?
And so there is a return on investment in finding issues before they get to production.
And so I can use that as a soft ROI to say, okay, we detected, we found, uh, five issues during this threat model that would've cost us five X to fix as rework six weeks or two months or two years down the
(52:30):
road.
And so that's kind of my general approach, and I'd love to provide more data to it, but you kinda have to get into the individual company to...
Jim Manico (52:37):
Let me add one more thing to that thought, Chris.
If I use threat modeling and I've discovered SQL injection in a pattern, that's not a good use of threat modeling, 'cause I can just catch SQL injection in static analysis with really good accuracy right now.
So the question was relating to static analysis.
Threat modeling should be identifying things that static
(52:57):
analysis can't find, and can't find well, to really measure ROI.
And if you do that...
Chris Romeo (53:04):
Yeah, I mean, you're talking business logic, right?
Like, there's not a SAST alive that can find business logic flaws now.
Jim Manico (53:09):
Static analysis is useless at access control.
That's just a business domain.
Static analysis is useless at access control, which is the number one thing on the OWASP Top 10: broken access control.
'Cause it's a business domain.
Everyone has their own policy.
So threat modeling complicated access control systems, I think, is a really good use of time.
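The broken access control point can be made concrete. Both handlers below look equally "clean" to a pattern-matching scanner — there's no tainted input, no dangerous sink — yet one has the number-one OWASP Top 10 flaw, because the bug is a missing business rule. All the names and data here are hypothetical, for illustration only.

```python
# Why SAST struggles with access control: the flaw is a missing ownership
# check (a business rule), not a recognizable insecure code pattern.

INVOICES = {
    101: {"owner": "alice", "total": 40},
    102: {"owner": "bob", "total": 99},
}

def get_invoice_vulnerable(user: str, invoice_id: int) -> dict:
    """IDOR: any authenticated user can read any invoice."""
    return INVOICES[invoice_id]

def get_invoice_checked(user: str, invoice_id: int) -> dict:
    """Enforces the business rule: users may only read their own invoices."""
    invoice = INVOICES[invoice_id]
    if invoice["owner"] != user:
        raise PermissionError("not your invoice")
    return invoice
```

Only knowledge of the policy ("users read their own invoices") distinguishes the two — which is exactly the knowledge a threat modeling session surfaces and a generic scanner lacks.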
Izar Tarandach (53:31):
Yeah.
On the subject of ROI, I'm just going to add one personal thing here.
I have what I call question four A.
So question four of the Four Questions is, did we do a good enough job?
To me, question four A is asking the participants: would you do this again?
If they are able to come back to me and say, this had value for
(53:52):
me and I will do it again, I just proved the ROI of it.
Jim Manico (53:56):
And if you bring
pizza to the meeting, that will
go up.
Matt Coles (54:02):
Yes, proven.
Proven over the
Chris Romeo (54:05):
That's allowed.
That's allowed.
Alright.
Well, folks, we're coming towards the top of the hour here.
Jim, I got one lightning round question for you, 'cause I haven't interviewed you in a couple of years and I haven't heard your take on this.
It might be kind of a hot take, but that's okay.
We've had plenty of those throughout the episode.
(54:26):
Where do you stand on the whole shift left thing?
Jim Manico (54:30):
I mean, I can debate either side.
I can debate shift left.
John Steven, um, is giving me a lot of reasons to wanna shift right, actually: let developers crank and deal with it later.
So I see emerging research and emerging intelligent discussion on both sides of that.
I'll say this: I generally like the idea of shift
(54:50):
left, but I'm not tied to it.
There's a lot of good research and processes that don't believe in that, that are still successful.
So I think it's one modality that can be really good for application security, but I'm not religious about it.
There's other ways to go about things successfully.
There you go.
Chris Romeo (55:12):
That's a great answer.
It's a great way of describing it.
Um, I think I fall into the same category of: we can shift left, but we can also shift right.
And, uh, yeah, so I like where you landed there.
So, uh, folks, we're gonna wrap up this episode.
Thanks to Jim for being a part of this and joining us on the Security Table.
Um, this'll be available as a recording, both in podcast form
(55:33):
and on our YouTube channel, so people can go back and listen again or share it with other folks.
Um, you can also find Jim... Jim, uh, didn't even mention it, but Manicode Security is what Jim does in his day job, as well as advising lots of other startups out there.
So check him out from that perspective.
Follow him on Twitter, find him on LinkedIn, and, uh, he's a wealth of knowledge, and I always enjoy an opportunity I get to, uh, interact with you, Jim, and learn from you.
Jim Manico (56:01):
It is my pleasure.
I'm a big fan, Chris and Izar and Matthew.
Thank you for having me on the show.
I had a great time.
Matt Coles (56:08):
Excellent.
Thank you.