
September 26, 2023 56 mins

Matt and Izar join in a debate with Chris Romeo as he challenges the paradigm of "scan and fix" in application security. Chris references a LinkedIn post he made, which sparked significant reactions, emphasizing the repetitive nature of the scan and fix process. His post critiqued the tools used in this process, noting that they often produce extensive lists of potential vulnerabilities, many of which may be false positives or poorly prioritized. He underscores the need for innovation in this domain, urging a departure from traditional methods.

Izar gives some helpful historical context at the beginning of his response. The discussion emphasizes the significance of contextualizing results. Merely scanning and obtaining scores isn't sufficient; there's a pressing need for tools to offer actionable, valid outcomes and to understand the context in which vulnerabilities arise. The role of AI in this domain is touched upon, humorously envisioning an AI-based scanning tool analyzing AI-written code, leading to a unique "Turing test" scenario.

Addressing the human factor, Izar notes that while tools can evolve, human errors remain constant. Matt suggests setting developmental guardrails, especially when selecting open-source projects, to ensure enhanced security. The episode concludes with a unanimous call for improved tools that reduce noise, prioritize results, and provide actionable insights, aiming for a more streamlined approach to application security.

Chris encourages listeners, especially those newer to the industry, to think outside the box and not just accept established practices. He expresses a desire for a world where scan-and-fix is replaced by something more efficient and effective. While he acknowledges the importance of contextualizing results, he firmly believes that there must be a better way than the current scan-and-fix pattern.

FOLLOW OUR SOCIAL MEDIA:

➜Twitter: @SecTablePodcast
➜LinkedIn: The Security Table Podcast
➜YouTube: The Security Table YouTube Channel

Thanks for Listening!


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Chris Romeo (00:09):
All right. Hey folks, welcome to another episode of the wild and wacky world of the Security Table. And you could only see the five minutes after we hit the record button up until now, as far as what just, what just went down, what just happened, but unfortunately, due to content restrictions and YouTube rules, we're not allowed to play that

(00:30):
part right there. But, uh, I'm Chris Romeo, joined by my good friends, Izar Tarandach and Matt Coles, and we talk about all things security around the security table. And so I kicked a hornet's nest, which I've become fond of doing now. Um, but I kicked a hornet's nest real good.

(00:52):
And so we're going to talk a little bit about this post I put on LinkedIn, and just kind of the reactions that we got from some different people, as well as Matt and Izar's perspective on this. So, um, I put a post out a couple of days ago, and we'll put a link to it in the show notes in case you want to jump in. And the premise of the, of the post is: the hamster wheel of

(01:13):
Scan N Fix. And so my, my premise is that in application security, we are on this hamster wheel of scan and fix. And so I started thinking about, okay, where do we get this from? Like, why do we have this pattern of tooling that is: scan something, and then generate a list of 10,000 different things that have to be fixed?

(01:35):
And so I started thinking about, okay, what's the earliest tool that we are aware of in AppSec? It's SAST, Static Application Security Testing. But SAST didn't create this pattern. Vulnerability scanners, way before SAST, created this pattern of: let's scan something, let's generate anywhere from 100 to 100 million results, and let's put it into a list and send it to somebody. But SAST is really

(01:57):
where it kind of entered the picture from the AppSec world, the AppSec perspective. And so I think this pattern is, is just wrong. I think it's just broken, and I think we've seen a history of the challenges that following this pattern does in working with developers. And so I started thinking about: is AI the answer? Can I get an AI bot that will create a PR and fix my problems

(02:19):
for me? I'm still doing scan and fix at that point, it's just fancy robot fixing, so I don't think that's the answer. Is it RASP, IAST, the pattern of RASP and IAST, which I think of as: view a request, and then block or allow the request? Could we do SAST inside the runtime? But then I start thinking about that, like, now I think, because of performance issues and just the strangeness of where we place that control.

(02:42):
And then my last thought was: do we do this in the IDE, like people have been talking about forever? Does that get us close enough to the developer to fix the problem? So with that, I think I've, I've, I've spent enough time setting up what this post was. And it's funny what happens when it's a Thursday night and you have to write something, or Wednesday night, you have to write something for a weekly newsletter.

(03:03):
You can't think of anything. And so I literally went, and do you guys know Travis? Um, what's Travis's last name, from Resourcely? That Travis McPeak who was at Netflix? Um, he had a post that basically said scan and fix is wrong, and I'm like, yeah, let me write some more paragraphs that go with that. And all of a sudden I had a, a hornet's nest that I kicked.

(03:24):
So, all right. What do you guys think? What's your reaction to this? Do you think... Matt's giving me that look. Matt's like, he's ready to, to, to fight over this. So I love it. Let's go.

Matt Coles (03:36):
hear Izar's response first because, uh, yeah.

Izar Tarandach (03:43):
So, the year was 199x, right? And the thing coming out was SAINT and SATAN, I forget which one came first, then, uh, then Farmer and, uh, Wietse Venema. And all of a sudden, everybody started looking around their

(04:04):
Unix boxes and seeing these, these processes that were scanning them from the outside and checking for open services. And all of a sudden, all this overflow thing and all kinds of different things, and checking for configurations. And this thing is too permissive and that thing is too open.

(04:24):
And that was actually the first time that I, that I met ScanNFix. And then, a bit later on, we, I worked with a good friend of ours in a company called Netect, creating a scanner called HackerShield that was doing basically: scan somebody,

(04:45):
please come and fix. And that was 1997, 8, 9? Something like that. And that's when I started building my own "what the hell is happening here" with the scan and fix thing. And very quick, it's, to me at least, it became clear that it

(05:08):
was, uh, uh, a quote-unquote "solution" to the problem of, again, a black art needing to be made available, commoditized, and the constant search for a silver bullet.

(05:29):
Me as, me as a network engineer, me as a system administrator: I want this tool that I point in the direction of my box, I click a button, and it gives me out a list of things that I need to fix in order to be secure. And that gives me psychological safety on a bunch of different

(05:49):
things. First of all, I have the imprimatur of a tool, a recognized tool, that was written by people who know their stuff, that actually comes and says: this box has been scanned. And, uh, I can go to sleep at night knowing that somebody who knows more than me took a look over me. Then I have the psychological safety that, uh, I don't have to

(06:13):
assume the responsibility, because the tool itself is responsible for what it says, and if there's a problem down the road, I can always point at the tool and say: the tool didn't tell me that. Because at that time, and even today going forward, that kind of knowledge is... not immediately available. So you're basically hiring an expert in that scan cycle.

(06:39):
And then there's the fact that, uh, it, I think, and I could be wrong, but I think that it's much more natural for a human being to produce something, focusing on those things that interest them, and then putting it on the table and telling people: now, somebody who knows more than me about other things, come and look at it and tell me what needs to be different.

(07:01):
So the scan and fix, to me, it's something that came, that fit very well in that model of first-I-build-and-then-I-test, first I build and then I bolt on the security. And, um, even if we look at SAST, way back in the day, I

(07:22):
don't remember the name of the scanner, it wasn't ISS, it was something else, but it would go over a C program and basically just tell you: hey, you're using memcpy here. Or things like that, or even, uh, Microsoft's, uh, hmm?

Matt Coles (07:38):
RATS probably, or Flawfinder.

Izar Tarandach (07:39):
Probably, yeah, RATS, RATS, RATS. And, uh, then there was Microsoft with the include... What was the name of the include that you could give, and when it

Matt Coles (07:48):
uh, dangerous functions.

Izar Tarandach (07:50):
Dangerous functions.

Chris Romeo (07:51):
the library.

Izar Tarandach (07:52):
And then,

Matt Coles (07:54):
Uh, banned, banned.h, I think it was,

Izar Tarandach (07:56):
Right. And then, uh, GCC started including some stuff in its, in its warnings about security, right? But that, that was later in the thing. But, uh, so, so it wasn't even SAST, it was first like the, the network scanner, the thing that looked out from the box, that started, in my opinion, the whole, in my memory, the whole, uh, uh, scan and fix cycle, and SAST only came, came later, and,

(08:22):
and, we could, we could even say that these were the first forms of DAST, I think, right? I, I, I remember bugs like the, the Palmito bug, that went against, uh, ProFTPD, Jordan Ritter found it in, I think, 97, 98, and it was already doing the whole handshaking, and then at a

(08:43):
later time into the protocol, it would do the, the, the buffer overflow. So you could say that that's an early form of DAST, extremely focused at, at that specific, what became later a CVE, but, but DAST. So I, I think that in, in your writing, you, you pulled SAST as the first thing.

(09:04):
So, I, I just think that it's like, it's the other way around, and that matters, exactly because of the thing that: I have my box, I point something at it, it says that I'm okay, I can go to, to, to sleep, fine.
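
To make the early pattern the hosts are recalling concrete: the RATS and banned.h style of checking amounts to little more than pattern matching for calls to known-dangerous functions. Here is a minimal sketch in Python; the banned list and the sample C source are illustrative only, not taken from any tool named above.

```python
# A rough sketch of the earliest SAST style discussed above: flag calls
# to "dangerous functions" by pattern, in the spirit of RATS or banned.h.
import re

BANNED = {"strcpy", "strcat", "sprintf", "gets", "memcpy"}  # illustrative list
CALL = re.compile(r"\b(" + "|".join(sorted(BANNED)) + r")\s*\(")

c_source = """\
char buf[16];
gets(buf);              /* classic overflow risk */
memcpy(dst, src, len);  /* flagged even if len is carefully checked */
"""

for lineno, line in enumerate(c_source.splitlines(), start=1):
    for match in CALL.finditer(line):
        print(f"line {lineno}: call to {match.group(1)}() needs review")
```

Note the second finding: the sketch has no idea whether len is safe, which is exactly the noise problem the conversation keeps returning to.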

Chris Romeo (09:16):
Yeah, but those, those scanners in those early days were vulnerability scanners, right? Like

Izar Tarandach (09:21):
Without being called so.

Chris Romeo (09:23):
Nessus was not a DAST. It wasn't testing application-level issues. It was testing for, a lot of times it was looking at a banner and saying: this banner says 1.85.

Matt Coles (09:34):
right?
a port And

Chris Romeo (09:35):
a vulnerability in 1.85, and you've got a

Izar Tarandach (09:37):
Right, but the

Matt Coles (09:38):
a vulnerability scanner. And a configuration scanner.

Izar Tarandach (09:41):
but the example of, uh, sorry, the Palmito bug, the ProFTPD one, as implemented in HackerShield, for example, already put you much closer to a DAST, because it was actually building a payload, exercising the protocol, and then inserting the payload. So it was later on. It wasn't just checking the version and saying: oh, the version looks old, so maybe you have this thing.

(10:05):
So I think that my point is that, uh, it started from looking from the outside in, it moved into looking at the code as it gets written, and then it got deeper and deeper and deeper in the code, as it's compiled, as it's running. And then we got to where we are today with, with RASP and IAST

(10:27):
and all the good stuff.

Matt Coles (10:28):
Yeah, so,

Izar Tarandach (10:30):
Now...

Matt Coles (10:31):
can I get a word in there too?
I'm serious,

Izar Tarandach (10:32):
Hahahahaha

Matt Coles (10:35):
Thank you for the, thank you for the history lesson. I, I, I just want to highlight, I, I think we're, we're, it's, the history lesson is great. I 100 percent agree with you. I think, and you touched upon this, but I think it's really important just to reiterate this, at least, I, I think, on this side of the fence of this argument, is that, um, the tools themselves, uh, are not the

(11:00):
problem. The tools are a, uh, uh, an easy scapegoat for why the tools exist, why scan and fix as a cycle exists. It's similar to the quality problem, right? Developers are not infallible, designers are not infallible.

(11:22):
You have to have a way, a scalable way, of analyzing a system for defects, whether you're looking at quality or security, and tooling, automation, helps with that. Part of the problem is the tools have gotten very noisy over time. So, I want to, I'm going to be careful, and I'm going to try to

(11:42):
inform this properly, and table that for just a moment, but I'm going to introduce that concept of noise in these tools. We have a need for building quality in. We have a need for building security in, and the way we do that is either we ensure that what gets written is secure.

Chris Romeo (12:01):
You.

Matt Coles (12:02):
and in order to do that we have to do analysis, right? Or we have to have very defined patterns. But because we want developers to have to bring their intelligence to bear and their creativity to bear, we don't make everything cookie cutter. And so you have to have a way of analyzing code, looking for security and insecurity patterns.

(12:23):
You have to have a way of looking at components, looking for components that have vulnerabilities. And, due to, in part due to the complexity of the systems we're looking at, and the noisiness of these tools, this problem of: I have so many things now to look at, what do I catch first, and

(12:44):
then I have to keep iterating and iterating and iterating, so it drives a, it drives an organizational pattern that drives a program pattern, a program process pattern, that utilizes the tool in this iterative approach. And so I think that's, that's fundamentally the problem of why, why there's this perception of a hamster wheel of scan and

(13:04):
fix. But we have a need for this because, A, humans are, humans make mistakes. As Izar rightly called out, the tool provides a certain amount of assurance, but the tool also brings with it a certain amount of noise. And so, we need to overcome those challenges to,

Chris Romeo (13:24):
That is the tool,

Izar Tarandach (13:26):
No, wait.

Chris Romeo (13:27):
challenges are the tool.

Izar Tarandach (13:29):
No, no, no, no,

Chris Romeo (13:30):
disconnect the tool from the challenges in the tool. The tool has the challenges, and so somebody has to build something better. That was my whole point. We need a better pattern. The tools need to implement a better pattern than scan and fix. There's gotta be a better way.

Izar Tarandach (13:44):
At the end of the day, scan and fix is a response to something existing, and to somebody needing to rent knowledge. Right?

Matt Coles (13:53):
and by the way, if you didn't scan, what would you
do?

Izar Tarandach (13:56):
Exactly. So you are renting knowledge. You are renting what was in the head of somebody else that knows that stuff well, who coded that in a certain way that can be used in a tool to figure out those things. So,

Chris Romeo (14:10):
the head of somebody, I'm gonna draw the illustration further, who has forgotten random things in the midst of all the things that they know, and tends to see things that don't exist. And so it's not, you know, you're not renting a

Izar Tarandach (14:25):
wait, wait, wait, no, no, no, no, no, wait, wait, wait, wait, wait. Let's look at things the way that they are, not the way that people are saying that they are going to become soon. A scanning tool is basically a decision tree. If you see this, and you see this, and you see this, and you see this, chances are that you have a problem.

(14:45):
Now, the verbosity of them is erring on the side of caution. It was decided at some point that it was much better to let people know there might be a problem than to shut up about it and be bitten by it. The problem is that we have so many chances now for problems to

(15:06):
appear, and we have so much complexity in the way that those problems may appear, that the amount of, uh, alerts at all different levels... right, is ridiculous. So, rather than break the pattern, which I still see as a pattern that does have value, especially as the knowledge that

(15:27):
we have to rent gets better and better, I think that, going to what Matt said, it becomes a problem of prioritization. Now, that prioritization, again in my opinion, could be completely wrong. A huge part of it is contextualization. And contextualization will only come from knowing what are you

(15:50):
scanning and what's the environment that that thing operates in or lives in. And that's when you take a step back from the scan and fix to the let's-understand-the-environment where this thing that I am scanning lives, and look at other factors that all of a sudden may inform, contextualize, enrich all the

(16:12):
different aspects that I have, that my scan is providing me, and that the rules are acting on top of, to actually give the customer the top things that they have to deal with. Does that make any kind of sense?
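
Izar's two claims here, that a scanner is essentially a decision tree, and that results only become actionable once contextualized, can be sketched in a few lines. This is a toy illustration under assumed data structures; the rule, the scoring weights, and the environment fields are all hypothetical, not any real product's engine.

```python
# Toy scanner: a rule is a chain of cheap observations ("if you see this,
# and you see this, chances are you have a problem"), and prioritization
# re-weights each finding by the environment it lives in.
from dataclasses import dataclass

@dataclass
class Finding:
    rule_id: str
    base_severity: float   # 0-10, as reported by the rule

def rule_hardcoded_secret(line: str, path: str) -> bool:
    lowered = line.lower()
    return ("password" in lowered and "=" in lowered
            and not path.endswith("_test.py"))   # crude noise reduction

source = {
    "app/db.py": ['password = "hunter2"'],
    "app/db_test.py": ['password = "dummy"'],    # excluded by the rule
}

findings = [Finding("hardcoded-secret", 6.0)
            for path, lines in source.items()
            for line in lines if rule_hardcoded_secret(line, path)]

def contextual_priority(f: Finding, env: dict) -> float:
    score = f.base_severity
    if env.get("internet_facing"):
        score *= 1.5        # exposure raises urgency
    if env.get("behind_waf"):
        score *= 0.8        # compensating control lowers it
    return min(score, 10.0)

env = {"internet_facing": True}
for f in sorted(findings, key=lambda f: contextual_priority(f, env), reverse=True):
    print(f.rule_id, round(contextual_priority(f, env), 1))  # hardcoded-secret 9.0
```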

Matt Coles (16:26):
And by the way, if it wasn't scan, I mean, so, if it was manual code review instead of SAST, we would still have the same problem. Right?

Izar Tarandach (16:35):
With the added problem that you are not renting recognized knowledge; you are... trusting the knowledge that you have in place.

Matt Coles (16:43):
Well, and you'll probably have to hire a ton of people to scale to the level that you can execute a tool over. Same thing for looking at a, at a, you know, pattern-matching rules for vulnerabilities. You know, looking at a component inventory, or looking at a system and doing fingerprinting of binaries and looking and doing manual matches, right? The tool replaces the human.

(17:04):
Automation always improves productivity because it does better what a human can do by hand. It does it faster, usually.

Chris Romeo (17:13):
on.

Matt Coles (17:13):
It does it better, usually.

Chris Romeo (17:15):
Hold on.
So if we had enough experts, let's do a little thought experiment here. If we had, let's just assume we have unlimited experts

Matt Coles (17:26):
And unlimited money, because you'd have to pay them
all, right?

Chris Romeo (17:28):
Yeah, but the thought experiment isn't
considering money.
It's considering the, it's good.
Well, cause you said

Matt Coles (17:34):
the challenges here, just to let you know.

Chris Romeo (17:35):
But no, no, you said, Mo, what you said was: automation is always better than manual, effectively, is what you said. And so my point is, if I had an unlimited number of, of experts, automation would not be better than manual. If I had an unlimited number of, of, uh, Jim Manicos, who know a lot about coding, secure coding, in a lot of different languages, that could look at the code, and we had unlimited time, would,

(17:58):
would an army of Jim Manico clones come up with better results than running a SAST tool?

Izar Tarandach (18:05):
That depends. Does the SAST, does the SAST tool have the same decision tree that Jim Manico has in his mind?

Chris Romeo (18:11):
I don't think any, I don't think any SAST tool at this point has the decision tree that's in Jim Manico's mind, just because he's experienced that. Like, SAST tools don't experience things. They're just, they're just rules, where somebody tried to capture something and tried to, uh, make it so that it's not so noisy.

Matt Coles (18:27):
so for any

Izar Tarandach (18:28):
what you're

Matt Coles (18:28):
so for any given rule set, sorry, for any given rule set, for any given rule set, the tool is going to be... more efficient at executing those rules than an army of humans, because humans will, will almost certainly miss something. Even if the

Chris Romeo (18:44):
you're assuming that you can, you can load that engine with the rules that are in Manico's head.

Izar Tarandach (18:50):
No, we are assuming the fact that, given that all... Let's go philosophical here for a second. Given that all humans are fallible, and tools are made by humans, tools are fallible. Now, in your thought experiment, you say: if I had an infinite amount of Jim Manicos. It could be argued that, given the same input, an infinite

(19:12):
amount of Jim Manicos would produce the same output all the time. Because he himself has his own process, and I really hope, Jim, if you're listening to us, it's coming out of love. So, given the same input, he would produce the same output. Why? Because in his mind, he has a certain decision tree that he uses most of the time, and that we are trying to reproduce in

(19:33):
our tools.

Chris Romeo (19:34):
I mean, if you're talking about the same piece of code, yes. I would say, if you're talking about scanning different examples of code, or having Jim review, an army of Jims review, multiple different pieces of code, he, the thing that he could bring to the table that the scanning tool can't is experience. His mind can interpret things he's seen before and knit them together and see new things.

(19:54):
That's my point, is that a tool can't do that. A tool's only as good as the person

Izar Tarandach (19:58):
but, no, no, no, no, no, no, wait, wait. If you take SAST into consideration, okay, and we have seen enough approaches to SAST, and I am by no means an expert or a researcher in SAST, but you have seen the, the grep approach, you have seen the, uh, let's-interpret-the-code-and-play-the-tainting approach, you have seen the in-the-language

(20:19):
approach of Perl's tainted mode, right? So what they try to do, in a certain way, it's either look for patterns, or look for, uh, an interpretation of the data coming in and how it gets transformed and what goes out of it. And these are all rule-based at the end of the day.

(20:40):
What you're proposing with the experience is: because I have seen many different things, my set of rules is much richer.
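
For listeners who have not seen the two SAST families Izar names: the grep approach matches textual patterns, while the tainting approach follows untrusted data from a source to a dangerous sink. Here is a toy source-to-sink model; all names and the program representation are hypothetical, not from any real engine.

```python
# Toy taint tracking: untrusted data enters at sources, propagates through
# assignments, and must not reach a sink unless a sanitizer cleaned it.
SOURCES = {"request.args.get"}       # where untrusted data enters
SINKS = {"cursor.execute"}           # where raw untrusted data is dangerous
SANITIZERS = {"sql_quote"}           # hypothetical cleaning function

# A program modeled as (assigned_variable, called_function, argument) triples.
program = [
    ("user", "request.args.get", None),     # user = request.args.get("id")
    ("query", "str.format", "user"),        # query = "... {}".format(user)
    ("_", "cursor.execute", "query"),       # cursor.execute(query)
]

def scan(program):
    tainted = set()
    for target, call, arg in program:
        if call in SOURCES:
            tainted.add(target)              # taint enters
        elif call in SANITIZERS and arg in tainted:
            pass                             # cleaned; target stays untainted
        elif arg in tainted:
            if call in SINKS:
                yield f"tainted '{arg}' reaches sink {call}()"
            else:
                tainted.add(target)          # taint propagates

for finding in scan(program):
    print(finding)   # tainted 'query' reaches sink cursor.execute()
```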

Chris Romeo (20:49):
Correct.
That's what I am.
I am arguing that point.
Exactly.

Izar Tarandach (20:53):
now, now we come to where we are in, in tech, and I hate to go there, but everybody's going there, so why not go there? And you train your model and you retrain your model and you fine-tune your model, and you're going to have an amazing model for that specific use case, right? Because it's, it's very difficult to have a very good fine-tuned model for the generic case.

(21:15):
So everybody's going there. Everybody's saying that AI is the next thing. I'm going to have a lot of, uh, a really fun time when I have to sit back and watch an AI-based scanning tool scan the code written by an AI, right?

Chris Romeo (21:33):
Infinite loop, we'll be

Izar Tarandach (21:34):
And not infinite loop, but I'm going to enjoy very much that discussion between both of them. It's going to be the new Turing test. But, uh, my point is that, at the end of the day, we are trying to rent that knowledge, and we are trying to duplicate that knowledge in, as Matt said, scalable ways, that you can come and apply again and again and again at

(21:56):
larger scales. My point is that we are going to find more and more and more issues and findings, but we still have to prioritize them. And for that you need contextualization.

Matt Coles (22:07):
so to break, to shortcut this a bit, I think where you're getting with, from the hamster wheel, uh, post that you made originally, was: the tools are noisy and they generate a lot of results. And so you have to have, you have to, developers need time to fix, you have to do scan and fix and scan and fix and scan and fix. The way you solve that is you make, you make the results be prioritized, more actionable, less noisy.

(22:30):
So reduce false positives. I will try to eliminate false negatives. Right? Because that's, if you don't catch it in SAST, you're going to catch it in vulnerability scanning, or you're going to catch it in fuzzing, or you're not going to catch it at all, and it's going to go out to the field and be reported back to you, and now you have a bigger loop. Uh, and so, you need the tools to be smarter, because that will

(22:50):
reduce the noise and allow the humans to make intelligent decisions about which to fix first. But you're still not going to ever solve the problem, because developers are introducing bugs into the system. Until that stops, you still have to have analysis, and until you, I mean, you, you don't, you don't, if you don't, if you don't do QA and you ship something

Chris Romeo (23:13):
everything, everything you just said about what needs to get better in the tools, people have been saying it for 20 years, and nobody's done it.

Matt Coles (23:20):
and it hasn't happened yet.

Izar Tarandach (23:22):
But

Chris Romeo (23:22):
why we need a new pattern. That's my whole point, though. That pattern, I'm saying that pattern cannot be kept to the

Matt Coles (23:27):
What pattern?
Oh, which, which pattern?
Which

Chris Romeo (23:30):
and fix, the scan and fix pattern

Matt Coles (23:32):
The scan and fix pattern? But not the tools, not

Chris Romeo (23:35):
all right, Izar's going to fall out of his chair
here.

Matt Coles (23:36):
not the tools.

Izar Tarandach (23:42):
So Matt, Matt got so close to it, so close to it, so close to it. The thing is

Matt Coles (23:46):
the tool.

Izar Tarandach (23:47):
No, it's not the tool, it's not the tool, it's the human. Now, the thing is, we don't have to break the pattern. We have to put the pattern, my opinion, we have to put the pattern where it belongs. We have to place the pattern in the bigger pattern of things. Now, we keep designing infinite loops and circles and whatever,

(24:12):
okay. And, and,

Matt Coles (24:13):
Mobius strips and all.
Yeah, that whole works.

Izar Tarandach (24:16):
whatever, right? And, and, and, and, that's that, that's the nature of the beast. We, we start with the MVP and we grow to incredible companies. And the thing is cyclical, but Matt said, said, said, right: the problem here is that, that people are fallible, and they continue being fallible. And if we give them tools, and if we change the environment they

(24:36):
operate in, they're going to find new ways to be fallible in those new environments, right? So it's like, we keep trying to do, what's the saying? We keep trying to make idiot-proof things, and the universe keeps making better idiots. So,

Matt Coles (24:53):
We love all developers equally, but, uh,

Izar Tarandach (24:54):
yeah, yeah, so the, the, the, the point here, and, and yes, I am going to go there, I am going to go there. I will get there. Uh, the thing that's going to break, NO! The thing that's going to put that loop into its place and make things better is only that tool that we all know and love that says: why don't,

(25:18):
before you do something, you look into it and you see what could go wrong, right? Because,

Chris Romeo (25:24):
we had a solution, or we had an idea that would do
that.

Izar Tarandach (25:28):
it's, it's, it's, it's where you break the cycle, right? It's, it's where you say: rather than just building my thing, putting it on the table, scanning, and hoping that somebody tells me what's wrong, why don't I ask those hard questions first, and I already... apply some forethought, right? I could even use the best experience of a scanner, the best experience of, uh, IAST, RASP, things that I've seen,

(25:51):
threat intelligence, threat hunting, all that input, and say what's next on the next round. And hopefully make things like, that might help make the, the, but there's another thing too. We talked about prioritization. Prioritization is not only about pointing at the important things at the top of the list that you

(26:11):
have to do first. That's probably the most important part of the prioritization. The other part of prioritization is: what do I do with the rest of the stuff? At which point do you start cutting your losses and say: I accept that risk? And that only comes from understanding that risk, and understanding your environment enough so that you can say:

(26:32):
those 10,000 things are fine.
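
Izar's second half of prioritization, deciding what happens to everything below the cut line, is worth making concrete. Here is a sketch of explicit risk acceptance, as opposed to the implicit kind Chris describes next; the record fields are hypothetical, not any standard's schema.

```python
# Implicit acceptance is 10,000 tickets rotting in a backlog. Explicit
# acceptance records who accepted which finding, why, and for how long.
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskAcceptance:
    finding_id: str
    rationale: str
    accepted_by: str
    review_by: date        # the acceptance expires and must be revisited

backlog = ["F-1001", "F-1002", "F-1003"]
accepted = [
    RiskAcceptance("F-1002", "internal-only service behind VPN",
                   "appsec-lead", date(2024, 6, 1)),
    RiskAcceptance("F-1003", "compensating control: WAF rule 88",
                   "appsec-lead", date(2024, 6, 1)),
]

# Everything not deliberately accepted stays visibly on the fix list.
accepted_ids = {a.finding_id for a in accepted}
to_fix = [f for f in backlog if f not in accepted_ids]
print(to_fix)   # ['F-1001']
```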

Chris Romeo (26:36):
I mean, and that's the challenge today, right? That risk is being accepted. It's just nobody's going through a formal process of doing it. They're just leaving 10,000 items on the backlog that
Izar Tarandach (26:45):
There's a difference between accepting the
risk and putting it under the

Chris Romeo (26:49):
mean, they're accepting it. They're accepting it. They're just not willingly accepting
Matt Coles (26:53):
implicitly accepting

Izar Tarandach (26:54):
yes.

Chris Romeo (26:55):
They're not making a statement. They're not going: we're accepting all this. But they're accepting it anyway, by not, by not... a lack of action is an acceptance of the risk. Not, I wouldn't go to court and make that argument, but,

Izar Tarandach (27:08):
because what we are missing today, I think, is the understanding of that risk. And again, the contextualization of that risk in terms of where we are. It could be that something that's critical... let's not go there.

Matt Coles (27:19):
Yeah.

Izar Tarandach (27:20):
Something that's a medium in my environment could well keep being a medium, and in your environment, because it can be chained with three other mediums, all of a sudden give somebody the opportunity, the means, somebody who has the motive and the inclination, to go and do something bad. And at that moment, it becomes a critical.

(27:41):
So even if CVSS didn't step first and said: it's a critical, panic, panic in the streets. And we should talk about CVSS at some

Matt Coles (27:48):
CVSS doesn't actually do that, but go ahead.

Izar Tarandach (27:50):
Oh no, but it doesn't do that. But that's the way that we decided to use it. Because we don't have anything better.

Chris Romeo (27:57):
Yeah.

Matt Coles (27:57):
So, so let me just add two other things to that, that, that list of stuff. So we, we probably, in order to help reduce this problem of volume, noise, right, is we need to look at, um, one, one probably obvious thing is we need our tools to be smarter, in that we need to stop looking at, we need to maybe reduce our

(28:21):
reliance on purely signature-based findings,

Izar Tarandach (28:25):
Yes.

Matt Coles (28:26):
simply, do you use gets or sprintf, is a lot of volume, right? In old legacy codebases.

Chris Romeo (28:35):
them.

Matt Coles (28:36):
But is that necessarily effective? Probably not, because it misses all the control flow and data flow information. And those are the other, you know, analysis techniques that were pioneered over the years around SAST to get better accuracy on results, so that a single finding wasn't simply: oh, look, you have a variable called password,

(28:56):
therefore you probably have a problem with cleartext passwords. No, I have a variable that has a password that goes into a UI element in plain text; that's a password problem, you know, an in-plain-text problem. So you get contextual information. So we have to, we need our tools to be a little bit smarter in providing actionable, valid results, not simply: you have

(29:17):
OpenSSL, you know, 0.9.8a, you have a, you have a vulnerability, all these other vulnerabilities, right? So very basic signature things we probably need to reduce. The other piece we need to, need to consider there, maybe a little bit more, um, uh, uh, aggressive and/or unpopular, will be: take developer choice and lessen, lessen full

(29:39):
developer choice. Today, how many GitHub projects are there of open source code, and how, what guidelines are there for developers to choose what they embed for technology?

Izar Tarandach (29:50):
Yeah.

Matt Coles (29:50):
And then, and that's going to be very unpopular, but
guardrails.

Chris Romeo (29:54):
So, hold on, let's, let's unpack that before we even go any further. Like, uh, Matt is prescribing an authoritarian approach to development, which, okay, that sounds more like, what you just described sounds more like guard, more than guardrails, right? Because you're talking about, you're making design decisions for people now.

(30:14):
You're not giving them the freedom to operate. I think of guardrails,

Izar Tarandach (30:17):
No, no, no, no, no, no. No, it's not what he's doing. By putting guardrails, he's perhaps limiting the options of choice that you have to use somebody else's work. But he's not saying: change your design.

Matt Coles (30:29):
Or which pattern, or, or potentially which patterns you implement, but, but to a limited set. Meaning, you're, you're, you don't, I, I think it's necessary. If you, if you don't put guardrails on what your choices are, whether it's component selection, technology selection, design patterns, then people are going to just invent stuff new, which requires a lot of effort to get right.

(30:50):
We've talked about this in past episodes. A lot of effort to get right, and it jumps you right into the scan and fix problem, because now you're introducing new problems. Now, it's limiting. I am a hundred percent, like I said, very unpopular, this fan mail, hate mail, whatever, is going to come along on this, I'm sure. But, you know, if you don't want to keep scanning and

(31:11):
fixing, scanning and fixing, you need to limit the ability for vulnerabilities or other issues to get introduced to your code base or your technology platforms. And this is a way to solve it. I'm not suggesting it's the way, or even necessarily that you should go this way, but putting guardrails on, on, on choice is a way to solve this.
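
A small sketch of what Matt's guardrail on component choice could look like in practice, say as a CI check against a vetted allowlist. The policy contents and names are invented for illustration; this is not a prescription from the episode.

```python
# Hypothetical guardrail: dependencies must come from an approved list,
# so developers keep freedom of design but not unbounded component choice.
APPROVED = {"requests", "cryptography", "flask"}   # vetted components

def check_requirements(requirements: list[str]) -> list[str]:
    """Return one violation message per unapproved dependency."""
    violations = []
    for entry in requirements:
        name = entry.split("==")[0].strip().lower()
        if name not in APPROVED:
            violations.append(f"{name}: not on the approved component list")
    return violations

# Example CI usage: fail the build on any violation.
problems = check_requirements(["requests==2.32.3", "leftpad==0.1"])
if problems:
    for p in problems:
        print(p)             # leftpad: not on the approved component list
    raise SystemExit(1)      # block the merge
```

The point is not the ten lines of Python; it is that a smaller choice space shrinks what scan and fix has to cover afterward.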

Izar Tarandach (31:32):
Yeah.

Chris Romeo (31:32):
Yeah, I think it's, it's a place we, I mean, I agree with most of what you're, what you're saying here. But the problem is, the reason I, the reason I'm still saying scan and fix, this pattern, doesn't work, is nobody's done it. We've had 20 years of SAST tools, and nobody's done what you described. Nobody's made it more actionable. Nobody has, everybody said, they'll all tell you on a sales

(31:53):
demo: oh yeah, we, we're all about fidelity of results, and we're about limiting false positives and avoiding false... Like, they all say these words, but here we are, still 20 years later, with the same piles. There are organizations that have tens of thousands of tickets that just get junked, because they come out of the tools, they go in, they get junked.

Izar Tarandach (32:12):
Chris, I think that we are generalizing a bit here. If you look at SAST... yeah, I'm going to agree with you. Even though, to be sure, the cycle got complete. We started with grep, basically we are back to grep now. Of course with AI coming in. But that grep got way, way smarter. But the counter example that I want to offer is: we started with

(32:36):
firewalls. Firewalls that did open and close. This port is open, this port is closed. Fine. Then they got smarter. We started getting packet inspection. Then, uh, security got smarter, we started encrypting packets, so packet inspection sort of went the way of the dodo. And then there was a strategic retreat: let's get back from the

(32:57):
firewall, let's start talking WAFs. So now that traffic is not encrypted, but before it gets to the application, let's apply rules again and check the traffic. And then the attackers got smarter and those rules got bypassed. So we did another strategic retreat, and we closed the wagons around RASP and IAST, and now we are checking at the code

(33:20):
level. So now we have that whole, let's use SQLi, you have that whole query package, and you can look at it before you actually act on it, or you can alert, or you can have an in-app WAF, and stuff like that. And that, to me, means that rather than defense in depth, we are being forced to bring the defense as close to the crown

(33:41):
jewels as possible. And what's left to do now is, because we are doing it so close to the crown jewels, all of a sudden we have, again, so much context that we can apply on top of those rules, to actually be able to say: this is a good invocation of a query, this is a bad invocation of exactly the same query. So somebody asking for, uh, uh, doing a select on my table of

(34:05):
credit cards: if it's coming from this endpoint, it's a good one. If it's coming from that endpoint, probably not a good one. If I've seen it already coming from that endpoint, okay, that just added some data that might influence the way that I think about it, about it being good or bad. And this way you start building a level of confidence on top of

(34:28):
that thing, which of course has to be fast enough so that you don't... break the whole thing. But you start building that confidence in a way that's smarter than just coming, raising the flag and saying: this is a bad query. Everybody stop. We have an incident.
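
Izar's credit-card query example is essentially a contextual allow/deny decision made at runtime. Here is a toy version of that RASP-style check; the endpoint policy, queries, and verdicts are invented for illustration, not any product's behavior.

```python
# Toy RASP-style check: the same query is fine from one endpoint and
# suspicious from another; prior sightings add context to the decision.
ALLOWED = {
    "/billing/charge": {"SELECT pan FROM credit_cards WHERE user_id = ?"},
    "/search": set(),     # this endpoint should never touch card data
}

seen: set[tuple[str, str]] = set()

def check(endpoint: str, query: str) -> str:
    if query in ALLOWED.get(endpoint, set()):
        return "allow"
    verdict = ("alert: repeat sighting from unexpected endpoint"
               if (endpoint, query) in seen
               else "flag: first sighting from unexpected endpoint")
    seen.add((endpoint, query))
    return verdict

q = "SELECT pan FROM credit_cards WHERE user_id = ?"
print(check("/billing/charge", q))   # allow
print(check("/search", q))           # flag: first sighting ...
print(check("/search", q))           # alert: repeat sighting ...
```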

Matt Coles (34:42):
You know, this is, this, this notion of, sorry, Chris, this, this notion of contextualization, actually really interesting if you think about it. SAST as a general-purpose tool needs to run across many types of codebases, to look for many types of issues, with a lot, a lot of context,

Chris Romeo (34:59):
no

Matt Coles (34:59):
as you get.

Izar Tarandach (35:00):
With no context.

Matt Coles (35:01):
a lot of context.

Chris Romeo (35:02):
no runtime, they have no idea what's happening in
the

Matt Coles (35:05):
Exactly. That's right. Because it's not dynamic. It's static. It's SAST, right? It's looking at code. It doesn't know necessarily what that code is used for. It knows that, that, how the code is structured, and it can analyze and say: oh, this is C code. And it does a certain set of things, but not that it's going to live within an IoT device or an enterprise server or a

(35:25):
desktop app or mobile app or whatever. It doesn't, it may not know that. Some of the tools sort of know that at a, you know, a macro level, but, but not generally. And so, uh, and, or, or it doesn't take action in that regard. It uses it for reporting, but not necessarily for, for analysis purposes. Uh, and, and so you need, but you need this because developers write code, and code goes in so many places.

(35:47):
You have to have these general, today we have these general tools. We could improve that by knowing about the target runtime or the target use cases, and figuring out how to tell the tool: okay, this is going to be used in embedded devices versus used in an enterprise server, and my network is going to be X, Y, and Z, and this is what it's going to... and then, then you can get that context. But as you shift closer to deployment, you're gaining more

(36:10):
and more information that you can

Izar Tarandach (36:12):
Yes,

Matt Coles (36:13):
use in those

Izar Tarandach (36:13):
yeah.

Matt Coles (36:14):
providing context. So RASP is probably a great solution at that level. It is probably a great solution at that level, because it has a lot of context to work from. It is not necessarily universally, it's not necessarily universally, accessible.

Izar Tarandach (36:30):
We have, we have. I am fortunate to work with amazing people every day who are doing this. So, yes, it's possible, and yes, it's a route. So, the point, I think, is that scan and fix by itself is not a bad thing. It may be badly used, it may be underutilized, and it may

(36:54):
generate results that are not optimal. But if you start putting it into context, by adding more and more and more understanding of where that scan and fix is happening, sorry, that scan is happening, you're going to have shorter, prioritized, contextualized cycles of fix.

(37:15):
So I want you to separate the scan and the fix, right? And in the middle we have to put an engine that says: context. As people say, context is king.

Matt Coles (37:23):
And we also need to make some other fundamental changes, like stop, stop standards development relying on simply the scan as a goal,

Izar Tarandach (37:32):
Yes.

Matt Coles (37:33):
right? So, you know, PCI DSS, I think, uh, you know, still requires scan, scan with a Thunderblaze scanner on, you know, every 30 days, and patch. That's a scan and fix,

Izar Tarandach (37:44):
But that goes to the reasonable security bit, right? You do what you can at that moment. It's better than not doing it at all.

Matt Coles (37:52):
But, but that's something we fundamentally need to change in the industry, because people get into that mindset of: oh, I need to run a scanner, I'm going to take the results, I'm going to patch on a priority list from 10 to 0, right? Again, using CVSS in a way that wasn't intended. Or, or, or rather, or making an interpretation of what information you get out of CVSS scores and severity ratings,

(38:14):
right? And, and doing this prioritization: oh, 10 is really bad, and, and 8 is not as bad, and therefore I can wait on those. Without knowing context about that ten in, in, in place in the network, versus that eight, you know, front end, I, you know, there's additional context information that you don't have when you're making those decisions.

Izar Tarandach (38:36):
Parenthesis: better things are coming with CVSS v4 than before, which Matt was one of the collaborators on, in the, in the group. So yeah, I've been there. I looked at it, and better things are coming.

Matt Coles (38:52):
Public preview is, uh, is ending, well, public preview is currently now, and, and hopefully it'll be released in the very near future, so available for consumption. Uh, but again, we have to, we have to interpret the results and make use of them in a different way than, than just simply taking a score and using it as a blind measure of, of insecurity.

Chris Romeo (39:10):
Let me summarize that, what I think I heard, and then I'll give you my final thought. Contextualization, then, is the argument for fixing scan and fix. Your argument is: scan isn't the problem, it's fix.

(39:32):
And fix is a problem because we don't have contextualization, we have too many results, we have too many false positives, too many false negatives. I mean, that all makes sense. I agree with it. It's just I haven't seen anything happen in 20 years that's getting me closer to that. So, like, it's tough to say we should stop, we should, we shouldn't think about another

(39:53):
pattern, when this pattern, that it has a good, I agree, that's a, that's a perfect, almost a perfect state: if you had contextualization, and you could get to the point where you gave developers five things. Here are five things that are real things, that are at the same level of a RASP finding. Like, that's the thing I love about RASP. If RASP detects a SQL injection, guess what?

(40:15):
You got a SQL injection. Because it's inside the app, it's watching it. It's watching it execute and then stopping it before it can do some damage.

Izar Tarandach (40:23):
Is it,

Chris Romeo (40:25):
That's a whole other, that's a whole other, don't get me, that's, that wasn't my, my, that wasn't the focus of my point. Um, but yeah, I mean, like, so like my, my conclusion, though, is: yeah, I agree. I would love for all those things that you guys described to happen. I've been waiting 20 years. Do I have to wait 20 more years in this industry to see it? I don't know if I got 20 more years of AppSec left.

Izar Tarandach (40:44):
Chris, I, I, I have to agree with the way that you put it. I have to disagree with the... not disagree, but I have to raise a bit of a problem here with the fact that you haven't seen anything in 20 years. In 20 years, the target has moved a lot. We are not defending the same things that we were defending 20 years ago. 20 years ago, you were defending an on-prem, uh, everything in

(41:07):
one box server, doing something. And today you are serverless, in the cloud, multiple cloud providers, multiple identity systems and whatnot. I mean, it's chaos.

Matt Coles (41:18):
And, and, and how many programming languages do
you have to

Izar Tarandach (41:21):
Exactly.
Exactly.
The runtimes, they just appear every day.

Chris Romeo (41:24):
that doesn't, that's not, that's not an excuse to have tools that don't, aren't better.

Izar Tarandach (41:29):
It is. It is. It is, because hitting a moving target is much harder than hitting a static target, right?

Chris Romeo (41:36):
But I mean,

Izar Tarandach (41:37):
Today, let me put it like this: with what we have today, if we were to scan a target from 20 years ago, you would have a very different opinion of the results. But we are not shooting at targets from 20 years ago; we are shooting at targets from today.

Chris Romeo (41:51):
but the point is, we're still doing the same. We, we still have the same approach to how we're scanning a ser... a piece of code and generating a, a, a series of results. And that's, that's my whole point here. Everybody has just bought into: this is how we do the... You know what the most, one of the most dangerous things is, anywhere in an organization, in a team, in anything?

(42:12):
Status quo. It's just how we do it. This is how we do static analysis. This is how we do processing results. When you have that happen, that's my whole point, though, is that there has been a period of time where nobody has thought about a better pattern, because everybody's been like: this is just how we do static application security testing. Thanks.

Izar Tarandach (42:30):
Look. I think that people who today point at Copilot, and say that things, tools like that, writing code together with the developer, are going to save us from scan and fix because the code is going to be perfect beforehand... first of all, they don't know what they're talking about. Second, the code may be perfect; the design sucks.

(42:50):
So, again, we keep looking at the silver bullet. There is no silver bullet. Nothing is going to, to,

Matt Coles (42:58):
no, finish your thought, sorry, I didn't want
to,

Izar Tarandach (43:00):
no, no, no, no, no, Matt, go,

Matt Coles (43:01):
So, the, the complexity keeps going up. Again, the, the boundaries of choice is still infinite. And, and I'm not, I'm not suggesting we must change that, but, but maybe it's something we need to look at. And, uh, and so, was it garbage in, garbage out? Uh, as you, as you build more complex things, your, your scope

(43:24):
of what you have to analyze for continues to increase. And the things you have to account for change, right? So containerization in Kubernetes is new, or was new, or, you know, became new at some point. And so all these things, uh, you know, need to be accounted for over time. And you don't know, I guess.

(43:44):
As, as an organization that has to make money scanning utilities, whether you're, I guess, open source projects, you could argue, you could have a developer who is really interested in solving this problem, who could solve it in one use case and then work on the next use case, next use case, because they're volunteering time and not trying to make money off of it. Right. And, and nobody's done that because nobody's taken the effort, or nobody has the brainpower to be able to do it

(44:06):
effectively. Or maybe it exists and we just haven't seen it yet. Right? In, in scale. So I think just because we haven't seen it in 20 years doesn't necessarily mean that it, A, can't be done, I would agree with you, but B, is it valuable in doing so? Is it feasible to do so? Um, would anyone use it if they did?

(44:27):
Uh, the other thing I would just add on the garbage-in part was, uh, is if we have a concerted effort, as CISA and others are trying to do, um, more recently, around doing things like memory safe languages, where developers can't introduce certain classes of issues, you reduce the problem set.

(44:48):
You start reducing that complexity and reducing the infinite choice problem, right? You cut off a class of errors. You don't have to scan and fix for those, right? If you solve design problems at design time and you architect a system appropriately, you're cutting off a slice of things you have to scan and fix for. If you start reducing the set of components or technologies that

(45:09):
you use, you cut off that slice, and you can reduce and then add context to the things that are left, and now you've reduced the problem. You're still scanning, you're still fixing, but you're doing it in a much more manageable way.

Chris Romeo (45:20):
if somebody builds a new pattern and comes up with something innovative, I don't have to scan and fix it all anymore. There'll be something else.

Izar Tarandach (45:26):
Hey, let's see CWEs, let's see CVEs, right?

Matt Coles (45:28):
If you, if you can solve, if you can solve how quality assurance works. Because this is exactly, this is quality assurance, right? We just replaced, we just replaced people doing automated testing with a scanner.

Izar Tarandach (45:42):
Yep.

Chris Romeo (45:43):
Oh, here's my, here's my challenge to all the, all the entrepreneurs out there: dream up a better pattern and bring it to market, and see what happens. I think you're going to have some, some very interesting results.

Matt Coles (45:55):
It's the same, it's the same pattern. You're just using, getting better information in the output. It's the same pattern. You're scanning with context, and then you have to fix them. Scan and fix.

Izar Tarandach (46:04):
But Chris, I, I, I, now, now putting on my professional hat: what Matt said is compounded by a lot of different things. The complexity that's going up, it's not only the security problem that goes up. It's all the associated bits of the ecosystem that come together

(46:26):
to create what we call today systems, that are put in place and have to run at five nines and whatnot. It makes the problem of scanning more complex; it makes the problem of contextualization even more. And then we compound that by the fact that we have so many personas out there who want to use these scanners, and each one

(46:50):
of them is expecting a different level of fidelity, of, uh, uh, quality, and, and of hands-off work. Right. So the, the, the five-person, uh, startup: they're expecting a silver bullet that's going to tell them you're doing everything right and you're secure. The company with the SOC and all that good stuff, they are

(47:13):
waiting for something that will help them in their processes, that they developed in house, that work for them, that match their organization. And that added a challenge of being able to serve all these different parts of the public in the way that they expect to be served, and not having to tell them a story saying: here's what I'm offering you, and here's why I think that this is good,

(47:35):
the right solution for you. It's mind-numbing.

Chris Romeo (47:39):
Yeah, I

Matt Coles (47:39):
And I and there's, there's, there is a

Chris Romeo (47:41):
is I'm asking for somebody to dream up something
better.

Matt Coles (47:45):
there is a

Chris Romeo (47:46):
we would, but five, five or 10 years ago, we would have never, if someone would have suggested that you could do something in the runtime, everybody would have been like: no, no, no, there's, it's not possible. But now RASP and, RASP and IAST and runtime observability are, are...

Izar Tarandach (48:00):
a reason for that.

Chris Romeo (48:01):
Are a big part of what

Matt Coles (48:03):
there is a pattern, there is a pattern that solves
this.

Chris Romeo (48:07):
uh,

Izar Tarandach (48:08):
No, no, no, no, no, no, no, no, no,

Chris Romeo (48:11):
MISRA. Let the record show, for those listening on audio, his, Matt's card said: Ada, MISRA, formal analysis. Hm?

Matt Coles (48:23):
You, you limit choices, and you make everything, you, you design in perfection. As best

Izar Tarandach (48:29):
Yes, and 99.999 percent of developers out there would not be able to live in those conditions. But Chris, you say a couple of years ago, looking at the runtime... and Matt is going to correct me here, and he's going to jump at it. We had very serious limitations, even at the hardware level, that

(48:50):
would not let you effectively look at the runtime. We did not have the tools of observability that we have today. We did not have the proper side channels that we have today that give you insight on that. Right? We didn't have the tools that would let you look at runtime. Runtimes that emitted enough signals to be able to tell

(49:13):
someone: here's what's happening inside me. We're seeing that problem with AI today. People keep saying: we don't know how we get to these results. Why? Because AI is not emitting enough signals that say: here's where my thoughts are.

Matt Coles (49:25):
And by the way, we still have this problem today with, with certain types of systems, right? Small embedded devices, IoT devices, consumer electronics, right? Who's, how do you get that observability data out of somebody's refrigerator if it's disconnected from a network?

Chris Romeo (49:43):
I mean, my, just my point was: we had a pattern, we didn't have a pattern, and now we have multiple patterns. Like, the observability became a new pattern. So all I'm saying is, I think there's another pattern out there. I wish I knew what it was, because if I did, I would just start a company and make a billion dollars, and I'd be done, uh, be, I'd be retired on a golf course. But I, all I'm saying is, I want to challenge people to think of

(50:05):
another pattern. Like, what's wrong with another pattern? What if, what if somebody came up with something and it was better than scan and fix? Would you still argue for scan and fix? Like: yeah, this is better, but I love scan and fix because we've been doing it forever.

Matt Coles (50:17):
Uh, let, let me, can I?
Oh, sorry.
Can I just

Izar Tarandach (50:20):
Sorry, Matt.

Matt Coles (50:21):
go ahead?
Yeah, Yeah, go

Izar Tarandach (50:23):
risk of being, at the risk of being marketing, marketing, marketing-oriented here: there is a new pattern. Security observability is coming up. Okay. There is a lot of solutions that are being set up around that space. There's a lot of solutions that are using the tools of observability to emit security signals, and they're great. But at the end of the day, if you look at what happens with

(50:47):
those signals, you end up falling into the pattern of scan and fix, because you have something, you apply to it rules, and you fix the results. So at the end of the day, scan and fix is not only running a scanner and fixing; it's basically check rules and fix. It's not scan; the scan is just the way that it happens.

(51:07):
What's being actually done is check rules and fix, and the only way that we're going to break that, extend that pattern in a way that the fix side of the balance becomes better, is by putting the context in the middle.

Matt Coles (51:18):
And the last piece I want to throw in there, uh, is: think about the other things. There's one other, one other concept that we haven't introduced in this conversation yet, is timing. Observability requires a system that's functional, right? And we've talked about, we've talked about as an industry over those past 20 years, that it's very expensive to fix something

(51:42):
that's in the field.

Izar Tarandach (51:44):
Yeah.

Matt Coles (51:44):
Right? So if, at the point you can do observability, you've already shipped it. Right. Or you're ready to ship it, right? So, and you can simulate, you can simulate environments and all that sort of stuff, but you're, you're, you're guessing at that point, right? What your user behavior is going to be. And I'm not talking about cloud services. I'm talking about products, systems, things that get shipped to people and run in the real world. Right.

(52:04):
And so, observability: you've already shipped, or you're at the point of shipping. So it's cheaper to fix things earlier in the life cycle, at design and implementation. Of course, we're no longer doing the waterfall model of development, by and large, but we still have a design, some sort of understanding, concept phase, and some sort of implementation and

(52:26):
integration phase, and then some sort of deploy. So, is it, do we want to break that pattern of find and fix early, and allow us to find and fix in the field?

Izar Tarandach (52:41):
But wait a minute, Matt, there is, while I agree with you, there is a thing here. Observability for security goes exactly the way that you say. Fortunately, people have been using observability for way more things than security. And now we can have the happy surprise of finding out the tools of observability are already deployed with all that good

(53:03):
stuff, including IoT, right? And we can just reap the benefits of that existing. So it's one of those situations where security has to lift the head out of the box that we live in and say: what else is around here? What else can I use? And these tools exist, and they are at a very, very high degree

(53:23):
of fidelity, of visibility, scannability. And we can use that information.

Chris Romeo (53:32):
All right, well, when somebody else comes up with a new pattern, I'm going to call you both and say: I told you so.

Matt Coles (53:39):
and when they're still doing scan and fix, we'll
look at you and go, uh huh.

Chris Romeo (53:43):
mean, but I mean, and I think there are, but my whole point is, like, I want to, I want to put forth, I want to encourage innovation. I want to encourage people to think outside of what we've done. And when I see scan and fix, I see: we've done this, we've done stuff this way for a long time. That doesn't always mean that's the best way to do it. It, people, people can get into that rut of: this is how we do

(54:04):
it. I just want to challenge some of the new thinkers out there, new people in our industry: try and think of something different. Think of a different way to do that. And Matt, you took us on a bit of a journey into guardrails, and we could include paved roads in that. That's probably, that could be part of a different pattern, where you have more, less choice,

(54:28):
but more secure and less, and the results are, you're able to build something that's more secure, because you're not giving developers the ability to do anything, which, let's be honest, in

Matt Coles (54:35):
not, not suggest, was not suggesting that at all,
but but

Chris Romeo (54:38):
okay.
that's, that's what I,

Matt Coles (54:40):
take, that to an

Chris Romeo (54:40):
what I drew

Matt Coles (54:41):
take, take that to an extreme, absolutely, that would be true. But, but, but even putting guardrails in place, again, you're reducing the problem set, so you make scan and fix a manageable activity,

Chris Romeo (54:52):
Yeah, I want a world. I want a world where we don't have to scan and fix.

Izar Tarandach (54:55):
Yeah, my point is that... you won't have a world that you can't, that you don't scan and fix, because you have to break so many other patterns before you get to that world, that the only option that you have is scan, contextualize, and fix.

Chris Romeo (55:12):
yeah.

Izar Tarandach (55:12):
dog again!

Chris Romeo (55:13):
yeah, and the dog agrees with me, by the way.
Translation,

Izar Tarandach (55:16):
No, I was

Chris Romeo (55:17):
in every...
No, it's

Izar Tarandach (55:20):
Abaze

Chris Romeo (55:20):
I mean, I just wanna, I just wanna encourage people to think, think big, think bigger. That's my goal, right? Like, at the end of the day, I don't care where we land on this, but, like, I just, I want to encourage, especially new people in our industry: think about these things, don't just accept the things that we've always done, think about new ways to do things, and who knows, maybe somebody will come up with

(55:41):
something that the three of us will look at and go: huh. Just like in threat modeling, where someone says a threat and you're like: I've been doing this for a long time, I never thought of that, that's a really interesting idea. That's my point here, is I want people to push the envelope for us, so that those of us that have been around a long time can look at something and go: you know what? I never thought of that. I didn't even realize that would be possible.

(56:01):
Exactly. So, all right, folks, thanks for joining us on the Security Table. And we look forward to another episode next week, where we dive into something and get super excited and jump around and argue about it for up to 45 minutes. So thanks for listening to the Security Table.