Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(01:00:00):
[MUSIC]
Well, hello everybody.
It's Bryan, on at a special time.
(01:00:22):
I don't normally stream at this time.
I'd like to thank
everybody for joining us here,
our viewers on Twitch and YouTube.
I have a guest today.
And the reason that I'm
streaming at this special time today
is because of our time differences,
only because we're on one Earth, but we
have at least 24 time
zones.
So Josh reached out to me on the OWASP
(01:00:46):
Slack and said, hey,
I've got a Black Hat training coming up,
but I'm really passionate about AppSec
and some of the gaps there between
security and AppSec.
And I wanted to have
Josh on to talk about that.
So welcome, Josh, to the show.
Hey, thanks for having me.
Really appreciate it.
Great to be here.
Right on.
OK, so as we tend to do on our show here
(01:01:07):
when we do interviews like this,
everybody has an origin story.
Everybody got into
security in different ways.
We'd like to talk about
your background in AppSec
and how you got to be
where you're at right now,
and maybe a little bit
about the Black Hat training
that you're giving.
Yeah, fantastic.
So yeah, I guess I got
(01:01:29):
my taste of security back
in the school IT room.
I think it's a relatively common story:
here are all these computers,
what can we do with them?
Although after school, it kind of fell by the
wayside a little bit.
And it took me some
time to get back into it.
I started off doing
general IT consulting and IT risk
consulting and gradually
broke my way into IT security.
(01:01:50):
And I'd done some
software development in the past,
and that sort of pushed
me towards the application
security side, like the
software security side.
So I spent many years working more
on the hacking side, the
penetration testing side,
working, coming in,
getting the client's application,
breaking it, giving them
some findings and suggestions,
(01:02:11):
and moving on.
And gradually, I got
to the stage where I--
OK, breaking things is fun for a while,
but I wanted to
have more of an impact.
I wanted to try and have some
slightly longer term impact.
And I started getting
involved in projects,
actually working with developers,
working with organizations to try and
build software security
in the first place,
(01:02:31):
working as an internal product
security or
application security consultant.
And that's the hat I'm wearing today.
I work for a company
called Bounce Security,
and we're a small boutique consultancy.
And our focus is very much
not on the breaking side,
but on the building side.
And we're helping organizations to
either improve or operate
their product security programs.
(01:02:53):
Very nice, OK.
So you've been doing this a few years,
and so you've seen more than a few
engagements with regards
to AppSec, probably pen
tested a few apps that you're like,
yeah, oh my god, what are you doing?
What are some of the overarching issues
(01:03:14):
you see with AppSec, or when you come
into an engagement, or you come in
and you're trying to teach people,
I would say proper AppSec.
Well, that's another
question we can ask later.
What is proper AppSec to you?
But what are some of the biggest things
that you see overall when you come in?
Is it lack of maturity,
or is there something else?
(01:03:35):
So I think the very
basic problem is that when
we're talking about application security,
we're talking about developers.
We're talking about people whose job it
is to come in and build software.
They've generally
learned either in a boot camp,
or at university, or
from a computer science program,
or something else.
And I think security is
very much under-covered
(01:03:55):
in those sorts of programs.
It's not really covered in much detail.
It's not something that's
really considered upfront.
It's not really something that's
considered in that situation.
So they come into a development role.
Often they don't have
that background in security.
And then their job is, OK, now you
need to build this piece of software.
Now you need to build this feature.
Now you need to fix this bug.
But security often doesn't
have a seat at that table.
It's not often a success
criterion or consideration.
(01:04:17):
I think one of the biggest challenges
is seeing security as an equal citizen
alongside all the
other considerations that
go into building an
application, and then making sure
that developers have
the time and the resources
and the guidance of how
to actually build things
in a secure way.
So I think that awareness is
one of the bigger challenges.
The other side of
that, I think, is that I
(01:04:39):
came from slightly more of a software
development background.
So I was more familiar with code.
Even then, I had to
learn a lot about development
before I really understood,
OK, how can I have impact here?
How can I fit into this process?
And so we talked
about my Black Hat course.
So I did a course last
year at Black Hat as well.
A slightly different
focus, more focused on tools
(01:04:59):
and understanding how to use the tools.
But a lot of the people who came there
were security people who
basically had application
security added onto their plate.
Suddenly, they were the
application security person,
as well as the regular security person.
Now what do I do?
I don't come from a
development background.
I come from a network or
an IT support background.
Now I've got this hat
of application security.
(01:05:20):
How am I going to handle that?
And this year's course, I've
changed the focus slightly.
The focus is, OK, well, let's
talk about that a little bit.
Let's talk about what
does AppSec actually mean,
and what's some of the wider
context here to understand,
well, how can I actually
have impact in this area?
So one of the comments from our viewers
(01:05:41):
said in the university, I didn't want to
hand in my first C++
assignment because I couldn't figure out
how to trap errors yet.
The teacher said that didn't matter,
and just to hand in a program that
worked under ideal conditions.
Obviously, that's not
how AppSec should work.
I would imagine some of
the best practices
that you put into place would be things
(01:06:02):
like trapping errors
to find out if there's a problem,
or getting rid of
debug symbols, or something
like that to keep
people from being able to do
nefarious things.
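(For illustration, a minimal sketch of trapping errors rather than assuming ideal conditions, in Python for brevity; the function and its inputs are hypothetical.)

    def read_port(raw: str) -> int:
        # Trap bad input explicitly instead of assuming the
        # "ideal conditions" version of the program.
        try:
            port = int(raw)
        except ValueError:
            raise ValueError(f"not a number: {raw!r}")
        if not 1 <= port <= 65535:
            raise ValueError(f"port out of range: {port}")
        return port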
When you give this training at Black Hat,
and I think we should
probably talk about that,
what's the split between
security and developers,
or is it just people who know AppSec who
(01:06:23):
kind of dabble in both sides?
I mean, what does the demographics look
like there for that?
So I've been training at a few venues.
I think Black Hat is very
much a security conference.
I think most of the
people I spoke to there,
most people I was
working with there were--
they either had
security in their job title,
or security is
most of their day-to-day.
(01:06:44):
I've also done the training at OWASP
conferences as well.
I've done a few
different OWASP conferences.
I did it in Dublin, in San Francisco,
hopefully doing it in Lisbon.
But again, a slightly different focus.
The old focus of the course
was very much about the AppSec
tools, because I think historically,
if you talked about AppSec,
it was very much, oh well,
(01:07:06):
we do a penetration test.
And maybe that was
like five, six years ago.
And now you talk about AppSec and say,
oh, we've got this tool,
we've got that tool, and
we've got the other tools.
So the origin of this course
was sort of about tools,
and OK, how to build
processes around tools
and use them effectively.
And I've had a fair amount of success
working with developers at OWASP
conferences for that,
and getting input.
(01:07:27):
And it was very much a variety of
different audiences.
Like I said, when I saw
that Black Hat was very much,
OK, these are security people, and these
are often not really
application security people.
These are mainstream security people.
So that's where I focused it
more on that target audience
of security people who want to
understand more about AppSec.
Certainly, a lot of the knowledge there
is useful for developers as well.
And I think we tend
(01:07:48):
to see developers more
at the OWASP
conferences and other conferences
where it's slightly more varied.
Right.
So you mentioned in
your description there,
you said you should build
processes around the tooling.
Does that hamstring people to have
(01:08:10):
to develop to the tool
instead of creating processes
and then finding the tool to fix that?
It seems like that would
be kind of backward to me.
You'd want to create the
process and then find the tools.
It feels like you're restricted by the
tools that you have,
because either maybe you have processes
where the tools don't exist,
so then you don't have tools
(01:08:31):
to do those things.
So maybe this goes back to
some of the best practices
that I thought we would
touch on at some point.
Do you build-- you
said build their processes
around the tools you have.
And for me, that seems backwards.
But I'd love to hear your take on that.
So I guess there are two sides to this.
(01:08:53):
I think on the one hand, if you're
starting from a
completely greenfield organization,
you haven't got a tool in
place, you haven't got a lot
of this infrastructure, whatever else,
in place for doing
software security scanning,
whether it's dynamic,
whether it's static.
If you've not got that in place already,
then you've got the
(01:09:13):
luxury of sort of saying,
well, we're starting from scratch.
We can make a decision about how we want
this process to work.
We can audition some different tools.
We can try out some different tools.
We can decide what works well for us.
But I think that's certainly something
that the knowledge in the
course comes to cover as well,
comes to give ideas about, OK, if you're
starting from nothing,
the considerations you want
(01:09:34):
to have for actually evaluating
tools and thinking about
what's going to work for me.
But in many, many
cases, I see organizations,
they've already got the tools in place.
They've really heavily
invested in these tools.
They've already got them there.
And therefore, I want
to be able to say, OK,
I don't want to just come
and say, OK, throw all this out
and start again.
I want to say, look, these
are the tools that you've got.
Now, how can we get more value out of
(01:09:55):
these existing tools?
You're probably not going
to be able to throw them out
in the short to mid term.
So how can you work with what you've got?
How can you build a process around that?
How can you understand the
tool better in a way that
lets you say, OK, well,
I'm now going to structure
the process around this tool.
I'm going to figure out
what I want this tool to do,
what sort of scans I want it to perform.
And I'm going to figure out who's
(01:10:16):
going to take the
output of that based on what
the tool actually supports.
So if you've got that
greenfield site where
you can start from nothing, then I
think there's a lot
of valuable information
you can use to think about
what's going to work for me.
But often, you're not in that situation.
You've got this sort of legacy situation
where you've got tools already in place.
You need to make the
best out of what you've got.
(01:10:37):
Yeah, no, that's awesome.
And in hindsight, that does make sense.
You don't want to throw
everything out and start
over again.
It's not realistic to do so.
And some developers are
tied to specific tools
because there's really
only one tool you can use.
So if you have your Black Ducks or your
Coverities or whatever,
(01:10:57):
if you're programming a certain
programming language,
you don't have much of a choice in what
tools you're going to use.
There's one scanner for Rust or there's
one scanner for C++.
So that kind of makes sense.
Yeah, I think a lot of the time as well,
the existing-- I think a lot
of training that's out there
sort of assumes, OK,
you're starting from scratch,
and now you're going to
build this very complicated CI/CD
(01:11:19):
pipeline with all these new tools.
And I think that's unrealistic
for a lot of organizations.
I think a lot of organizations need
to take a slightly more
high level view of, OK,
this is what we've got, and here's
how we're going to get
the best out of that.
Right, so let's say you
don't have your greenfield,
your brand new start.
You come in and you're discussing AppSec.
(01:11:39):
I'd say you're almost
what, an AppSec dev rel
as in developer relations for AppSec?
I'm still trying to wrap
my head around having people
who aren't salespeople
sell products to developers
and be cheerleaders for Rust or whatever.
I'm sure there are some
people watching this going,
developer relations is where it's at.
(01:12:00):
I completely understand.
I don't, actually.
What are some of the checklist items
that you like to come
in and you go, OK, we're
going to take stock.
You need-- you have eggs.
You don't have chicken eggs.
You may have a duck egg or whatever.
But the idea is you've got the
ingredients necessary to have
a proper CI-CD
pipeline, but you don't have it.
(01:12:23):
Or your best practices are probably
fairly generic, right?
You need to have a method by
which you're building code.
Or you-- what are
some of those ingredients
that are a must-have for an AppSec
program in that case?
OK, so yeah.
So I think that it's
funny you mention dev rel,
(01:12:43):
because I often think that the idea of
being this ambassador
for something to another group of people
is something that very much resonates
with me as a security person.
When I'm working with developers,
I'm sort of the dev rel for security.
I'm trying to promote security.
I'm trying to bridge
that gap between security
and developers.
I think it's a very interesting comparison.
(01:13:05):
I've said a couple of times I think
that OWASP should probably
have some sort of dev rel
role within OWASP.
The job it is to reach out.
I know that I've got new
communities now, Jason,
who I think is going to
partially take that idea.
But I think there's a
lot of crossover there.
In terms of building an AppSec program,
(01:13:25):
so again, I think that
historically a lot of AppSec
has been wrapped up in
all these different tools.
And certainly it's something we often get
brought in for specifically,
to talk about the tools, in the sense of
actually looking at the tools
and trying to make them more effective.
But I think if we're
looking at an AppSec program,
I like to think more widely.
I like to think, OK, well, tools are
going to be part of
that to a certain extent.
But I want to think more widely.
(01:13:47):
I want to think about, OK, I want to
build some sort of--
I guess not necessarily
comprehensive application
security program, but an
application security program
that addresses all the different stages
of the development process.
I think the number one part of doing that
is something I think has sort of been
lost historically.
(01:14:08):
We talk about a security person.
We talk about developers.
OK, the security person
needs to go to developers
and teach them about
security or help them with security.
And there's this trope
of, OK, well, security
is everyone's job.
Security should be a
full part of the process.
But security isn't
necessarily everyone's job.
Someone's job is
governed by their management
(01:14:29):
and their leadership
and whoever's overseeing
that group they work in.
So if a developer is
assessed against, OK,
how fast can they deliver
code, how fast can they deliver
features, then that's their job.
So I think the very first thing--
I heard this from a guy I was talking to
who has an ASPM product,
named Francesco Cappellini.
(01:14:51):
And he talked about the idea of shift up.
He had this whole trope of
shift left, this idea, OK,
we're going to push all
this security stuff earlier.
But before we get into
that, we need to shift up.
We need to actually have
that senior sponsorship to say,
we would like to build secure software.
We would like to
improve the level of security
in the software that we're pushing out.
And therefore, we need that buy-in
from some form of leadership.
(01:15:13):
We need someone in
development leadership,
not security leadership,
development leadership,
to say, we see this is important.
We want to invest in this.
We want to have our developers have this
as part of their
roles, part of their job.
And therefore, we're going to do what
it takes to make that happen.
Because ultimately, I
think in many, many cases,
(01:15:34):
security tends to be an advisor.
So security people
are providing guidance.
Certainly, there aren't enough security
people to do everything.
Often, we'll need to rely on other people
to do the things that we're suggesting,
that we're advising.
And when it comes to
AppSec, that's the developers.
I think the number one
thing is to make sure
that you've got that
buy-in upfront from leadership
to say, well, the developers are now
going to have this as part of their job.
(01:15:55):
They're going to have
time allocated to that.
If, for example, a
typical development increment
is two or three weeks,
then every two or three weeks
they plan for the
next two or three weeks.
Then how much of that time is going
to be on security activities?
How much of that time
is going to be on things
related to building
either security features
or the product or
addressing security bugs
that have been identified?
(01:16:16):
Right.
So--
[INAUDIBLE]
Yeah, one of our
listeners, magnesium here,
says, my experience
here is a bit exhaustive.
I've seen several attempts at AppSec be
resisted by developers.
The only way I've seen
this work is by security
being a developer partner,
shouldering some of the load
and acting as a subject matter expert and
not getting in the way.
I was actually going to have
a similar follow-on question
(01:16:37):
here.
It feels like security
is suggesting going up
to leadership to get buy-in.
And the problem is it's not a triangle.
In this case, it's more
like a trapezoid or something
like that.
But you've got to
have developers who also
want to have this happen.
It's got to come up from both sides.
All the stakeholders
have to want to have that.
(01:16:58):
So I guess the base-- and to continue
with this metaphor--
the base has to be fairly solid.
You have to have a good
relationship between security
and developers to begin with, to be
able to go up both
sides of this trapezoid
to their respective leadership.
Because it's not just
one leadership, right?
You're not going to go
to the CEO and go, hey,
I think we should have secure stuff.
(01:17:20):
It's going to be the CISO
or the head security person.
It's going to be the
head developer manager.
And then there's going
to be some cross-talk
between those two folks, because it's
like, why do we need this?
Oh, yeah.
So it's complicated, right?
It's not just a one-sided thing.
Your shift up is going to have to come
from multiple sides.
(01:17:40):
It may be a GRC thing as well.
So you're going to have
different pillars of that structure,
if you will.
So should you start maybe
with the developer security
relationship first?
Or should you go--
the one thing I don't want to do is
have this kind of
communication, where it's like,
I'm telling security management.
Security management's telling
(01:18:01):
developer management,
and then it's going down there.
That doesn't lead to a good relationship.
So what are some of the
things you would suggest there
in that case, to magnesium's point
and to my point on getting that
implemented with shift up,
in this case?
Yeah, so I think shift up is part of it.
I think that one of the--
(01:18:22):
I've seen lots of
anti-patterns in terms of,
how can the security and
development relationship
go wrong?
And I think shift up comes to address
one of those anti-patterns.
One of the anti-patterns is that security
is basically coming
from the side and saying,
oh, can you do this for me?
Can you do this activity?
Can you fix this bug in
your spare time as a developer?
(01:18:43):
Because I've not got
senior buy-in for this,
but I need you to do this.
So this idea of, OK, I'm
now adding to your workload.
You have eight hours in your day to work.
I'm now adding a ninth hour, because you
don't have time allocated
for this from the top.
And I think that's one
of the key anti-patterns.
And one of the things that
damages that relationship,
where people see
security and think, oh, you're
going to bring me more work
(01:19:03):
to do that I don't have time
to do because I don't have
buy-in to actually do it.
So I think that's part of
improving that relationship.
It's certainly not the only thing.
I think one of the--
again, the motivator, I
think one of the key things that
pushed me from going from doing this work
to actually creating
a training course about
tools was seeing organizations
(01:19:25):
where their entire security ecosystem,
their entire software security ecosystem
had been chasing tool results.
It had become, OK, we've
now put these tools in.
And now we've got loads and
loads and loads of results
from these tools.
Now what?
Now what do we do about them?
They degraded to this state where they
had security champions.
And we can talk more about
security champions later.
(01:19:45):
But they had the security champion.
The security champion was, OK, you
volunteered to be
the security champion.
And now your job is to go
through these tools results
and figure out what's going on.
And that was their
entire security experience.
Their entire security
experience was tool results.
And the problem with tool results
is that often there are
lots of false positives.
There are lots of
things that don't make sense.
There are lots of
things where it's not clear
(01:20:06):
whether they're actually
important or not important.
So it got to a stage
where their entire association
with security was, oh, no,
not more nonsense that's
come out of this tool.
That was their
perspective on application security.
And that was, again, a very
difficult thing to overcome.
It was very much
damaging that relationship
between security and
developers, because it got to a stage
where developers were hiding under the
desk when security came
in because, oh, I don't
(01:20:27):
want to talk about these tool
results.
Now I have a job to do.
I have work to do.
And I think
that's a second anti-pattern:
making sure that
what you're giving them,
what you're asking them to do, makes
sense to begin with.
It isn't going to frustrate them.
If you want to use tools, you need
to make sure that what's
coming out of the tools
is targeted and makes sense and isn't
just a whole load of work
for no actual security value.
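(As an illustration of keeping tool output targeted: a minimal sketch that keeps only the highest-severity results from a SARIF report, a common output format for many scanners, before anything reaches developers. The file name is a placeholder, and real triage would also deduplicate and suppress known false positives.)

    import json

    def high_severity_results(sarif_path: str):
        # Keep only results the scanner itself flags at "error" level.
        with open(sarif_path) as f:
            report = json.load(f)
        findings = []
        for run in report.get("runs", []):
            for result in run.get("results", []):
                if result.get("level") == "error":
                    findings.append((result.get("ruleId"),
                                     result.get("message", {}).get("text")))
        return findings

    # e.g. high_severity_results("scanner-output.sarif")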
(01:20:50):
I think maybe a third thing there
is meeting them where they are.
I think that often,
because, again, there's
a gap between security
and development people,
they're maybe using
different applications.
They're using
different ticketing systems.
They're using different
paradigms for how they work.
I think often, if you can
go in and use the systems
and use the processes
(01:21:11):
that developers are used to,
then you're coming into
their usual work processes.
You're not pulling them
into some other system
or some other
interface or some other process
they're not used to.
But you're becoming part
of their overall process.
So I like to try and
incorporate security activities
into their general ticketing.
Most development organizations
are using something like JIRA
or GitHub issues or GitHub projects,
(01:21:33):
whatever it is, to track their work.
If you can put security
activities in there as well,
obviously, in
coordination, not just throwing stuff
over the fence, but if
that's now part of the work process
they're used to, the
system that they're used to,
you're already making their lives easier,
you're already reducing that mental load
it's going to take
for them to actually get
(01:21:53):
into that mindset of, OK, I need to
address this security issue
now.
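(For illustration, a minimal sketch of filing a security task straight into the tracker developers already use, assuming a GitHub-based workflow; the repository name and token are placeholders.)

    import requests

    TOKEN = "ghp_..."  # placeholder token with permission to create issues
    REPO = "example-org/example-app"  # hypothetical repository

    def file_security_task(title: str, body: str) -> str:
        # Create the task in the developers' normal issue tracker
        # rather than a separate security-only system.
        resp = requests.post(
            f"https://api.github.com/repos/{REPO}/issues",
            headers={"Authorization": f"Bearer {TOKEN}",
                     "Accept": "application/vnd.github+json"},
            json={"title": title, "body": body, "labels": ["security"]},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["html_url"]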
Yeah.
One of the biggest things that I've
seen in pushback from developers,
especially with security tools that we
think they should be
using like SCAs and stuff,
is the ungodly large
amounts of false positives.
And it's one of those things, it's
like we know they're going
to come up because you're--
(01:22:18):
I don't want to say code quality issues,
but there's definitely
some blame on both sides.
Security doesn't want to deal with trying
to verify every false
positive, so they throw it
over to the developers.
Developers don't have time
to look at all these false
positives or they know
that it's a legitimate issue,
but because the code needs to
be able to do Rust unsafe kind
of methods or whatever, the
(01:22:39):
garbage collection doesn't
need to work on this or whatever.
There's a lot of time
and effort spent there.
It's just one of those unfortunate things
that we have to deal with.
Yes, BeidleWolf, it's
clearly 99% the developer's fault.
Obviously said tongue in cheek there.
So these are the kinds of cracks that
start in a beginning AppSec program
(01:23:00):
where the gaps start being created.
Security has, I would
say, unrealistic ideas
of what we need the developers to do.
The developers have
unrealistic expectations
that they're just going to be
able to do whatever they want
with impunity.
Is that what you're saying
in terms of where gaps begin?
(01:23:21):
Can you go in, when you're
in what they call a mature
organization and go, OK, I can see
already security is just
telling devs what to do and devs think
that it's complete freedom?
What are some of the
trends you see there?
Is it the go fast culture?
Is there other things there involved
that maybe I'm not privy
to that cause these gaps
and are there ways to remediate those?
(01:23:44):
So I think that you
mentioned the go fast culture
and that's a massive, massive thing.
I think that security tends to be slow.
And that's-- especially if they're
trying to act as some sort of gate
or some sort of review
stage or something like that,
security tends to be slow.
Whereas developers are
often moving pretty fast.
(01:24:06):
Like I say, a typical development
increment might be two or three weeks.
It may be that within three weeks,
they've coded a new
feature and deployed it.
Now, if you have a
backlog on your security
or you haven't got time to
carry out security activities
that some security people need to do,
then they're going to be either a blocker
or it's just not going to happen.
It's just going to fall to the wayside.
So my boss is a guy called Avi Douglen.
(01:24:27):
And he works-- he does a lot of work
around threat modeling.
He's very well known for threat modeling.
And he teaches a
lot of threat modeling as well.
He does training for threat modeling.
But he's sort of tried
to shift focus recently.
So he's not just
talking about threat modeling,
but he's talking about, well, how
can we make this sustainable?
How can we keep this process going?
It's not just, OK, well, I
now know how to threat model,
but it's like, well, how within an
(01:24:48):
organization can you
make threat modeling into a
process that can be done when
it needs to be done, whether that needs
to be every few weeks
or whether it needs to be
for particular features?
How do you make sure that carries on?
Because that's one of
the things he sees often,
that he'll go in and
train security people
about threat modeling or certain
developers about threat
modeling.
But they won't have
time, because everything
is moving insanely fast, unless they've
got those plans up front,
(01:25:09):
that mechanism up front,
saying, OK, here's how
we're going to make it work
in the time available.
So I think that nowadays--
I joked at the beginning about, OK, well,
AppSec five, 10
years ago was a pen test,
and now the last few years it's tools.
I think now there's a lot more talk
about a more comprehensive, secure
software development
(01:25:29):
lifecycle.
But all too often I've
seen that as a project.
It's like, OK, well, we're going to build
this long document of
secure development lifecycle.
Here are all the
activities we need to do,
and now you need to go and do that.
I've worked in an organization,
I've worked as a consultant,
where they said,
we've just built this giant SSDLC, this
giant secure development
program.
You now need to implement it.
(01:25:49):
You're going to spend
three weeks implementing it
in each development
group, and then move on
to the next development group.
I've looked at this thing, and it's huge.
It's 20, 30 activities.
Developers aren't going to
start doing that immediately.
They don't have time to
start doing that immediately.
Just understanding and accepting, OK,
what do we need to do
for one activity is effort.
It's not what they're used to doing.
It's not their usual activities.
(01:26:14):
So one of the things
that we started doing now
is saying, OK, well, we
need to not look at this
as a project, a
short-term thing that we're
going to write a very big document,
and it's going to magically happen.
But it becomes more of a question of, OK,
we're going to take a
slowly, slowly approach.
We're going to think, OK,
what are the high-value things
that we want to start off
with, and then how are we
going to make sure this actually happens?
So forget, OK, well, how
do you actually do this?
(01:26:34):
What are the mechanical processes
of carrying out this activity?
So I talked about threat modeling.
OK, we can talk about
how you do threat modeling,
whether you use STRIDE, whether you use a
time-boxed approach.
But forget all that for a second.
Who is going to do this?
Who is going to have
responsibility to do this?
How do we make sure they
have the time to do this?
How do we make sure that,
OK, for every new feature that
is rated as, say, as high security risk,
(01:26:54):
or something that's likely to have
security issues in it,
how do we make sure that there is a
two-hour slot that's
agreed for the developers
to get together in a room
and actually think
about what could go wrong
and come up with some
sort of basic threat model?
How do we make sure
that's going to happen?
Who is responsible for that
within the development team?
Up and down the chain.
We started using a RACI
(01:27:15):
matrix as a way of saying, OK, well,
who's accountable for
making sure that happens?
Who's actually responsible for doing it?
Let's actually treat this like the
project activity that it is.
It's not just
something we can say, OK, well,
from now on, you're
doing threat modeling.
Here's a model called STRIDE.
Off you go.
It's about saying, OK, we
want to build in a new activity,
which means we need to
make sure that we know who's
going to do it and how
(01:27:35):
it's going to happen.
And then, obviously,
how it's actually measured.
How we're going to make
sure that we understand,
is this working effectively?
So one of the key things is saying, OK,
we're not going to throw everything
at developers at once.
We're not going to say, OK, here's
all of the whole
security development process,
but rather, we're going to go slowly.
We're going to say, here's how we
want to start off a few activities,
and here's how we're going to
make sure that they actually
happen, and then they stay sustainable.
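(As a hedged illustration, one row of such a RACI matrix for a threat modeling activity might look like the following; the role assignments are hypothetical and will differ per organization.)

    Activity: threat model a new high-risk feature
      Responsible:  feature developers plus a security champion
      Accountable:  development team lead
      Consulted:    application security consultant / security team
      Informed:     product manager, security leadership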
(01:27:58):
Nice.
You were talking about threat models.
I think we've talked
with a couple of people
about threat models this year.
Kevin Johnson, our good
friend Jared DeFredis,
you mentioned that Bounce
Security does threat modeling
at scale.
I would assume for large--
(01:28:19):
I assume the threat model is for
platform-esque level,
like large levels, and
they'll drill down as necessary.
So you're going to have to go beyond the
threat model, right?
You can't just hand developers a threat
model and go, oh, yeah,
here's what you need to protect against.
Do you do things like user test cases?
(01:28:42):
Or what deliverables, other than just a
Visio diagram or draw.io
that you give them,
there must be other stuff
that you're adding there, like regular
hygiene kind of things
to help mitigate some of those issues.
Yeah, I use threat models
as an example of an activity
rather than sort of talking
abstractly about activities.
(01:29:03):
Certainly, we do want
to have a situation where
we've got security activities spread
throughout the development
lifecycle.
We might start off early on by, OK,
you're building out
functional requirements
for this new feature.
What are the security requirements
from a business perspective?
What's it going to make--
you usually have a
product manager whose job it
is to decide, OK, this is what this
feature needs to do.
(01:29:24):
This is what this new functionality
needs to do from a business perspective.
So what's going to ruin
that product manager's day?
What's going to ruin the user's day
if it happens from a security perspective
and have that already
considered at an early stage?
Then move on into
something like threat modeling
once it becomes more clear
how the new function is going
to be laid out.
And maybe you look at having
(01:29:45):
some sort of design review
or having some sort of simple
checklist that goes through.
Here's things we need to
think about when we're building
the design.
And then we want to make sure there's
guidance available at the
implementation stage as well.
So it's all very well
saying, OK, well, you
need to make sure that you're
authenticating users who
are coming through this way.
You need to make sure that you're
protecting your database
access against SQL injection.
(01:30:06):
But you also need to have
that guidance for the developers
to actually use at development time.
OK, well, here's how I prevent this.
Here's the in-house library
that we use in my language
to prevent SQL injection.
Here's our in-house
guidance for how you make sure
you authenticate users.
You have to check this particular token.
You have to check
these items of the token.
You want to have that guidance as well.
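(As one concrete instance of that kind of implementation guidance, a minimal parameterized-query sketch in Python using sqlite3; a real in-house library would wrap the team's actual database driver.)

    import sqlite3

    def find_user(conn: sqlite3.Connection, username: str):
        # The ? placeholder keeps user input out of the SQL string,
        # which is what prevents SQL injection.
        cur = conn.execute(
            "SELECT id, email FROM users WHERE username = ?",
            (username,),
        )
        return cur.fetchone()

    # By contrast, building the query with string formatting, e.g.
    # f"... WHERE username = '{username}'", is the injectable pattern.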
And the idea is that you want
(01:30:26):
to build this process that--
it comes into what they're already doing.
It gives them that
guidance all the way through
and gives them that hand-holding
all the way through.
So it's not just one stage
early on and then you forget,
OK, you need to secure your app.
Oh, how am I going to secure my app?
Oh, I don't know.
That's your problem.
You're a developer.
You figure that out.
No, that's not how we want to work.
We want to make sure
that each stage, they've
(01:30:47):
got that guidance.
And ideally, you've given
them some sort of resources,
some sort of training.
I think training about secure coding
is one of those tricky things.
I think it's very hard to get right.
It seems to be that a lot of organizations
do that because
regulation requires they do it
or they've got some
sort of compliance that
requires them to do it
(01:31:08):
without really investing in it
and thinking about how
to do it effectively.
But I think making sure developers
know where to look, know where to ask is
a key thing as well.
Make sure they've got that
continuity all the way through.
Because also, once
you've got those requirements,
once you've got that, OK, well, we
need to make sure this happens from a
security perspective.
That then comes in later on as well.
OK, well, we're now at the QA stage.
(01:31:29):
We're now at the testing stage.
Well, QA are used to testing
against requirements.
They know that, OK, well,
this function should do this.
It should provide this
feature in the application.
It should allow the user
to change this piece of data
in their profile.
But at the same time, you
define security requirements,
and QA can verify those security
requirements as well.
You continue that continuity.
(01:31:49):
You continue that consideration, OK,
how does security impact
each stage of the process?
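(For illustration, a security requirement phrased so QA can verify it like any other requirement, as a minimal pytest-style sketch; the endpoint and environment are hypothetical.)

    import requests

    BASE = "https://staging.example.com"  # hypothetical test environment

    def test_profile_requires_authentication():
        # Security requirement: profile data must not be readable
        # without authentication.
        resp = requests.get(f"{BASE}/api/profile", timeout=10)
        assert resp.status_code in (401, 403)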
But ultimately, security
needs to act as a guide
and a resource for
this to provide advice.
But we need to be in a situation where
either the architect
or the developer or the
QA person, the QA engineer,
is able to do that
themselves with the resources we've
(01:32:10):
provided.
Nice.
Yeah, you were talking
about some RACI charts
from when I was trying to be a
proper PMP-certified program
or project manager.
The RACI charts came
in, stakeholder engagement
or stakeholder identification, being able
to understand those.
I don't want to say it
almost feels like we don't need
(01:32:30):
to necessarily interact
with the developers a lot,
going by your examples
of QA and talking about that.
But I would almost suggest
that your organization should
have some kind of project or program--
sorry, product development process
for when you're
creating whatever it is you're
going to be creating.
(01:32:51):
The PMs or the project
managers, if there is one
or whatever that amounts
to for a development team,
if it's the sprint planners or whatever,
it feels like by direction
those people should be engaging
developers, product people, privacy.
There should be other
people involved in the product
(01:33:11):
development process there.
And I don't know, if you're agile-based,
if there's actually a project manager.
But it feels like if we do it right,
security can reduce
the amount of friction
between the developers
because the developers are just
implementing what
they're told to implement,
I think, in some cases.
(01:33:31):
So if you go to the people
who are asking the developers
to implement those things and say, hey,
security also needs to be in there as
well or what have you,
I think we tend to just go, OK, let's
go to the code crunchers
first instead of maybe going
to the PMs or somebody like that to kind
of build that up later.
The PMs would be like, ah, yes, we
need to talk to security about this.
(01:33:52):
Or maybe we should invite security
to the sprint planning meetings.
Thoughts on that?
Is it better-- I wouldn't
say a run around or end run,
going around these folks.
But they're just doing kind of what
they're told, right?
They're doing it by direction.
So what you allude to, the idea of having
(01:34:12):
all the different
people involved, very much
underpins the shift-up approach.
When you're going to
leadership, you're saying,
there are jobs for people here.
There is work that people
are going to need to do here.
It's not just developers who
are going to have to do it.
It's going to be architects.
It's going to be developers.
It's going to be QA people.
It's going to be product managers.
It's potentially going to
be project managers as well.
(01:34:33):
There are all sorts of different people
who need to be involved.
And that's why we end up
in this situation of having
these RACI charts,
because it's not just, oh, well,
it's either security or it's development.
It's actually a
variety of different people
who need to be involved.
I gave the example earlier
on of having business security
requirements.
OK, what's going to ruin
(01:34:54):
the product manager's day?
What's going to ruin
the day for the person who
oversees this product?
And that's very much a
product management question.
That's a business question.
It's not a-- you can delve
into certain technical details
afterwards.
But if you've got a
product where you're saying, OK,
well, we have an e-commerce platform
and we want to offer discount vouchers.
(01:35:14):
But if 1,000 people
get a discount voucher
when they're not supposed to, that's
going to ruin everyone's day.
That's not a technical question.
That's not a technical vulnerability.
That's a failure of a business process
in a way that would be
considered a security failure.
But ultimately, it's a business question.
That's why someone
like the product manager
should be having
those thoughts and saying,
(01:35:34):
what can go wrong here?
What do we need to
prevent against as well?
What do we need to
make sure doesn't happen?
So I think that's a key
aspect of it, saying, OK, well,
there are different people involved here.
There are different--
an engineering team,
a development team, the terminology
differs depending on the company.
But there are lots of
different personas here,
lots of different people here.
And having that RACI concept basically
(01:35:56):
helps to say, OK, well, who is going
to be involved with each activity
and who's going to be doing it, who's
going to be overseeing
it, making sure that not only is someone
covering that effort,
but also the effort is spread across
different stakeholders,
across different personas.
Right.
Yeah, I think it's
important to spread the wealth here.
And I would imagine the developers,
(01:36:17):
as we learned last
week with Mary Gardner,
security is the Department of No,
even though we don't want to admit it.
I think that ship has
sailed, and we're now
holding onto the door with Rose in the
middle of the Arctic
ocean right now.
And being able to, I
(01:36:37):
think, think smarter.
If we're always going
to the same developers
or we're going to the
same development teams,
they're going to start seeing us coming.
And I would say start
actively resisting or avoiding us,
because they only see us when we want
something from them.
It depends on that relationship.
It depends on that relationship.
I've worked in an organization, and I was
working as a product security
(01:36:58):
consultant.
So my day-to-day was
working with developers
and working with the architects.
And that was quite a big organization.
And they had mainstream
security people as well.
And these mainstream security people
were coming at developers
with all sorts of requests
and all sorts of requirements.
And there was one guy
in his security team,
(01:37:19):
sort of more
infrastructure security team,
like a different part of security.
Let's call him Fred.
And Fred was just--
I don't know.
Very much the old-school security
Department of No, insanely negative.
You get on a call with him, and suddenly
all the happiness and joy
will be sucked out of the call.
You get on a call with him, and it'll
just be endless negativity,
endless no, endless,
This is terrible.
(01:37:40):
That's terrible.
That's terrible.
And I could get on a call with him.
And I could say, well, yes,
you are correct about that.
But that is less important.
That's less important.
That's lower priority.
And we're going to focus on this because
we know this is higher priority.
And by being someone who's working a lot
with the developers,
I was able to act as their
advocate the other direction
and say, well, actually, let's focus on
what's really at risk here.
Let's focus on the real concerns here.
(01:38:00):
And that did a lot to
actually build up that relationship
and improve that
relationship because as the AppSec guy,
I was able to say, well,
here's what's important.
So developers, if I tell
you something's important,
you know that we need to focus on this.
But the outside
world, I can help push back
on things that are going to be less
important or less high priority
so that you're not getting
this sort of endless input
(01:38:21):
from external security of things that
vary in how important they are.
So I think that there definitely is the
opportunity to flip that around.
Security does have the
reputation of the Department of No.
But at an individual level, you can
improve that relationship.
You can improve that interaction.
And you can make
developers happier to see you.
You can help them see the
(01:38:42):
value in what you bring,
the knowledge that you bring.
Because ultimately, as the security person,
you do have a lot of understanding about,
OK, these are the things
that are really important.
But we also know that there
are all sorts of pressures,
all sorts of different considerations,
all sorts of different findings.
And security people, we generally have
some perspective on, OK,
well, how do we prioritize that?
Right.
Yeah.
And I think, yeah, again, this is--
(01:39:04):
you said there-- we know
what's important from a security bent.
But like BeidleWolf
said, sometimes we don't even
have the ability to say no.
So we kind of have to--
I don't want to say pick our battles.
But it is kind of picking our battles.
We can't go after every memory leak or
cross-site scripting that we find.
I think sometimes if we're trying to
(01:39:26):
fight all those battles,
every battle, every
cross-site scripting that is found,
or every memory leak, or whatever, and
everything is important,
then it quickly becomes nothing is.
But ideally, you want also some kind of--
I don't-- how do I--
some kind of risk acceptance process, I
think, in all of this,
to be able to say, look, we've identified
(01:39:48):
these as a potential
risk.
It may not be-- it may be
considered a high risk for us
because it leads to potential customer
record loss or something.
But the organization
doesn't want to fix it right now.
Raising that awareness might actually
be a change for the better,
(01:40:08):
being able to say, OK, look,
we told you this is a problem.
We're recording it accordingly.
That way, we're CYA, right?
Security raised this issue that way
whenever it does happen,
or we feel like it's going to happen.
That's a problem.
How important is it to--
that's a process, and
there's no tool connected to it.
But I mean, it feels like that
would be a very important
(01:40:29):
process to have to raise those risks,
not just from the threat model, or maybe
follow on pen testing,
or whatever.
But our priorities are not always
the priorities of
people who create features
and have to put those things out.
So how important is it
to have a risk acceptance
process in the organization?
So the answer is very important.
(01:40:51):
I hope so.
Yeah, I want to preface it by saying
that I think one of the
key things to be able to do
is to understand the
product you're dealing with,
the application you're dealing with,
well enough to be able to
take a good view on that.
You may often take a very dry
security perspective, OK,
well, this is a vulnerability.
OK, what application
(01:41:12):
is that vulnerability in,
and what data is in the application?
How urgent is that
compared to other things?
So I think often having
that in-depth knowledge of what
are we actually dealing with here
is critical for this process.
And it's critical to be able to make
accurate assessments.
So I think that's important to highlight,
because I think it's also a classic--
(01:41:34):
it comes up a lot with, let's say, bug bounty.
When we're interacting with people who
have found a bug bounty
finding, like, oh, you've
got this finding in this application,
it's terrible, it's diabolical.
I'm like, well, that's a nice finding,
but there are reasons
why that's less bad,
why we'd lower--
reduce the priority.
A bug bounty person won't necessarily
know that, because
they're on the outside.
But as the internal person,
I need to be aware of that.
I need to know enough to
be able to say, well, OK,
(01:41:55):
this bug bounty person says it's critical
and everything's on fire, but I know
enough to know, OK, well,
I'm not going to create a war room
and drag in all the
developers, because I know that actually
there are other factors here at play.
So you have to have that, I guess,
domain-specific knowledge,
product-specific knowledge,
applications-specific
knowledge to make that determination.
(01:42:15):
But once you've got that, then it's
sort of another part
of the shift-up concept.
It's to say, OK, well, we
need the time to do this.
We need buy-in from the leadership.
We need buy-in to say,
look, if we say that this
needs to be dealt with, then we need
to get this dealt with within an
acceptable period of time.
We can't be in a
situation where it's constantly
being pushed off by, OK, well, this
feature or that feature.
We need to have agreement that we're
going to be careful about
(01:42:36):
when we blow that big whistle,
saying, OK, there's a critical issue.
We need to deal with it.
But we need to, as a quid pro quo,
we need to have
leadership buy-in to say, OK, well,
if security say it's important, we're
going to take it seriously.
We're going to deal with it.
And we're going to address it.
And if it's not critical, OK, we'll
have a slightly more relaxed time frame,
(01:42:56):
but we'll still have a time frame.
And have that agreement up front saying,
for this product or these applications,
this is the time frame in
which we have to deal with this.
And yes, we're going to fit
in features at the same time.
We're going to maintain
velocity of development.
We're not going to lose sight.
It's not going to be
something that's just pushed off
and pushed off and pushed off.
But again, that's something that needs
that sort of leadership
(01:43:17):
buy-in to say, well, yes, we accept
that we want the product to be secure.
Therefore, when security
say we need to address this,
then we're going to have to address it
within the pre-arranged time scales.
And if we don't, then we're going
to have to have a
process that says, well, look,
security raised at this point, we've not
dealt with this for whatever reason.
(01:43:37):
At the leadership
level, it's understood, OK,
we have not dealt with this.
We understand what the
business reason for not dealing
with this is.
But it's not something
that's just stuck with insecurity.
And it's just a security problem.
But it's something that is understood
at a level higher than
security or a level broader
than security that, OK,
we have this issue that's
not been dealt with.
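(As a hedged sketch, a risk acceptance record capturing that shared understanding might hold fields like these; the content is hypothetical.)

    Finding:      discount vouchers can be claimed more than once
    Risk rating:  high -- potential direct revenue loss
    Decision:     risk accepted until the next release
    Accepted by:  development leadership, security informed
    Review date:  agreed when the acceptance is granted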
Right.
(01:43:58):
I like the idea of using
things like your bug bounty program
to help inform risk.
Because usually what you have
to do for a bug bounty program
is you have to set
material damage externally
because every vulnerability found
has been found by
somebody not at your company.
So that should
automatically raise the risk
(01:44:18):
that it is discoverable externally, which
is one of those things.
So you can realize that risk.
You add a dollar value
to a lot of these things.
It's like, OK, this
vulnerability cost us $20,000.
Or this vulnerability cost us $10,000.
Or because the rules of engagement
have labeled them as critical.
And I think probably getting
buy-in from development on,
(01:44:39):
hey, what is a critical?
If it's a configuration
issue, or if it's a memory leak,
or what have you, having them
have buy-in on your bug bounty
program and what they consider critical
so that they will also
treat it critically is very important.
I think a lot of that is just
collaboration and breaking down
some of those communication silos.
(01:45:01):
We used to call them swim lanes.
I call them silos now
because that's what they are.
Anytime we're not
collaborating with other people,
it's a silo for me.
So some of those can get
very expensive over time.
Yeah, that's one of those things where
we talked earlier on about
(01:45:21):
the greenfield ideal state
and the existing legacy state.
And I guess bug bounty,
it feeds into the concept
of the expert coming from outside.
And an internal security person says, oh,
there's this problem.
Then depending on what
the relationship internally
is like, that may be well
received or not well received.
(01:45:42):
It depends on how
good the relationship is
with the security team
with a wider organization.
But if someone from outside, a bug
bounty person, finds it
and suddenly you've got
something to point to,
say, look, that person found it.
And they're this
expert bug bounty hunter.
We now need to deal with it.
So I don't love that because it's
one of those less optimal situations,
(01:46:05):
but 100% can help, both because bug
bounty has a real cost
because you do end up paying them a
certain amount of money
for their finding.
Because it clearly
demonstrates not only is this an issue,
but someone on the outside has
looked and they've found this
issue and it is completely validated.
So it certainly can be a powerful tool
from that perspective,
(01:46:25):
although it makes me a little bit sad.
Yeah, I mean, well, even before then,
you can use that example.
It's like, look, we
have a bug bounty program.
If we're going to consider this a risk
and not going to fix
it before it launches,
you could add to that
discussion, hey,
we have a bug bounty program.
If this ships, would
somebody from the bug bounty
(01:46:46):
researchers, would they
be able to find this issue?
And that would help in
the risk acceptance process
to maybe not accept that
risk and say, okay, yeah.
If they do this, then they'll be able to,
and you kind of have to
level with your developers,
I think, with a lot of folks.
And I think we're
going to talk about product
versus application
security here in a second,
(01:47:07):
but you kind of need to be
real with your developers
and folks about what
researchers will do to your bugs.
And some of it's a smash and grab,
or if you're dealing with hardware or
something like that,
they will find the JTAG on your device
and they will dump your firmware.
So any code quality issues in there,
(01:47:28):
that's why SCA tools are important,
because when they dump your firmware,
they're going to be able
to analyze your firmware
and they'll find your memory leaks.
They'll find your
libraries that are incredibly old
and outdated and what have you.
So shifting to that question,
one of the questions
you added, thankfully,
to the chat here, to the document is,
(01:47:50):
how does product security differ from
application security?
And I asked the question, I said,
what if the product is an application?
I don't know if you make a delineation
between what a product
and an application is,
but is there a difference
between product security
and application security?
So, there's definitely a difference.
I think there are lots
of different opinions
as to what that actual difference is.
(01:48:11):
I guess the way I see it is that
application security
is focused on how you build a piece of
code securely,
how you write code securely,
how you build features securely.
Whereas, I guess the
way I see product security
is a wider view.
It's saying, okay, I'm
not just writing
a piece of code, I'm not
just writing an application.
I am building some
(01:48:31):
sort of software product
or possibly even a
hardware product as well.
There is a wider deliverable here
that I need to be thinking about as well.
It's not just, okay,
am I, you know, if I've got
SQL injection bugs in my code,
it's more, okay, well, there's gonna be
a whole life cycle around this product.
And maybe if this is
a self-hosted product,
then it's gonna be all
sorts of considerations
around how we host our product,
how our cloud environment works,
(01:48:53):
how we get this
application from our developers
and laptops to our cloud environment,
what that process looks like.
These are all sorts of
wider considerations.
There's also the response aspect as well,
the product security
incident response team here.
We talk a lot about incident response,
and then we probably
talk a little bit less
about incident response
when it's in your application,
(01:49:14):
when it's, okay, you're
hosting this application
or even you're
delivering this application
to be installed on premises.
Okay, there's a vulnerability in it.
Now what?
How do we respond to that?
How do we deal with that?
How do we discover that?
What does the whole process
around that look like?
So I think it's about saying, well,
application security came to
solve a particular problem,
came to solve a
particular issue around, okay,
how do we write code securely?
(01:49:35):
How do we build an application securely?
And product security expands on that,
says, okay, well,
we've now got a whole load
of other considerations that are relevant
in the life cycle of this product.
And another part of that is that,
thinking a little bit
more widely as well,
say, okay, we're not just thinking about
the security
vulnerabilities in our product
or building features in a secure way,
but also what
security features do we want?
(01:49:57):
What features do we want in our product
to actually enable security,
enable security for our users?
Things like multi-factor authentication,
things like logging of the way
that different security
controls in the application work
so we can detect
attacks on that application.
All sorts of features
that aren't necessarily
about how we build features securely,
but rather how do we
(01:50:17):
make sure that our product
offers security features to its users
or to people hosting it?
Or, again, it's this wider movement.
It's thinking about,
okay, we're not just dealing
with a piece of code here,
we're not just dealing
with an application here,
we are dealing with a product
with considerations around that.
Right on, yeah.
Yeah, I can definitely see it.
(01:50:38):
So applications feed up into a product
or like you said, with
hardware and software,
you've got applications
that run on the product
and every one of those
represents a potential weakness.
And I think what you're talking about, what I'm alluding to here, is supply chain security.
And we've had a very
recent issue with that,
with the XZ library, we've seen it with Log4j,
(01:51:00):
we've seen it with libcurl,
libwebp, you know, FFmpeg,
any of these open source vulnerabilities that come out. For the most part, I think they're mostly open source vulnerabilities, but they show that we need visibility into our supply chain, into what's running here.
(01:51:22):
In 2018, of course, the NTIA, the National Telecommunications and Information Administration, started up its SBOM stakeholder process.
So SBOMs are all the new hotness. Do you see SBOMs as a, I mean,
they're gonna have to be put in.
So it's gonna require a
non-zero level of effort
from security and from
(01:51:42):
developers in the long run,
depending on who you work with, with eventual adoption, I think, across the entire industry.
But what do you see that
looking like in the future?
Because I mean,
SBOMs can be very generic,
or they can be really, really dense.
And there's a happy medium there,
but what do you see as a
happy medium in regards to that?
(01:52:05):
So I guess when I think about SBOMs, I think of SCA tools, because, you know, ultimately SCA, software composition analysis, is sort of the basis of that.
You know, what's going
into my piece of software,
what's going into my application,
what's going into my product.
And SCA tools are, I guess,
very sort of well known for coming up
(01:52:26):
with all sorts of vulnerabilities, and that suddenly raises all these questions about reachability, and am I actually vulnerable to this issue?
Am I not vulnerable to this issue?
But one of the things that SCA tools
or a good SCA tool should give you,
aside from all that, is
situational awareness.
It gives you visibility
of what's actually going on
in my application, what's
actually going on in my product.
And that's where I
think the SBOM concept
(01:52:47):
becomes really important.
Because it may be that
there are all sorts of reasons
why you're going to have a backlog
of potential or actual vulnerabilities,
but having the power to say,
okay, I know that
there is this, you know,
burning down the internet
vulnerability in this library,
where am I using that library?
And that, you know, comes from, okay,
where am I using that library
within my own
(01:53:08):
applications that I've built?
Where am I using that library
in other products that I'm using?
You know, maybe third party products.
Do I have an SBOM for
that third party product
so I can figure out, well,
do they have this issue?
Do I need to speak to
them about getting a patch
or getting some sort of update?
You know, having the ability to know,
okay, what is going on in my environment?
What do I have in my environment?
And I think that's a pretty key thing
(01:53:31):
to be able to have.
And, you know, like I
said, we could talk,
and I do talk lots about, you know,
how to deal with the
vulnerabilities aspect of SCAs
and, you know, grappling with all of those
and how you prioritize and how you triage
and what's real, what's not real.
But just that situational awareness,
just that inventory, I think,
is really, really important.
And I think that
factor of having an SBOM
for your own product, for
(01:53:52):
products that you're using,
is really, really useful to, you know,
when something does happen,
to be able to react to it fast
and be able to deal with it quickly
and mitigate where it's necessary.
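As a rough illustration of that "where am I using this library?" question, here is a sketch that searches a CycloneDX-style JSON SBOM for a named component. The layout assumed here (a top-level "components" array with "name" and "version" fields) follows the CycloneDX JSON format as commonly documented, but real SBOMs vary and the CycloneDX project ships proper tooling for this, so treat this as a sketch only; the file name in the example is hypothetical.

```python
import json
import sys


def find_component(sbom_path: str, target_name: str):
    """Yield (name, version) for matching components in a CycloneDX JSON SBOM."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    # CycloneDX JSON lists what went into the product in a
    # top-level "components" array.
    for component in sbom.get("components", []):
        if component.get("name", "").lower() == target_name.lower():
            yield component.get("name"), component.get("version")


if __name__ == "__main__":
    # Example usage (hypothetical file name):
    #   python find_in_sbom.py my-product.cdx.json log4j-core
    path, name = sys.argv[1], sys.argv[2]
    hits = list(find_component(path, name))
    for found_name, version in hits:
        print(f"FOUND: {found_name} {version}")
    if not hits:
        print(f"{name} not present in {path}")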
Right, right.
That makes sense.
So you talked about SBOMs as having a goal of providing visibility or, you know, in some ways kind of light threat intelligence.
(01:54:12):
You know, understanding what's going on under the hood.
You also mentioned that
tools should have a goal
and I'm trying to figure
out what you meant by goal.
Is it a goal from metrics wise
to show how effective your system is
or how secure it is
or is it a bit of both?
So I guess when I talk
(01:54:34):
about tools having a goal,
the way I see it is that too many
organizations have said,
"Oh, we need a SAST tool.
Oh, we need an SCA tool."
And their goal has been
to implement that tool.
But that's not security.
That's not bringing you security.
You know, you want to have a tool
or you want to have a process
to solve a particular problem.
(01:54:56):
I am worried about
developers writing insecure code.
I'm worried about them
writing vulnerabilities
into my product.
Now I could, as a
security person, go line by line
through the code that
developers are writing
and try and figure that out.
Or I could bring in a tool
that can help me with this problem,
that can find
vulnerabilities at the code level
and flag them up to
me, or at the very least,
(01:55:17):
point me in the right direction
so I know where to look and
where I might be concerned about.
So my goal is to make sure my developers
are not writing
vulnerabilities into my application.
In the same way, I'm concerned about,
I've got all sorts of
third-party libraries,
all sorts of other content
going into my application.
There are risks associated with that.
So my goal is to have
visibility
(01:55:39):
of what's going into my product.
And maybe my goal is
also to not deploy things
that have a CVSS of 10
into my released version
of the product.
That's my goal.
Now I can use an SCA tool,
software composition
analysis tool to help me with that.
But the goal here isn't
to implement the tool.
I'm not measuring, can
I implement the tool?
I'm measuring, can I solve
the problem using the tool?
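To ground that "no CVSS 10 in the released version" goal, here is a hedged sketch of a release gate: it reads an SCA findings export in an assumed, simplified JSON shape and fails the CI job when anything meets the threshold. Real SCA tools each have their own output formats and often native policy features, so the field names here are illustrative only.

```python
import json
import sys

CVSS_THRESHOLD = 9.0  # policy: block critical findings from release builds


def blocking_findings(findings_path: str, threshold: float) -> list:
    """Return SCA findings at or above the CVSS threshold.

    Assumes a simplified export: a JSON list of objects with
    "id", "component", and "cvss" keys. Adapt to your tool's output.
    """
    with open(findings_path) as f:
        findings = json.load(f)
    return [item for item in findings if item.get("cvss", 0.0) >= threshold]


if __name__ == "__main__":
    blockers = blocking_findings(sys.argv[1], CVSS_THRESHOLD)
    for finding in blockers:
        print(f"BLOCKED by {finding['id']} "
              f"(CVSS {finding['cvss']}) in {finding['component']}")
    # Non-zero exit fails the CI job: the pipeline enforces the goal,
    # the tool just supplies the data.
    sys.exit(1 if blockers else 0)
```

The design point is the exit code: the goal lives in the pipeline policy, not in the tool itself.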
(01:56:02):
And I think all too often, like I say,
AppSec becomes implement SAST, implement DAST,
implement SCA, implement this tool,
implement that tool.
The goal should be,
okay, what's the problem
I'm trying to solve?
And then how am I going to
use the tool to achieve that?
I think that reframing is very important
when we come to actually plan, okay,
how are we going to use that tool?
Who's going to be involved?
And we talked earlier on about roles,
about different people.
(01:56:22):
And the same thing
comes with tools as well.
You know, let's say, for argument's sake,
you've got a tool that you're
implementing on-premises.
It's a little bit, maybe old school,
but I'm sure there's
still plenty of organizations
that want their tools to be on-premises,
their code on-premises.
So suddenly you have lots
of different people involved
in that, you know, okay, this is an SCA,
this is an AppSec tool. Well, it's not just AppSec people who are supposed to be dealing with this.
You may need IT people to actually get it
(01:56:44):
installed internally.
You may need DevOps people
to build it into the pipelines
that are compiling the code
and that can then be
used to scan the code.
And obviously you need developers to actually address the output, or, you know, to address it after the security people have looked at it.
So again, we have a goal,
we want to build a
process to solve that problem,
to achieve that goal.
(01:57:05):
And we need different people
involved in order to do that.
We're probably going to
use a tool to do that.
We're probably going to use a tool
to accelerate that process.
The tool isn't the
process, the tool isn't the goal.
I think that's an important sort of mindset shift, an important way of looking at, okay, how am I going to address a particular software security problem?
Right.
So could the
(01:57:26):
developers have a different goal
for solving the same
problem that security would?
Is it necessary for
developers and security to align
on having the same
goal for the same problem?
Or is it like an enemy-of-my-enemy-is-my-friend kind of thing
where it's like,
developers are wanting
to solve memory issues
because of blah,
security wants to fix it because of blah.
(01:57:47):
Completely different goals,
but the same thing is achieved,
but they have different ideas on, you know, why they need to solve that problem.
So,
yes, for the most part in security, our goal is generally, you know, to improve security in some particular dimension.
(01:58:07):
Our goal is to solve a particular security problem. For the developer, their primary goal might be, "Okay, I want this to not give me a massive headache. I want this to not be too time-consuming."
But I think that if you
can get to a situation
where the developers themselves
see the value in the tool,
I think that's even more powerful.
(01:58:29):
So we did something interesting.
We started looking at certain SAST tools, the sort of secure code scanning tools
that let you write your own rules.
I'll name drop it, I don't mind.
It's a tool called Semgrep,
which is, has an open source version,
lets you write your own rules,
and then you can scan code
and find issues in your code
(01:58:51):
using those rules.
Now, you know, you can
build all sorts of rules
to find all sorts of security issues.
And we spent some time,
I did a talk a few months back now,
well, it was more than,
almost a year, time flies,
with my colleague, Michal.
We did a talk at
PyCon, a Python conference
about basically using Semgrep,
(01:59:11):
building custom rules and
solving sort of security problems.
And we showed the
people that came to PyCon,
Python developers,
and we showed them this,
and they're like,
"We can think of so
many non-security problems
that we can solve with this tool,
by just general awareness,
information about our code
or finding things in our code.
It's not a security thing,
it's just we want to search
our code for a particular thing."
And they're excited about this tool
(01:59:32):
because it solved other problems
we hadn't even thought about.
So that's also a really
powerful thing as well.
Michal is also doing
a course at Black Hat.
She's doing a course around building
sort of custom security testing.
And very much continuing
on from that idea of saying,
you know, how can we take these tools
that are ostensibly for security,
or ostensibly to
solve particular problems,
(01:59:53):
and how can we customize them
to solve other problems as well?
Not every tool can do that,
but if you can get to that situation
where you have this, I
guess, win-win process
where you're solving a security problem,
but you're also bringing
something developers like
and helps developers,
that's a really great position to be in.
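For a flavor of what those custom rules look like, here is a small sketch that writes a tiny Semgrep rule to disk and runs the open-source CLI against a sample file; the rule flags subprocess calls made with shell=True, a classic injection-prone pattern. The rule ID and file names are made up for illustration, and in practice you would keep rules in your repository rather than generate them, so check Semgrep's documentation for the authoritative rule syntax and options.

```python
import pathlib
import subprocess

# A minimal custom rule (YAML): flag subprocess calls using shell=True,
# which makes command injection far easier. The rule ID is made up.
RULE = """\
rules:
  - id: no-shell-true
    pattern: subprocess.run(..., shell=True)
    message: avoid shell=True; pass an argv list instead
    languages: [python]
    severity: WARNING
"""

# Sample code under test, containing one call the rule should catch.
TARGET = """\
import subprocess

def ping(host):
    subprocess.run("ping -c 1 " + host, shell=True)
"""

pathlib.Path("rule.yaml").write_text(RULE)
pathlib.Path("target.py").write_text(TARGET)

# Requires the semgrep CLI (pip install semgrep); --json makes the
# findings machine-readable so other tooling can consume them.
result = subprocess.run(
    ["semgrep", "--config", "rule.yaml", "--json", "target.py"],
    capture_output=True,
    text=True,
)
print(result.stdout)
```

The same pattern-matching machinery is what makes it attractive to developers for non-security searches over a codebase, which is the point Josh makes above.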
So, Semgrep, I don't know,
Semgrep was a security tool that
(02:00:13):
developers could use.
I assume that there is,
is this a failing of security
to not be able to sell
Semgrep to developers,
or maybe we don't know
how to suggest to them
how this would benefit them?
We want them to use the
tool because it benefits us.
You know, what's in it
for me is kind of the thing
that I like to ask at work.
(02:00:34):
It's like, yeah, this
is great to suggest,
but what's in it for me, right?
What do I get out of it?
What are the developers
gonna get out of using Semgrep?
And like you said, you showed it to them,
and they're like, holy crap,
I can use this for so many other things.
Is this, I don't wanna say,
is this a failing of security to say,
hey, here's a good tool we should use?
And on the other side of the coin,
are there developer tools
(02:00:55):
that developers are using
that would be like, hey, security people,
we're using this because of this,
but it might actually be a security tool
that would be useful?
I think it all comes down to this.
You talked about the Department of No,
I think that this
historic mindset of security
being the gatekeepers
against the hordes amassed
(02:01:16):
on the outside, and security is the one
who makes the rules and
everyone must obey security.
And it comes from that
mindset change of saying,
okay, well, let's think of ourselves as advocates for security.
We're trying to get buy-in for security.
We're trying to get
people to like security.
We're trying to get
people to want to adopt
what we want to do.
We're not using fear,
we're using sort of this
(02:01:36):
love and outreach approach.
And I think that once you have that mindset shift, and you have this situation,
we're thinking, well,
how can I make this good
for developers as well?
How can I make this
easier for developers?
How can I find
positives for them as well?
I mean, we talked
earlier on about training
and this idea, okay, well,
we can give the developers
a whole load of videos
and that'll teach them
about secure coding,
but I don't wanna do that.
(02:01:57):
I want to find some sort
of interactive platform.
I want it to be fun for them.
I want them to think, oh, I'm gonna spend three hours working on this fun platform, and it's gonna be a challenge and it's gonna be gamified
and I'm gonna learn, but
I'm also gonna have fun
at the same time.
I want them to have a
positive experience.
I want it to be useful for them.
And I think that is a
really important mindset change.
The idea is that maybe security was once the department of no,
(02:02:18):
but when we talk about application security, when we talk about product security, it has to be the person the developer is happy to see.
The person the
developer wants to work with
and that is making the developer happy
and bringing them things
that are gonna be beneficial
for them.
And yes, there is gonna be some effort required from developers; hopefully we can get buy-in
from senior people in
order to make that easier.
But there's always gonna be that
(02:02:38):
investment from them.
We wanna bridge that gap.
We wanna make it a happier
experience for them as well.
And yes, Semgrep is just one example of,
well, there's a plus side as well
because you can use this tool
and you can do fun
things with this tool as well.
But there are lots of other ways
to try and build that engagement,
make people happy.
You talked about DevRel.
It's very much the same thing.
It's advocating for security.
It's trying to make
(02:02:59):
security more accepted
amongst developers.
Very nice.
Yeah.
I was trying to find that
PyCon talk that you gave
with your friend there.
I was also looking up the training
that you'd mentioned
on Black Hat for the...
I'll find it.
They have the list of
(02:03:20):
them already available.
Looks like you have until the end of May
to get in on the early bird pricing
if you're gonna go to Black Hat.
So...
Yeah, I'll send you the links afterwards. You can add them to the show notes or something.
That'd be great.
Okay.
Cool.
I think I wanna be
(02:03:41):
cognizant of everybody's time.
I know that it's late for you
and I have to actually hit a
meeting in about 20 minutes.
So any last thoughts?
I know that you're also...
I wanna say you're also big in ASVS, which is the OWASP Application Security Verification Standard.
But has there been any change?
I wanna say, have there been any changes?
(02:04:02):
There's always changes to ASVS,
but I mean, it's
changed quite a bit since 1.0,
which I think we looked at back in 2017.
Are you looking at what worked then,
what works now for the
next iteration of ASVS?
How has that process changed since you
first envisioned it?
(02:04:24):
So I got involved with
ASVS a few years back
when version 4 came out.
It's had some minor changes since then,
but no major changes.
And what we're trying to do now
is trying to push out
a more major change.
Things have moved along since 2019.
We wanna try and make sure that we're accommodating that.
And we also wanna make it easier to use.
(02:04:44):
I think one of the
biggest challenges of ASVS
is it's very large.
It's a very, very big
piece of documentation.
I mean, ASVS is the Application Security
Verification Standard.
It's got basically
requirements to help people
either build software in a secure way
or verify that a piece
of software is secure.
And version 4 in total
(02:05:05):
has about 200 and something
requirements, so it's quite large.
And I think part of it is gonna be about
trying to make it easier to
adopt and easier to get into.
So right now our goal
is very much trying to go
through the different
chapters and try and refresh them
and ask the community,
is this still important
or is this less important?
Or are there other things we should be adding here? And we develop in the open
(02:05:26):
on GitHub
so everyone can sort of
see where that's up to.
But we're definitely trying
to push out that new version.
And if anyone wants to get involved,
anyone's got comments or is interested,
then certainly look up
the GitHub repository
and we're very active in the issues there
and pushing on, making
our way towards version 5.
Yeah, pretty good.
I have posted up a link.
(02:05:47):
It'll be in the show
notes for the podcast
and the VOD as well.
So you work with OWASP.
We had Kevin Johnson
on a couple of weeks ago
as a member at large of OWASP.
It's kind of how I met you.
I think we mentioned Kevin was coming on
and you said, "Hey, I'd love to get on."
I was like, "Yes, I'd love to have you."
(02:06:09):
What's the dynamic of
working in an organization
like OWASP where it's, for
the most part I would say,
and correct me if I'm wrong here,
but it's mostly a
volunteer kind of thing.
So people are, I wouldn't say doing it
out of the kindness of their hearts,
but it's a volunteer only thing.
So they get out only
what they put into it.
(02:06:33):
What does the dynamic look like for OWASP
for certain things like ASVS? There must be a lot of projects there that get very little attention, but are very instrumental to OWASP.
Everybody thinks of the top 10.
Some people think of the ASVS stuff.
There's some threat modeling, like Threat Dragon.
(02:06:56):
How hard is it some days for
you to motivate people
or to get comments for things like ASVS?
How often do you get
comments that aren't the same 15
or 20 people talking about something
because those are just the usual people?
So, I mean, OWASP is
certainly not my day job.
In my day job, I work as a consultant.
(02:07:18):
I work with development organizations
and other
organizations who develop software.
And OWASP is very much
sort of a passion project
and something that I'm
involved with on a volunteer basis.
And it's, again, it's something you
sometimes have to fit in
around other things.
It's something that
obviously can't be your nine to five.
(02:07:39):
And therefore there's
challenges around that.
But on the other hand, it's
a fantastic way of meeting
people, a fantastic way of interacting
with other people in the industry.
And it's generally about building a better network of people who understand this.
You can ask for help,
you can ask for ideas.
(02:07:59):
And I think there are a lot of benefits
to actually being
involved as a volunteer,
even though it does take up some time.
In terms of ongoing contributions,
ongoing involvement.
So most projects, well, all projects have leaders who are sort of
the primary people
involved over the course
of the project's life.
Some are more active,
(02:08:19):
some are less active.
And then the idea is we want to then try
and get other people involved,
especially at busier
times or times where we've got
something new that's come out that we
want people to look over.
So we're currently working towards
version five of ASVS.
I'm hoping to have a draft of that at some point over the next few months. At that point,
we'll probably make a lot
of noise, both personally
and through OWASP saying,
(02:08:40):
okay, here is the new draft.
Here's the new release candidate.
Have your say, have your input
and let us know what you think.
I think it's hard to have
a very large pool of people
over a large period of time,
but having a lot of people involved over a short period of time where we've got,
okay, here's a release
candidate, here's a new version.
Take a look, I think
people are definitely keen
(02:09:01):
to be involved in
that and definitely keen
to have their say.
And we certainly saw around ASVS 4,
a lot of people were
involved in reviewing
and making suggestions
on the release candidate.
And we expect similar
for version five as well.
But I got involved with OWASP in 2017.
And since then, it's
been a fantastic experience.
It's been really interesting, and it's taken me to
(02:09:22):
a lot of different places, and I've met a lot of different people.
Very cool.
Yeah, I don't come
from a coding background
and Kevin tells me that I don't have
to have a coding background
or I don't have to be
necessarily a developer
to do these things.
I just wonder what
would a contribution be
for somebody who's in
upper lower middle management
with a breadth of experience across many,
(02:09:43):
many industry verticals.
It's like, do you use
PMs like you use PMs
at normal companies?
Do you need somebody
who's a cat wrangler?
Or is it a somewhat thankless job to be
somebody who's like,
okay, hey, there's another RFC.
I mean, what does being a PM look like
for the OWASP organization?
How do you use them like that?
(02:10:04):
So I think that certain projects
have got this figured out
better than other projects.
I think with ASVS we've sort of been
very focused on our core thing
of, okay, we write security requirements.
And I think
historically we've been less good
at thinking about the other roles
that could help us as well.
So there are other projects that have sort of adopted that slightly more, something like CycloneDX, which is the OWASP project that,
(02:10:28):
I guess, does a lot in terms of SBOMs and supply chain.
And CycloneDX has quite a large team
of core people, some of whom
work on the standard itself,
others work on promoting on social media,
others work on managing their own project
or interacting with community.
And I think there's
certainly room for that as well.
And in ASVS, we're trying to think about
(02:10:48):
how we can broaden our appeal as well.
There's certainly room for project managers to try and allocate particular tasks, or make sure particular tasks are getting done, working towards a particular goal.
And that's certainly
something that's needed.
A lot of the projects also are
documentation projects.
There are all sorts of things around,
just making sure the
documentation comes out right,
looks right.
(02:11:09):
On the more technical side of that, someone might be a developer, or maybe even a DevOps person, who's very good at, say, pipelines and GitHub Actions, but not super familiar with security.
But a lot of OWASP
projects use GitHub Actions
to process the
documents and process the code
and come out with an output.
So I think there's definitely scope
for people with different roles,
(02:11:31):
different backgrounds
to be involved as well.
Very cool, okay.
I don't have anything else.
I just wanted to leave
you with the last word.
Is there anything else you'd like to say
before we take off?
Yeah, I think AppSec, product security, it's just a fascinating area.
It took me a while
(02:11:53):
when I was still coming up in the industry
to really understand sort
of what it was all about.
And I guess I'm still
learning to a certain extent,
but I think it's really interesting.
I think it's becoming more important.
Certainly, many organizations today are software development organizations,
and many organizations are not
software development organizations,
but they end up
developing a lot of software.
I worked for a client that
(02:12:14):
was involved in manufacturing.
And they were a manufacturing company.
They built things, but it turned out
that all their internal apps had been built and developed in-house.
They weren't buying off the shelf.
They had people,
developers internally doing that.
Now suddenly they're a
software organization.
So I think it's very much
a very widespread thing.
Something that does
require a certain level
of understanding and skill.
(02:12:34):
And I think part of the
idea of the Black Hat course
is to say, well, for
this particular course,
let's talk to security people about
application security.
Let's talk to you
about how to approach this,
how to get into this.
I guess it comes down
to a few key principles
from my perspective, the first of which is that
security is not
necessarily a developer's job.
You definitely need to make sure that
(02:12:55):
we're either trying to make it part of their job,
or we're accommodating that
and we're building that relationship.
Security, we don't want
to be the department of no,
we don't want to be a
blocker, we want to be an enabler,
we want to be people who
are helping development.
We want to make sure
that we're bringing things
that will help them,
bringing things that will work
where they are, that
meet them where they are
and not just bring them endless problems.
(02:13:15):
And I think by having this mindset,
having this attitude and understanding,
okay, how can we build
into the existing processes?
I think it makes an overall software security program a lot more successful and hopefully has
a much more significant impact.
So, yeah, hopefully
that's been a useful overview.
(02:13:36):
Yeah, yeah, fantastic.
Josh Grossman, thank you for being on.
Thank you for staying
late with us here today
and doing this on your own time.
I'd like to thank Bounce for allowing you
to have that hour, hour and
a half to be able to do that.
By the way, if you want to
check out Bounce Security,
you can go to
bouncesecurity.com, I believe is,
(02:13:57):
let me go check my, I have
it in the show notes as well.
Yeah, bouncesecurity.com.
If you're interested
in getting Josh's help
or any of their other teammates
to help with your AppSec stuff.
Yeah, how would people find you
if they wanted to find you online, other than Bounce Security?
I mean, if there's
other places that you blog
(02:14:18):
or anything like that?
So, the best place to get hold of me
is probably either Twitter or LinkedIn.
I'm sort of reasonably
active on both those platforms.
I do a bit of blogging through Bounce.
If you want to go on to the Bounce blog,
you can see some sort
of in-depth in the weeds
fun I had recently with a
particular software library.
(02:14:38):
I do some personal
blogging as well, but less so.
But yeah, most of the time I'm on Twitter and LinkedIn, and the best way is obviously to put the links in the show notes, which is probably easiest.
Yep, yeah, I've got links
to your LinkedIn as well.
Actually, I found your blog site.
You've got a recent article
from February up about Passkeys,
which I plan on
reading a little bit later.
(02:14:59):
(laughs) And cool, all right.
So, thank you, Josh, for
coming, appreciate that.
Appreciate your time.
Yeah, just want to thank everybody
for joining us on the stream chat.
We will have this up on YouTube VOD later
for everybody who might've
missed a little bit of it.
And yeah, thanks everyone
for coming and listening
to our special episode
(02:15:20):
here on BrakeSec Education.
You can find me on
Twitter, @bryanbrake.
You can find me on LinkedIn,
just do a search for my name.
Ms. Berlin, who could not make it today,
or Mr. Boettcher. Ms. Berlin's handle is InfoSystir, I-N-F-O-S-Y-S-T-I-R, on Twitter. Mr. Boettcher can be
found on LinkedIn as well.
And yeah, thanks everyone for coming.
(02:15:40):
Hope you have a good day.
We should be on our
regular Twitch stream tomorrow
at 3 p.m. Pacific.
And I'd like to thank all
of our new chatters as well.
And that was it for our show this week.
Thanks so much.
Have a great week.
Take care of yourselves,
because as we're fond of saying here,
you're the only you you have.
And we'll talk to you again soon.
(02:16:00):
Bye y'all. (upbeat music)