Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Joshua Schmidt (00:04):
All right, so you're listening to the Audit, presented by IT Audit Labs. I'm Joshua Schmidt, your producer. We're joined by Nick Mellem and Eric Brown, as usual, and today we have our special guest, John Benninghoff, who's going to talk about safety science. And, John, can you introduce yourself and give us a little background, maybe how you got into cybersecurity and safety science and what you're working on now?
John Benninghoff (00:25):
Yeah, so, john
Bettinghoff, I live here in the
Twin Cities, minneapolis, stPaul.
I'm actually currently at myco-working space in downtown
Minneapolis at the oldMinneapolis Crane Exchange and I
actually got into cybersecuritya long time ago when I attended
my first SANS securityconference in 1998.
(00:45):
So I've been doing it a whileand you know I got interested in
safety because, partly becauseof my grandfather who was a
pilot flew for passenger jetsfor American Airlines for a long
time.
Wow and yeah, and I saw insafety, you know, an excellent
(01:06):
track record for managing risk.
So the risk in flying a planehas steadily gone down over the
years.
The risk of, you know, housefires has gone steadily down
over the years.
And then more recently, a fewyears ago, I went and actually
pursued a master's degree insafety science.
I was online degree fromTrinity College, dublin and you
(01:26):
know that really kind of, youknow, although I'd done some
reading, this really was, youknow, a deep introduction to the
world of safety and learned alot from that and I'm starting
to apply those lessons tocybersecurity.
Joshua Schmidt (01:39):
That's great. Can you give us a little more background on where safety science stems from and how that became a thing? Because I don't know if a lot of people have heard of that. I mean, it's been integrated into all of our lives through the things you mentioned, maybe motor vehicles, house fires and airplanes, but how does that relate to cybersecurity?
John Benninghoff (01:57):
I found a really good definition of safety science in a book, The Foundations of Safety Science, and it is that safety science is the interdisciplinary study of accidents and accident prevention. I think that's a really good description of what it is. It's something that really started with the Industrial Revolution in the early 1900s.
(02:18):
You may have heard of Taylor and scientific management. He was kind of an early safety proponent, in terms of trying to find optimal ways of working in factories, and part of that was making it safer for the workers as well. But over the years safety has expanded.
(02:39):
A lot of it is now a blend of organizational psychology, engineering and other disciplines, and today it's grown into something we call resilience engineering. There are prominent safety scientists doing that work, and some people are starting to adapt it to security as well.
Joshua Schmidt (03:01):
I can't imagine that it was a very safe work environment back during the Industrial Revolution. Lots of dangerous machinery and shifty management techniques. I'm sure it sapped the health of a lot of workers.
John Benninghoff (03:15):
No, absolutely, and I think the history of safety as a science has been defined primarily by accidents. Early mining and industrial accidents were kind of the impetus for getting safety started, but plane crashes are often the cause for innovation as well. The checklist was the result of the crash of the B-17 prototype;
(03:37):
that's when the checklist was invented. And more recently, the Three Mile Island accident had a big impact on safety science, as did the space shuttle disasters.
Eric Brown (03:49):
That's interesting, John. It's interesting that you brought up aviation and fire protection. One of our customers deals with fire safety and fire protection, and it's been kind of interesting to learn a little bit about that industry as we work with them. But on the aviation side, Cirrus Aircraft came out, and now
(04:11):
I think others have copied them as well, but I think they were the first ones that came out with a parachute that would deploy for the full airplane. They're headquartered up in Duluth, and they're in Tennessee now as well. Pretty cool aircraft and a pretty neat concept, and it has certainly saved quite a few lives over the last, what, 20
(04:31):
years or so.
Nick Mellem (04:32):
So, John, we're getting into this whole broad topic of safety science, but the first question I've been thinking about is: what does your day-to-day look like? Are you working on any specific projects, or how are you pushing the industry forward? For somebody that's getting into this, what would your day-to-day operations look like?
John Benninghoff (04:49):
Yeah, so that's a good question. I'll answer it this way. One of the challenges that I definitely learned from safety is that you can't just take practices from aviation safety and apply them to other realms of safety.
(05:10):
What works in aviation isn't necessarily going to work the same way in marine safety, for example. So what I try to do with security is use safety thinking as a new perspective, a different way of looking at things, and take those lessons and adapt them to the world that we have today. One of the concepts from safety is that we
(05:33):
can't be successful if we only focus on preventing bad outcomes. So with phishing, we don't want people to click on the link. We send out training to basically try to prevent them from clicking on a link, but you're trying to prevent something that is a bad outcome. I think a better way of doing it is to actually focus on
(05:55):
the positive. So instead of focusing on just getting them not to click the link, focus on getting people to actually report phishing, and that allows you to take action based on those reports. And if you look at the system a little bit more holistically, you can say, hey, we're actually going to take those reports, and if we get
(06:19):
early reports from reliable reporters, because we've scored them, then we're going to automatically block the links in that phishing email. So we can actually use automation and the humans in our organization to create a secure environment for us.
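A minimal sketch of the trusted-reporter idea John describes here, written in Python. Everything in it (the Report class, the reporter_accuracy scores, the block_url hook, the threshold) is a hypothetical illustration under assumed names, not any particular email-security product's API.

```python
from dataclasses import dataclass

@dataclass
class Report:
    reporter_id: str
    urls: list[str]  # links extracted from the reported email

# Historical accuracy per reporter: confirmed phish / total reports.
reporter_accuracy = {"alice": 0.95, "bob": 0.60, "carol": 0.90}

AUTO_BLOCK_THRESHOLD = 1.5  # combined reporter weight needed to auto-block

def score(report: Report) -> float:
    # Unknown reporters get a small default weight.
    return reporter_accuracy.get(report.reporter_id, 0.25)

def maybe_block(reports: list[Report], block_url) -> None:
    # Sum the weights of everyone who has reported this campaign so far.
    if sum(score(r) for r in reports) >= AUTO_BLOCK_THRESHOLD:
        for url in {u for r in reports for u in r.urls}:
            block_url(url)  # e.g., push to a proxy or EDR blocklist

# Two historically reliable reporters flag the same link early in a campaign.
maybe_block(
    [Report("alice", ["http://evil.example/login"]),
     Report("carol", ["http://evil.example/login"])],
    block_url=lambda u: print(f"blocking {u}"),
)
```

The design choice is simply that a report's weight comes from the reporter's track record, so a couple of early reports from reliable reporters is enough to trigger an automatic block.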
Nick Mellem (06:34):
I see. So it's kind of like the study of the actual end user as well, and how you can prevent cybersecurity incidents from happening.
John Benninghoff (06:40):
Yeah, I think that's part of it. I mean, the emphasis in safety has really gone from historically looking at individuals and preventing people from making mistakes to looking more systemically and understanding how we can design the system as a whole, with the assumption that people are going to make mistakes, and then capture, control and prevent those mistakes from causing further damage.
Nick Mellem (07:03):
Now that we're getting further into the conversation: in a past life I worked in the oil fields in North Dakota and went through Hess training and OSHA. So, just thinking about this off the cuff, I'm sure they are tied into your industry heavily as well,
(07:25):
right? Obviously different swim lanes, but safety as a science, I'm sure, is big for them.
John Benninghoff (07:34):
No, absolutely. And I'm actually curious to know, Nick, from your experience: how effective was that training, and how did it match up with how you actually did your job?
Nick Mellem (07:44):
That's a great question.
I think we went through, it was like a four- or five-day training from Hess, this is 15 years ago: gas masks, all the different situations that could arise within that field. You practice those things and get real-life training, and I think having that hands-on training, not just book
(08:06):
training, but also experience from people who had those issues or been through that specific scenario and were able to teach us, was vital. It showed that this really can happen, this is very dangerous, and it did help make us operate a lot safer.
Eric Brown (08:25):
John, on the safety side, there could be instances where the pendulum may have swung too far, and I'd like to get your insights on that. I'll give you a for-instance. I've done some travel in Europe and have been to a few different countries where you're exploring, maybe, some of the
(08:45):
historic sites. For example, if you're exploring some, say, ruins, they let you go through the entire area; you can walk right up to the edge. You might be three stories up, but you can walk right up to the edge if you want to, of, say, an old castle or what have you.
(09:07):
And I've done that in a couple of different countries, Finland, maybe Greece, a couple of others. Not third-world countries, but countries at the same level that we are here in the US. But then you go to something like that in the US, and you're going to have yellow bars around the area.
(09:28):
You're going to have all this safety stuff; you can't even approach the edge of it, and in some cases it feels like it could be overprotecting us from ourselves and taking away from the enjoyment. And it was so unique to me that I remember remarking, wow,
(09:51):
if this was in the US, we wouldn't even be able to be within 10 feet of the edge of this thing, but here we're right up to the edge. Nobody's falling over; it wasn't a big thing. So I just think about those things from time to time: how much protection is just enough, without going overboard?
John Benninghoff (10:11):
Yeah, that actually comes up a lot in safety science. It's interesting you talk about visiting different sites. As you were telling the story, I recalled that I visited Ireland and went to the Cliffs of Moher, probably about 20 years ago, and you could walk right up to the edge. There were no barriers, nothing.
(10:32):
And I've been told, nope, it's all fenced off now and you can't get up to the edge. So I think it does come up, in the context of: do the safety rules interfere with work? And I think this reflects on security as well. One of the challenges with centralized policies,
(10:53):
centralized procedures and centralized controls is that they have one way of working, what we call work as imagined, and work as done almost never lines up with work as imagined and can often deviate significantly. So it's challenging to find the
(11:14):
right balance where you're keeping people safe but not interfering with work or, as you say, interfering with their experience of a historical site. There are even what seem like obvious examples, like: you can't drive your car unless your seatbelt is buckled, right?
(11:35):
It seems like there's never going to be a reason why that would be something you would want to avoid. Well, in fact, there are cases where that happens. The case I heard on one of my favorite safety podcasts, which I'll plug, the Safety of Work (I think it's really accessible for a broad audience), was an oil and gas company where they're
(11:59):
out in rural areas and they have to go through several fences, and they're going a few miles an hour. So are they going to buckle up, drive 10 feet, then unbuckle, get out of the car, open up the fence and buckle back in? Probably not. So I think the real challenge is: how do you find that right balance of allowing people to make
(12:21):
their own decisions about safety, while still keeping them safe and while still having the rules that you have to have?
Eric Brown (12:28):
And how much of that responsibility is on the user or the individual versus the corporation, right? Probably some of it is because we're a litigious society, and if those guardrails weren't in place then there might be lawsuits.
John Benninghoff (12:46):
With safety, that's one thing that's definitely different from security, because when you're making decisions about your own personal safety, you're making decisions that could directly impact you. If you're doing something unsafe and something goes wrong, you could get killed or injured. But when we're talking about security and making security decisions, the worst-case scenario
(13:09):
is probably that you get fired. So the personal risk is different, and that's again one of the things that I try to keep in mind when adapting lessons from safety to security.
Nick Mellem (13:19):
So I guess, branching off that, John, what are some of the more common safety risks that you're seeing, maybe in software development? I think we just saw that with CrowdStrike and Microsoft, so maybe you can touch on that. Or what are common trends that you guys are seeing?
John Benninghoff (13:36):
CrowdStrike is a really interesting story. I actually did a session at a conference where we talked through that whole incident, and it's easy, with the benefit of hindsight, to say that,
(13:57):
hey, CrowdStrike made mistakes, they did it wrong, they screwed up, and therefore they caused this outage. One of the big lessons from safety is that, put succinctly, blame is the enemy of safety, because blame is the enemy of learning. Blame impedes learning. And so if we can avoid blame, probably not when we're dealing
(14:19):
with the legal system, but outside the legal system, then we're going to learn more when things go wrong, or when things almost go wrong and we get to a near miss. With CrowdStrike, I think what's interesting is that they had a lot of very difficult trade-off decisions to make. From what I've read, including the reports from
(14:44):
CrowdStrike, they did invest a fair amount in quality assurance for their products. They did a lot of testing of the code, but they did less testing of the configuration changes that they pushed. But the whole reason they have those configuration changes is so that they can quickly get security protections out to their clients.
(15:05):
So how do you make the right decision without the benefit of hindsight? It gets tricky. I guess, getting back to your question of what safety risks I see in software: that was actually part of my academic work. I made the argument that our software systems are in fact becoming safety-critical, both in very real ways, in that they can
(15:26):
impact life and health safety, but also in the broader sense that they impact organizations and their ability to operate. And I think the time has come to bring engineering and safety engineering principles to building those software systems.
Joshua Schmidt (15:44):
That's interesting. I'm going to serve this up because I'm curious to see what Eric has to say. I know we've talked a lot about this on the podcast and in person: balancing safety protocols or implementations versus the fatigue that people in an organization may experience when being educated constantly about this or being
(16:05):
warned about this type of stuff, whether it's in software or phishing campaigns. I'd love to hear how some of that safety science has shown up in Eric's work, and to go back and forth between Eric and John on that topic a little bit.
Eric Brown (16:20):
Sure. Thanks, Josh. This is interesting, because I ran into it just this afternoon. I'm helping my mom through some healthcare challenges, and we were on the phone with, let's call it, a very prominent hospital that's Minnesota-based with a global reach.
(16:42):
I was on the phone with that hospital getting her account password reset. Going through the steps to do that, it was something we couldn't do over email; it had to be done with a live person.
(17:06):
And she gets to the point where she's creating the username. So she creates the username, and then she's putting in the password, and she says to the lady, okay, I'm putting in the password, do you want me to put a password in? And the lady, the tech support person from that hospital, says, yes, you need to put a
(17:27):
password in; here I use my pet's name and the year. And I almost fell out of my chair. And to your point, Josh, of what's that balance between safety and where is that intersection of education:
(17:49):
we've been educating users on passwords for 20-some years or longer. And then the lady proceeded to explain that the password had to be at least eight characters. And I was just thinking, wow, they were very concerned that my mom was sharing her account with me, for HIPAA-violation reasons, versus being concerned about a weak password that anybody
(18:12):
could get into. So I found that quite interesting. And we do have to put those guardrails in place on the technology side of cybersecurity, to prevent the users from injuring themselves or the company as a whole. And the shift away from passwords I think is a great
(18:33):
thing, as we look at passkeys and other forms of multi-factor authentication, because we just can't rely on the users and the training to provide that baseline.
Nick Mellem (18:43):
It's probably safe to say they didn't have any MFA after that. That was an option, but not mandatory. Interesting.
Joshua Schmidt (18:49):
Okay, I want to take it here. I'm really interested in what you said, John, about implementing safety science that maybe has come from engineering, and taking those principles and applying them to cybersecurity, in an instance like password management. It seems like there's a lot of psychology behind it. Or maybe you could speak to multiple things:
(19:09):
balancing that fatigue versus safety protocols, and how does that tie into the engineering aspect of things, or that viewpoint?
John Benninghoff (19:23):
There's definitely an analog in safety, and one of the studies within safety psychology is ergonomics, because even very early on in the history of safety they realized that ergonomics really can help promote safe outcomes. The classic example from aviation safety is that
(19:46):
pilots had controls that were right next to each other. One control, I think, controlled the flaps and the other controlled the landing gear, and they were getting them confused. So a brilliant engineer basically said, oh, here's what we're going to do: on one of the two levers we're going to put a little set of wheels, and on the other one we're going to put
(20:08):
a little wing, and then they won't mix them up again. And that kind of thinking has been carried throughout aviation safety. So if you talk to pilots today, they'll talk about things like cognitive load. You might actually start to hear people in technology talking about that as well.
(20:29):
It's an acknowledgement that there are just limits to what we can do. We need to absolutely apply those principles to security: understanding human limits and supporting people to make better decisions, rather than just blaming the user for doing it wrong. And I think we're starting to make that transition in security. I think the passkey is a great example of it.
(20:50):
It's an acknowledgement that passwords just don't work. In my last job I was actually fortunate to work with a research psychologist who had done research and said, hey, guess what? Passwords just don't work, especially for older folks, who have a harder time remembering things, and if we can do anything other than passwords it makes them much
(21:12):
better off. So taking that approach of changing the design of our systems to be more human-friendly, like passkeys, is one way that lessons from safety can be adapted to security.
Nick Mellem (21:25):
Along this whole conversation we're having right now about fatigue: Josh and Eric and I read an article a couple of weeks ago about a gentleman in a workplace who was saying that he's not ever going to go offline, his passwords don't matter, his data doesn't matter. I'm just a small fish in a big pond; I don't need to be that protective of my data because
(21:45):
nobody cares. So I'm kind of curious, John, what your take is on that, because we seem to be working so hard every day to control this data, to make environments as safe as possible, to be good stewards of their data, and we get one or two, let's say, bad apples who can poison everybody into not caring
(22:08):
about the company's data or their data. And this is directly, I think, like you said before, a near miss of the same topic of fatigue: are we driving them to that point, or are we not doing it enough, and maybe in the wrong way? So I'm curious about that.
John Benninghoff (22:23):
I think within an organization you're going to get a range of attitudes towards security. In the safety space, they actually talk a lot about safety culture: how do you create a culture where safety is valued and important? I think that's what we try to do with security as well. How do we create an environment where security is
(22:44):
valued? Technology workers, both developers and infrastructure engineers, increasingly do understand the importance of security, and they actually understand security pretty well. I was talking with someone a couple of months ago who said, I don't really know much about security, but I know you really probably should use multi-factor authentication and
(23:07):
apply your patches. And I'm like, well, you've just hit two of the top three things that you can do to improve your security. I think that the fatigue you're talking about often comes from something that I'll call security clutter. There's an idea in safety that was, I think, created by,
(23:28):
well, I think the paper is by Drew Rae, and I'm blanking on his partner's name; they're also the co-hosts of the Safety of Work podcast. The notion of safety clutter is that, over time, we have a lot of policies that tend to accumulate, and it's always easy to add more policy, both in safety and security.
(23:49):
But it's hard to take it away, and to look at the policies: which ones are actually working, making things more safe or more secure, and which maybe aren't really doing much, so we can get rid of those. And that just reduces that kind of fatigue and helps you improve the security culture.
Eric Brown (24:17):
I'm thinking about a client that we're working with. We recently got brought in to provide some security leadership and oversight to their organization, and going through the organization, they've got thousands of people who work there, and over a third of the people have
(24:39):
local admin. And just looking at it, it's like, what is going on here? It was something that came from the pandemic, where it was just easier to give people local admin when they were remote and they needed to install their software or whatever it was. But then in the aftermath, coming out of the pandemic and
(25:01):
having malware and phishing attacks coming into the organization, people with local admin click on something, they don't know that they've clicked on something malicious, and then we've got a problem. But going back and working to unring that bell has been really difficult, because the users feel like, oh, you're taking
(25:25):
something away from me, you're taking away something that I could do and now I can't do it, even though they never should have been able to do it from the beginning. Right? These are work computers; they're not your home computer. So I've always found that to be interesting, because it's more of a psychology problem than it is a technology problem.
(25:46):
Most of the users, 99.9% of the users, don't need local admin. I'd say 100% don't need local admin. But they were given it to install something at one point in time, and then it was never removed. So when you announce that you're going to remove it, it's like, well, the world is ending. And I'm sure it's like that in other areas of safety, like when
(26:09):
we were talking earlier on the call about your trip to Ireland, where there were no fences and now there are. It's like, I've lost something, because now I can't get as close as I used to want to.
John Benninghoff (26:20):
That was part of my early journey into safety. Back in about 2008, I was frustrated with the state of security and the de-emphasis of culture and psychology and understanding human behavior and how people think, and I think we're seeing that change in security.
(26:42):
I think we're increasingly acknowledging that we have to take that into account. I mean, the safety program where I did my academic studies is in the psychology department for a reason, because the primary academic research is really
(27:03):
focused on psychology. If you look at aviation safety, a lot of the technical problems around just keeping airplanes up in the air, the actual engineering and technology problems, were mostly solved in the 70s, maybe even a little bit before that. All of the advances in safety and flying after that have been
(27:26):
all psychology. Things like what's called crew resource management, which was implemented as the result of the worst aviation accident in history, at Tenerife. It was just starting to come onto the scene then, but that accident really accelerated the adoption, because in that case the pilot took off without authorization, and that
(27:49):
led to the plane crash. Now, when the crew is working together, the pilot flying and the pilot observing (they're not even the pilot and the co-pilot anymore) are constantly cross-checking each other, basically making sure that neither of them is making a mistake, and they call it out in a way that's
(28:09):
non-confrontational. It's designed to basically make the flight safer.
Eric Brown (28:14):
There was that incident, and you may recall this one; I think it was a flight of Japanese origin, I believe. The pilot flying at that time,
(28:40):
the captain of the aircraft, was doing something that they shouldn't have been doing, or maybe not controlling the aircraft in a way that they should have been, but the co-pilot, who was the first officer and subservient to the captain, didn't want to call out the captain's mistake, and I think that led to a crash. I'm not sure if there were fatalities, but that type of
(29:02):
seniority or hierarchy, not wanting, culturally, to correct the captain, led to that. And I think since then airline training has introduced ways to de-escalate and point out things that are happening that are maybe one person's
(29:22):
responsibility but that somebody else has oversight of.
John Benninghoff (29:26):
Yeah, absolutely. I vaguely remember a similar story, which I think was actually a Korean airline.
Nick Mellem (29:35):
Also, was it Korean?
John Benninghoff (29:37):
I think it might have been, yeah. But the rest of the story I heard: I remember part of the training was that they said, okay, you're all going to adopt English names. So it was, again psychologically, a way to kind of distance themselves from their traditional culture, by basically using the English names in the cockpit to address each
(30:00):
other. So, Eric, do you happen to be a pilot, then? What's your take on what I've talked about? Have I accurately represented it?
Eric Brown (30:12):
I do fly, yeah. And absolutely, spot on. When I started flying, it was about 25 years ago, maybe a little longer, and the crew resource management wasn't there at that level, the technology wasn't there, and even the aviation safety wasn't there.
(30:35):
From the perspective of when I started flying, not every aircraft had to have a transponder. But now every aircraft flying, unless it's in a special category, has to have a transponder. So you can look on FlightAware and see all of the
(30:55):
flights around us, and in the cockpit you can see all of the other aircraft around you at the same time. And you don't even have to be working with air traffic control, if you're in uncontrolled airspace, to be able to see those other pilots, which makes it a lot safer. Maybe you were looking out the window before, but you
(31:16):
didn't necessarily see everything; it's kind of hard to spot an airplane or a helicopter when you're flying. But now you see it on the screen, and then you're like, okay, yep, they're over there, and that has really helped a lot.
John Benninghoff (31:28):
So I'm going to draw an analogy to cybersecurity. I think the challenge for us as cybersecurity practitioners is: how do we actually create the technology that makes it easier for people to make the right decisions around security? How do we create the right culture? I think it's both the culture and the ergonomic aspects,
(31:49):
like when you're working on your computer and you're getting that email. We've done some simple things, like flagging, hey, this email is external. But maybe we can make that a little bit more advanced, where it says, hey, the AI says this has a very high risk of being a
(32:09):
phishing email, so maybe you should be extra careful about it; maybe contact that person on the phone and see if it's legitimate.
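A minimal sketch of the graded warning John is describing, assuming a hypothetical phishing classifier that returns a score between 0 and 1; the function names, message format and thresholds are illustrative, not a real mail-gateway API.

```python
from typing import Optional

def risk_banner(score: float) -> Optional[str]:
    # Map a phishing-risk score to a warning, or to no banner at all.
    if score >= 0.9:
        return ("HIGH RISK: this looks like phishing. "
                "Verify with the sender by phone before clicking anything.")
    if score >= 0.5:
        return "Caution: external message with some phishing indicators."
    return None  # low risk: no extra friction for the reader

def annotate(message: dict, classify) -> dict:
    # Prepend the banner to the body when the classifier is worried.
    banner = risk_banner(classify(message))
    if banner:
        message["body"] = f"[{banner}]\n\n{message['body']}"
    return message

# Dummy classifier standing in for whatever model the gateway actually runs.
suspicious = {"subject": "Reset your payroll password", "body": "Click here now"}
print(annotate(suspicious, classify=lambda m: 0.93)["body"])
```

The point of the tiered banner is ergonomic: strong friction only when the risk is high, and no clutter at all on routine mail.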
Eric Brown (32:14):
And on that AI: there are some great tools out there today that we work with and help customers with. One of the nice things about them is they'll remove a bad email from the user's inbox without the user even having to make those decisions. So 90% of the bad emails are removed, and then the ones that
(32:36):
aren't are kind of monitored, so to speak. But there's so much, as you know, coming through from an email perspective... if you can convince somebody to
(33:04):
click on it, you've really got a good foothold.
Joshua Schmidt (33:07):
The air traffic stuff that you were talking about reminded me of Airbus and that really bad crash in 2001, where the first officer was using excessive rudder inputs after some turbulence to try to correct the plane when it didn't need to happen, and I think that was ultimately determined to be what caused the plane to crash.
(33:28):
So I'm wondering, John, what do you think about having a balance between giving people some control? Because if you take it all away and put it on autopilot, and this goes for cybersecurity too, I'm sure people just check out completely. And I think, to Eric's point, that's maybe part of the issue of having guardrails everywhere; maybe
(33:49):
it's the litigiousness of the society, but if you nerf the whole world, people kind of stop thinking for themselves, and we want to have people engaged in what they're doing, right? So maybe you could speak to how, in developing software or implementing these security theories, you give people enough control that they feel
(34:11):
integrated in the process, while also keeping those bad emails out of their inbox.
John Benninghoff (34:18):
Yeah. So I think, for this, the good news is that we actually have an established practice within technology that has a good answer to that, and that's site reliability engineering. If you're not familiar, it was created at Google; it's kind of their approach to DevOps and operations and managing incidents and availability. Not by accident,
(34:40):
a lot of safety terminology, safety thinking and safety science has actually moved into the SRE space. There are a handful of other people in technology who have master's degrees in safety science, not necessarily from my school, but a different, still excellent, school. Safety principles are being applied in SRE;
(35:02):
it's there if you know where to look for it. Part of what they talk about is that, with people, what you want to use the machines for is reducing toil: those repetitive tasks that just wear you down, that are basically a heavy workload, the things that are routine and predictable. That frees people up to do what they do
(35:25):
best, which is deal with the unexpected, and the unexpected is always happening. In safety, like security, people are both the strong and the weak part of the system. The machines aren't going to be flexible and adaptable, but the people are, so when they come up against a novel situation, they're the ones who are going
(35:46):
to be able to do something to actually improve security. I mean, even if you go all the way back to Cliff Stoll and the book The Cuckoo's Egg: Cliff Stoll was a Unix sysadmin at a Department of Energy facility, in, I think, the 1980s, and he
(36:08):
saw a few-cent discrepancy in the accounting logs for his system. That discrepancy, and his insistence on figuring out why it was there, led him to identifying an East German spy ring and catching them all, because you had one sysadmin who was paying extra close attention, saw that something wasn't right, and
(36:31):
kept digging.
Joshua Schmidt (36:33):
Eric, do you have any anecdotal stories on how maybe something like that has happened at IT Audit Labs? Or Nick, maybe you guys could think of a real-life scenario where the human factor has been what saved an organization.
Nick Mellem (36:51):
Well, I can think
about a million where that
didn't save the organization.
John Benninghoff (36:53):
Yeah, I was going to say, I actually have an opposite story, which I'm happy to report. At one of the companies I worked at, we had a DBA get a phishing email, and this was probably a little bit earlier in our sophistication of responding to phishing, but it looked wrong, and they didn't really know what to
(37:17):
do with it, because we didn't have a good reporting system yet. But they knew a security person. So they literally walked over to this person's desk and said, hey, I got this weird email, what do I do about it? And as a result, we were able to stop that attack from working. It was targeting our administrators, trying to get into our admin accounts, and the security team was able
(37:38):
to respond and prevent it from blowing up, because that one person just happened to know somebody they trusted in security, walked over to their desk and said something. That's exactly what you would want to happen every time.
Nick Mellem (37:49):
Yeah, you know, go over and ask: did you mean to send me that email asking me to get 10 iTunes gift cards and scrape off the back and send you the codes? Ask the question. I think we can all safely say that nobody's ever going to get upset that you asked the question just to be sure. I've got a question that doesn't involve the cockpit, but it's about what we've been talking about
(38:10):
the whole time: what if an organization wants to do better at safety science? If they want to get into it, if they're realizing they're falling short there, from your point of view, John, is there a specific area to start, or is it just so widespread that you pick
(38:31):
and choose?
John Benninghoff (38:32):
You know, I actually mentioned security clutter; that's one of the ways, right? Just take stock and take a look at the security policies and rules that you've accumulated over the years, and actually pare those down. I think part of it really is a cultural shift and a mind shift. I've had discussions with peers about this,
(38:54):
and a peer of mine made an excellent statement, which was: we don't expect the CFO to make the company profitable, but we do expect the CISO to make the company secure. And so part of it is that mind shift, right? Security is, I don't want to say everyone's
(39:16):
responsibility, but it's a shared responsibility; everyone in the organization contributes to the security of that organization. And there's the movie that I actually named my company after, Safety Differently, which was created by a safety scientist by the name of Sidney Dekker, who is also a pilot.
(39:39):
In the movie, the CEO of a gas extraction company says: I had to realize that, as the CEO, I'm responsible for all aspects of the performance of my company, including safety. I'm responsible for the safety in my
(40:02):
organization, not my safety team. Now, obviously the safety team has an important role to play in that, but I do think that the leadership of organizations has to understand, acknowledge and accept that it is an executive responsibility to deliver security throughout the org. You can't just put it all on the CISO, and I've seen that
(40:23):
happen, right? One of the CISOs I worked with told me about an interaction he had with the CEO, and it was basically, hey, no breaches, right? And that was about it. The message is: if there's a breach, it's your fault. That's not fair to the CISO, and it's not fair to security. And I think the flip side of that is that, as security
(40:44):
people, we have to understand how the work is done, how the business operates, and work to support that in a secure way.
Nick Mellem (40:52):
On that thought process too, John, what you were saying about breaches being your fault if they happen: probably not on the safety science side, but that's contributed a lot to burnout, and maybe less productive work, leading to breaches and whatnot.
Eric Brown (41:06):
It could come down to span of control too, right? If I come in as the CISO, and breaches are my responsibility, but you don't give me full autonomy to do whatever I think is necessary in the organization to prevent the breach,
(41:27):
the things we might want to do, like removing local admin or whatever else we think is necessary, then that really creates friction and it really creates a problem, because now you're assigning me responsibility without my having any authority to
(41:47):
do anything about it.
Joshua Schmidt (41:48):
Eric, I'm curious: is there a time when you were given the reins to shore up security and you did have a positive response? Maybe by leading some co-workers, or you yourself being the hero of the day early in your career, or enabling other people to have a win in that regard?
Eric Brown (42:13):
I had a couple throughout my career, but the ones I look back on and have the most positive thoughts around are the ones where we've come into an organization and really built a team that could then carry that organization forward. The team going through trials and tribulations
(42:37):
of breach response, then putting in place controls that help the organization get better and help the organization evolve, and the team really being recognized as helping the organization: that's a win for me. We've done that a couple of times.
(42:58):
Nick's been part of that journey, and that I think is the most rewarding, because you can look back. As Dan Sullivan says, it can be somewhat disheartening to always continue to measure forward. So it's like, oh, we were going to have 15,000 endpoints
(43:18):
protected but we only got to 14,000, right? That can be kind of defeatist, instead of looking back and saying, wow, we got all of these things done in the last six months, or even in the last week. Because in information security it can be a bit oppressive; the noise never stops coming in.
(43:40):
But if you look back and celebrate those successes, as individuals on the team, and then as the team, and then as the organization, it's really rewarding, and it just kind of flips it around to be able to reflect on that positive experience.
Joshua Schmidt (43:58):
Seems like a lot of this stuff boils down to psychology, John. You mentioned installing those icons in the cockpit; I think the seatbelt was invented in, you know, the 1800s, the 1880s or something, but wasn't implemented or mandated until like 1965 or something like that.
(44:19):
So do we have a lot of these tools already, and it's just more about, like Eric was talking about, enabling people, psychology, and how we train and create a culture? Or what does the future look like for implementing these safety protocols and safety sciences?
John Benninghoff (44:37):
Yeah, I think you're right, Josh, that we really do have a lot of the technologies already. I think a lot of it is about how we use organizational psychology to get them implemented in organizations. And, Eric, you touched on something I think is really important. One of the big shifts in safety most recently, and this is
(44:58):
reflected in Safety Differently and in the work of other safety scientists (Erik Hollnagel actually coined the term Safety-II), is, as he says, that you need to focus not just on when the bad things happen and preventing those. You need to focus on the good work that you're doing, because
(45:18):
you can't have a science of the non-occurrence of events. You can only have a science based on things that happen, not on things that don't happen. So promoting the work of security, promoting working securely, I think is really important.
(45:42):
There was a recent academic paper that was shared with me that was a literature review of what works in security, and the top three things are things that we know how to do and can do, and two of them aren't even really security activities. Reduce your attack surface was number one. I mean, yeah, that's sort of security, but it's really just
(46:04):
about, hey, do you have a good inventory of your systems? Are you turning off the things that you don't need? Number two, patching. So patching, maintenance, proactively upgrading your technologies: these things really promote security, but they don't just promote security performance, they
(46:24):
promote other forms of performance as well. And it's about working with organizations and getting them to understand: hey, if you want good security and good availability, software is like machinery, you absolutely have to invest in maintenance. Today, for almost every organization, you need a certain minimum level of maintenance done, otherwise you're going to see bad outcomes
(46:45):
on the other side. And the third one is multi-factor authentication. It's a very well-understood technology now, and it's increasingly adopted, but adopting it wholesale can do a lot to improve your security.
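A minimal sketch of what checking those top three might look like in practice, assuming a toy asset inventory; the field names, allowed-service list and thresholds are invented for illustration, not taken from the paper John mentions.

```python
from datetime import date

# Toy inventory rows; a real one would come from a CMDB or asset scanner.
inventory = [
    {"host": "web-01", "services": ["https"], "last_patched": date(2024, 5, 1), "mfa": True},
    {"host": "legacy-erp", "services": ["https", "ftp", "telnet"], "last_patched": date(2021, 2, 1), "mfa": False},
]

ALLOWED_SERVICES = {"https", "ssh"}  # anything else is attack surface to question
MAX_PATCH_AGE_DAYS = 90

def review(asset: dict, today: date = date(2025, 1, 1)) -> list[str]:
    findings = []
    extra = set(asset["services"]) - ALLOWED_SERVICES
    if extra:
        findings.append(f"unneeded services: {', '.join(sorted(extra))}")
    if (today - asset["last_patched"]).days > MAX_PATCH_AGE_DAYS:
        findings.append("patching overdue")
    if not asset["mfa"]:
        findings.append("MFA not enforced")
    return findings

for asset in inventory:
    issues = review(asset)
    if issues:
        print(f"{asset['host']}: {'; '.join(issues)}")
```

The review deliberately checks only the three levers named above: surface you can turn off, patch age, and MFA coverage.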
Eric Brown (47:00):
John, that's really poignant, and I'm reflecting back on what you said about patching. I think there's room that we can still grow in those areas. We've seen all too often in organizations where they've got
(47:21):
environments that are set up and then, for whatever reason, on the network side or the server side, they're not maintained or patched, and then you run into
(47:58):
legacy environments that are end of life, and there's really not a great reason why they're not patched. I mean, you can hear a thousand reasons about why they're not maintained, but there's not really an acceptable reason. And as you look at technologies that are out there today, and I'll pick on one, Meraki, from a networking
(48:21):
standpoint, I think they did something really good in that the patching is just integrated into the ecosystem of the device. The device is going to go out, get its own patches, apply its own patches and maintain itself. Of course, you have to allow it to do that. And as we pivot to SaaS-based solutions: we did a project
(48:48):
recently with Dynamics 365, which is Microsoft's ERP system, and with that solution you're only allowed to be a maximum of two releases behind. You have to take one release, then you can skip one, then you've got to take the next one, so they don't allow
(49:10):
you to fall too far behind. And I think that shift in methodology and in operating is really what we need, so we don't end up with these Oracle environments that are older than our kids, when the technology exists to maintain and support them in an automated fashion.
John Benninghoff (49:33):
Yeah, absolutely. And just building on that, it's kind of funny: one of the other things I've worked on, which is not really related to my safety work but is, let's say, adjacent, is risk quantification. And I say risk quantification, not cyber risk quantification,
(49:54):
because I worked with a colleague at our last company, and we had a legacy system, and they actually said, hey, we're worried about outages. At the time I was working on the SRE side, starting an SRE practice at the company. I said, okay, well, my friend can actually analyze your system
(50:15):
and estimate the level of risk in there. Well, he did the digging, and what he learned was that the risk of the outages was actually pretty small. But along the way, by talking to the users of the system and the business owners of the system, and asking the question, well, what else are you worried about, he essentially learned that it was: hey, this is a really old
(50:38):
system, it's functionally obsolete, and we're worried about losing customers because of it; in fact, we already have lost one. The business risk of running on that system ended up being much larger than the security or the availability risk. One of the lessons from safety and security that I think we can bring to improving business operations and business
(50:59):
performance is that people have a really hard time thinking about risk. So if we can show them the risks in a way that helps them make better decisions (in this case it was an investment decision), we need to present that risk in dollars and cents, right, as the likelihood of losing a specific amount of money.
(51:20):
When that analysis was presented to our business partners, they basically said, well, we've been delaying upgrading that system, replacing that system, for a long time. But it went from that to: how soon can you get started?
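A minimal sketch of putting risk in dollar terms the way John describes, using a simple Monte Carlo estimate. The event frequency and loss range are made-up illustrative numbers, not figures from the analysis he mentions.

```python
import random

def simulate_annual_loss(freq_per_year: float, loss_low: float,
                         loss_high: float, trials: int = 100_000) -> list[float]:
    # Very crude frequency/severity model: approximate a Poisson arrival with
    # a handful of Bernoulli draws, then price each event uniformly in a range.
    draws = max(1, int(freq_per_year * 10))
    losses = []
    for _ in range(trials):
        events = sum(1 for _ in range(draws) if random.random() < freq_per_year / draws)
        losses.append(sum(random.uniform(loss_low, loss_high) for _ in range(events)))
    return losses

# Illustrative scenario: losing a customer off the legacy system roughly once
# every two years, costing somewhere between $200k and $1M each time.
losses = sorted(simulate_annual_loss(0.5, 200_000, 1_000_000))
mean = sum(losses) / len(losses)
p90 = losses[int(0.9 * len(losses))]
print(f"Expected annual loss: ${mean:,.0f}; 1-in-10-year loss: ${p90:,.0f} or more")
```

Framing the output as an expected annual loss and a 1-in-10-year figure is what lets a business owner compare the risk directly against the cost of replacing the system.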
Joshua Schmidt (51:33):
That's a great business tip: show them how it's going to affect the pocketbook, and it cuts to the chase.
John Benninghoff (51:40):
Well, that's
how businesses make decisions.
Joshua Schmidt (51:43):
It makes sense. You know, we're almost at an hour. I just wanted to see if you guys had any other questions or topics you wanted to touch on. Otherwise, I'd like to hear what you've been working on recently with your company, John. Maybe you could give us a little insight there, and then we can wrap it up for the day.
John Benninghoff (51:59):
Yeah, so after a long career, and kind of based on the success of a talk I did on security differently, I'm starting my own consulting company, which for now is just me. I'm excited to offer my services to people who want to
(52:20):
assess their security posture through a safety lens. We'll actually look at whether you're doing the positive activities, like maintenance, that give you the good security outcomes, and help with the strategy and with creating programs that drive higher levels of engagement with employees and take advantage of the good work that
(52:41):
they're already doing.
Joshua Schmidt (52:42):
Well, thanks so much for joining us today, John. It's been a really stimulating conversation, and I had a great time chatting with you, and I'm sure Eric and Nick did too. I guess I'll wrap it up there. My name is Joshua Schmidt; I'm the producer of the Audit, presented by IT Audit Labs. You've been joined by Nick Mellem and Eric Brown, as usual, and today our guest was John Benninghoff, talking about
(53:04):
safety science. Thanks so much, John. We enjoyed your time today and hope to see you down the road.
Eric Brown (53:10):
You have been listening to the Audit, presented by IT Audit Labs. We are experts at assessing risk and compliance, while providing administrative and technical controls to improve our clients' data security. Our threat assessments find the soft spots before the bad guys do, identifying likelihood and impact, while our security control assessments rank your level of
(53:32):
maturity relative to the size of your organization. Thanks to our devoted listeners and followers, as well as our producer, Joshua J. Schmidt, and our audio-video editor, Cameron Hill. You can stay up to date on the latest cybersecurity topics by giving us a like and a follow on our socials and subscribing to this podcast on Apple, Spotify or wherever you source your
(53:56):
security content.