Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 2 (00:04):
Welcome to another episode of Simple Talks.
Today I have as my guest Mr. Bill Shearstone, Director of Information Security in the energy sector.
Bill has over 15 years of experience focused on IT security with the Department of Defense and in the commercial sector, where he is today.
So let's welcome Bill.
(00:26):
Hello, can you tell us a bit about yourself?
Speaker 1 (00:30):
Hi, how are you? Again, my name is Bill Shearstone.
I've been in the information security business for quite a few years.
I had an interesting stint with the Air Force and the Department of Defense, and then, once I finished up there, I decided to go into the civilian sector, where I've worked for companies in the insurance industry.
And here I am now, finally working in the energy sector.
Speaker 2 (00:57):
And today, Bill, you are in a leadership position.
So what do you think is the biggest challenge for a security leader today?
Speaker 1 (01:08):
You know what?
It's almost cliché-ish, but it's AI, and we're looking at it in different ways.
We're a regulated industry, and we also have some intellectual property concerns, and then you have to balance that with the benefits that are coming with these new tools and technologies, with AI.
So one of the things right now is we realize we have to embrace
(01:30):
it and we have to bring it in.
So we're just trying to figure out the best way to bring it in, in a controlled manner, so that we do reduce our risk and still capitalize on the benefits that AI is showing us.
We've been focusing on that.
Right now we're working on doing a couple of pilots with different technologies, and before we go ahead and proceed with these pilots, we've got to give our executive team
(01:53):
confidence that the risks are controlled and the risks are known.
Speaker 2 (01:59):
Perfect. And I'll tell you, you are now the owner of the record for bringing up AI in the podcast one minute from the start.
I think it would be very hard to break that record, so I think you're going to hold on to that one for a very long time, right?
But yeah, you're mentioning having some initial
(02:20):
work and experience with AI on the defense side.
But I know from previous conversations that you also have experimented with that as part of pen testing activities, so why don't you tell us a little more about that?
Speaker 1 (02:37):
Sure, I'll frame this a little bit.
You know, when we do our pen tests, we have a good relationship with the vendor that we use.
We do our traditional ones, a particular pen test where we just ask what's vulnerable on the outside and we don't tell them much.
But then right away we get into more of a collaborative engagement with our pen testers.
(02:57):
We give them a lot of access to our internal systems so they get a good view of what the internal vulnerabilities are quickly, aside from trying to break in through a phishing attack or social engineering and what have you.
So we were having this dialogue, and this was almost a year or so ago now, and the testers said, we've done some social
(03:17):
engineering with you before, with phishing.
Why don't we try a deepfake?
And I thought, well, that's interesting.
So we thought about it a little bit, and that turned into a scenario, and I thought, well, this would be a great training opportunity to go through.
That way, if we do get some good
(03:39):
information that comes out of this, I can use it for security awareness.
So we decided to come up with a scenario where we would try to fake out our help desk technician into resetting a person's password or MFA.
And as we were talking through this, it was like, you know what, I don't really want to target somebody in my organization to do that.
(04:02):
So what we ended up doing is we brought our pen testers in as contractors.
Basically, we gave them a contractor account on our system, and then we'd go through the scenario.
When we were doing the initial planning for this, we had to do a couple of dry runs.
Believe it or not, the technology at the time we did
(04:23):
this wasn't as mature as it is now.
You know, at the time you'd read the article about the deepfake that happened with the Hong Kong company and how they were taken for millions of dollars.
Well, the tools that our testers were using weren't as good.
For example, the audio wasn't there and the video was a little bit choppy.
So we went through that, and it was like, okay,
(04:46):
I'm not so sure it's going to work.
They came back a couple of weeks later with an upgrade to this tool.
And let me back up a little bit: this tool is open source.
It's free.
Anybody can get it.
Speaker 2 (04:57):
Right, it's not like they're expending insane amounts of money to do a deepfake.
They're using open source software.
Speaker 1 (05:03):
It's open source.
The only challenge was having enough compute power to actually use it.
That was one of the reasons it didn't work at first.
So they actually did a dry run with me, and I was the person they were going to impersonate.
And, you know, I have hair, I'm clean-shaven.
So what they did is they threw up a video of the pen tester
(05:24):
using my face.
Now, think about a typical pen tester, all right?
They have the long goatee beard, you know.
And this person happened to be bald.
So when I first saw this, I saw my face with a goatee, bald.
Oh, that's hilarious.
They were laughing.
So I was like, all right, you know what?
I think it's good enough to get it working, but you'll have to
(05:47):
tweak it a little bit.
So what the tester decided to do is he's gonna shave and he's gonna wear a hat.
Okay, that may work.
So then we decided to go through this exercise.
We have four people involved in this.
We're gonna call the first one the target: that's actually the pen tester whose MFA we want to get reset.
Then we're going to have the impersonator, the person who
(06:10):
impersonates me, and that's going to be the person who's actually running the deepfake.
And then we have our victim, which is going to be a help desk technician within our company.
And the way we have it set up, it's hard for somebody on the external side to initiate a Teams call.
We wanted to do this via Teams, so we actually needed somebody to
(06:32):
facilitate this conversation.
So we ended up using another person on my security team as a facilitator.
So basically, we set up our help desk.
You know, again, this would've been a lot harder to do if we didn't have some inside help.
But again, the purpose of this was not to really embarrass or trick the target.
The purpose was to come up with a training scenario, or just a
(06:55):
feasibility scenario that we could use for training.
So we went along with it.
The person on my security team set up the call, reached out to the help desk guy and said: hey, we're having a hard time getting this contractor to reset his MFA, can you reset his MFA for him?
(07:15):
And when we went through the scenario, the person impersonating me didn't have audio, because the tool actually couldn't handle the audio.
So imagine, in the video, he's waving his hands, no audio, and he's typing: hey, can you reset this contractor's MFA?
So, because the call was initiated from someone
(07:38):
on the inside, he had that trust, even though the facilitator didn't say a word.
He saw my face, and it was like, you know what, okay, yep, this is legit, I'll go ahead and do it.
So it was good enough to convince the help desk technician to reset the contractor's MFA.
All right.
So that happened, and we closed it all out.
And then we had a feedback session afterwards.
(08:02):
I was actually out of the office on vacation at the time, and when I came back, they had recorded the video.
So I looked at the video and, oh my god, it was uneasy seeing my face interacting; that wasn't me.
So what the tester ended up doing is he was clean-shaven, and because he was bald, he wore a hat.
I never wear a hat, but still, just seeing my face on that was
(08:26):
amazing.
So then I decided to break the news to the help desk technician, like, hey, I'm sorry we set you up.
The reason we did this is for training.
He was embarrassed, and he was like, you know what, I'm sorry, things didn't seem right to me.
It's like, what do you mean?
It's like, you know, I just had this gut feeling that things weren't right.
(08:47):
But because I saw your face, and my partner was on the other call, even though he didn't say a word, he just saw his Teams presence, it brought legitimacy to it.
And then we walked through the video and I showed him: well, listen, did you notice?
Because it was an inside call, the contractor who had my face even had the contractor name displayed at the bottom of
(09:09):
his Teams tile.
But because he saw my face, he didn't even pick up that it was a different name on that Teams call.
So again, he was embarrassed, and I said, you know what, I apologize, but we wanted to do this for training.
So the outcome of this is we actually proved that even at a company like mine, a deepfake is actually feasible and can be
(09:31):
done.
The good news is I took pictures of this.
Again, we didn't sell the guy out; we didn't want to embarrass him any more.
So every year we do in-person security training, and I put that up to show everybody that, yes, it's feasible.
And it's neat to see the reactions from them, because people are like, you know, you read about it,
(09:52):
but if you actually see that it can actually happen, it really sets in.
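The process gap the help desk technician describes here, a visible caller name that did not match the account the request was about, can be sketched as a simple consistency check. The function and names below are invented for illustration; they are not a tool the company uses:

```python
# Toy sketch of the verification step the help desk missed: compare the
# on-screen caller identity with the account named in the request.
# Names are hypothetical examples, not real accounts.
def request_is_consistent(caller_display_name: str, account_owner_name: str) -> bool:
    """Flag requests where the visible caller doesn't match the account owner."""
    return caller_display_name.strip().lower() == account_owner_name.strip().lower()

# The deepfake showed the director's face, but the Teams tile carried
# the contractor's name -- a mismatch a check like this would surface.
print(request_is_consistent("Contractor Account", "Bill Shearstone"))  # False
print(request_is_consistent("Bill Shearstone", "Bill Shearstone"))     # True
```

A real help desk workflow would pull the account owner from a directory lookup rather than trusting anything shown on the call itself.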
Speaker 2 (09:56):
It makes it closer to reality, right?
One thing is reading the news, and the other is seeing it in the training.
Right.
Speaker 1 (10:09):
It happened right in a test, at the company I work for, with people that I'm seeing here live.
That brings it far closer than just reading it in the news.
And when I went through the training scenario and everything like that, everyone was like, well, who fell for it?
It's like, no, I'm not letting his name out.
He ended up telling people himself, you know, he's like, yep, it was me, it was me, which I didn't want him to do, but he felt the need to get it off his chest, I guess, that it was him.
So it's good.
(10:29):
So, because of that, I think it was an important tool, again, to give some realism that things like that can happen.
Speaker 2 (10:38):
Yeah, that was very good.
An interesting thing about listening to this exercise is that there are two points that caught my attention, and they are not necessarily related to the technology or the innovation around deepfakes.
The first one, and I want to bring this back later, is related to you running these exercises
(11:02):
already giving some level of internal access to the pen tester.
I really like that approach, and I want to talk more about that, because I think a lot of people do the traditional black-box, from-the-outside pen test, and they waste a lot of time and resources doing it over and over again.
But what I wanted to discuss now is the training outcome of
(11:26):
this exercise, because when you are showing this to people, what we want them to learn is that they may be tricked by a technology like this and that they should really be more suspicious about non-standard requests or things that are not following the required process.
And one of the outcomes of that would be that we are asking them to challenge authority more frequently.
(12:08):
That's hard, right? We know how much people normally fear going into a conversation with someone that looks like the CEO, that sounds like the CEO, and having to challenge them on the requests that they're making: no, I won't do what you're asking, because it's not following the established process.
And this is already hard enough for, let's say, the regular employee.
Now, one piece that cannot be neglected for this to work is
(12:30):
actually the training for people in authority positions, the leadership: they will need to get used to, and accept, being challenged in situations like that.
Right, because it doesn't help at all if you go and try to train your entire team on challenging directors, vice
(12:51):
presidents, C-level people in the organization, and then, when they do that, they are berated because they are getting in front of the business, they're not letting them do what they need, et cetera.
So have you experienced that type of conversation as part of the training efforts that you've gone through after this exercise?
Speaker 1 (13:10):
You know what?
That's very interesting.
I never looked at it from that perspective.
I'm glad you brought that up.
So when we did our in-house training, our executive team was part of it, but I never addressed with them: hey, if we want to encourage people, when something doesn't seem right, to challenge you, are you willing to accept
(13:30):
that, agree with that, and actually commend it?
I did not address that.
So, thank you, that's something I will definitely look into.
I meet with my executive team a couple of times a year to give them a status update, and that's something I will definitely bring up.
Thank you.
Speaker 2 (13:48):
We are asking people to challenge you if you ask for non-standard things, so please don't come down on them if they do it, because that's what they're being trained for.
That's something we really need to think about when going through this type of effort.
Now let me go back to that point about the pen tests
(14:12):
and running them with scenarios internally, or even with some help from internal resources.
When I see pen tests, it's very often, as I said just a few minutes ago, the black-box scenario coming from outside, with people saying, try to get in.
And I believe that's a very big waste of resources.
You may be breached quickly by doing that in the beginning, but if you have any type of continuous improvement capability, you will start closing those initial access doors quite fast.
(14:53):
So the things that may be interesting from an external vulnerability point of view may be hard for the pen testers to find in the following tests, and then they're going to start resorting to things like phishing, for example, or putting USB drives around the building, or physical pen testing, et cetera.
(15:14):
But the fact is, the initial entry point is almost always guaranteed to succeed if they try hard enough, right?
We know that it's pretty hard to completely eliminate the initial access step of attacks.
So why do we keep trying to test that scenario over and over again, when we know that, of course, we
(15:36):
can lower the probability of it happening, but we know that it will happen?
And if we set the objective and the standard of the pen test to go through that, sometimes the pen testers will waste a lot of resources doing it, and they won't spend much time on the following steps, where you
(15:58):
initial access have beenobtained.
So I really like, when youmentioned your pen test, that
you provided the initial accessalready for them in a scenario,
because it really shows that youare testing layers of security
that are very often neglected aspart of these pen test
exercises.
I wanted you to tell me alittle more about how you see
(16:20):
that, and what type of variation you put into these pen test scenarios so you're able to test the different security controls that you have in place.
Speaker 1 (16:31):
Yeah, I totally agree with you, and that's why I take this stance, because it's just a matter of time before someone falls for a phishing link.
You know, a vulnerability that we're not aware of today may come up tomorrow and give access, and that's 100% why we look at it that way.
I do have to say, though, it's always good to have that cursory check on our perimeter, so we just make
(16:52):
sure we definitely do at least a cursory check.
And there are a couple of reasons why I do the more collaborative, internal-access testing.
We have our internal vulnerability scanner; absolutely, we see the highest vulnerabilities and we mitigate them according to the priority of the asset.
But even though our tool says they're high risk, does that mean they're exploitable?
(17:13):
And you don't know until you actually have someone try to exploit them.
Secondly, another thing that I like about doing this internal aspect is we test our response.
One of the things I look at is: are these guys going to trip my EDR?
Are they going to trip my SIEM?
So that's another big area where I want to make sure that,
(17:35):
yes, they are.
If we give them some access in there, they can start lighting up some of these alerts, which has us exercise our incident response plan.
So, with that in mind, there are two things that we do.
We let them put in what they call the drone: we connect it to our network and it's basically unfettered.
(17:57):
You know, we do have some physical controls on that.
We plug it in, and then we make sure we get the alerts on those, and then we'll go ahead and release some of these physical controls, basically our NAC.
And the second piece is we give them a user account, not a privileged user, so that they have an entrance into our environment, and then, to see what they can do,
(18:17):
we give this user account one of our virtual desktops.
I have found that that's a challenge, because a lot of the stuff they try to do gets tripped up by our EDR.
So they're actually battling our EDR while trying to run their attacks.
So that's why it's good to have them take some looks at that:
(18:38):
again, it exercises our EDR tripwires, but then they're spending time trying to beat a commercial EDR, and that's almost as challenging as trying to break in from the perimeter.
You're wasting resources.
So the nice thing about having that unfettered access from the physical endpoint is they're able to scan through things, and, again, they're using their vulnerability tools, but they're able to pick out vulnerabilities that they think
(19:00):
they can exploit, not just ones that are deemed high risk by our tool.
And that's where you see the bang for the buck, because our pen testers have been doing this for a while.
They do this, they get good at it, and the testers that we use are actually married with our incident response retainer vendor.
So there's some dialogue there about what they're seeing out in the
(19:21):
wild, what's being executed and what's being successful, and they apply those things.
So when we have them inside and they're trying their lateral movement, that's where we're testing our SIEM capabilities.
And the neat thing that happened in this last test is they actually lit up
(19:43):
our SIEM, and they lit up some other tools that we have.
So when I met with them the next morning, I said, hey, these are the alerts that we got, and I came up with my conclusions: this is what I think you did.
And I thought I had it, like, wow, this is great.
These are the alerts I mapped out, and these are the services and servers that I think you were on.
When they told
(20:06):
me what they did, there was some overlap, but it wasn't exact.
To me that was extremely eye-opening, because what that tells me is my SIEM and my tools may tell me that something's going on, but they may not tell me exactly what's going on.
So if we see something like that and it's out of my league, right away I'll execute my incident response
(20:27):
plan and get my IR team, the one I have on retainer, coming in.
But it's neat to see that the information you're getting from your tools may not be exactly what's happening.
Now, if we had gone through our plan and contained some of these tools and accounts, it probably would have slowed them down, absolutely.
But it wasn't an end-all be-all to see what was compromised,
(20:50):
again.
So, even if I see something like this again, I've got to bring in the experts to make sure, even if I contain it, that I really did contain it and really did eradicate it.
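The morning-after exercise described above, mapping the alerts the SIEM raised against what the testers say they actually did, amounts to a set comparison. A toy sketch in Python, with invented alert and action names rather than anything from the real test:

```python
# Toy sketch of alert-vs-action comparison after a collaborative pen test.
# All names below are invented for illustration.
actions_taken = {"password-spray", "lateral-move-srv01",
                 "lateral-move-srv02", "dump-credentials"}
alerts_mapped = {"password-spray", "lateral-move-srv01",
                 "scan-dmz"}  # one alert matched no real action

true_positives = actions_taken & alerts_mapped   # what the SIEM saw correctly
missed = actions_taken - alerts_mapped           # activity with no alert
coverage = len(true_positives) / len(actions_taken)

print(sorted(missed))   # ['dump-credentials', 'lateral-move-srv02']
print(coverage)         # 0.5
```

The `missed` set is the eye-opening part Bill describes: the tools say something is going on, but not everything that is going on.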
Speaker 2 (21:01):
Perfect. And there are some very valuable lessons in what you're describing.
I think the first is the value of investigation.
Normally we look at the flow like, okay, there is detection and then you go and respond; you do the containment actions, as you mentioned.
But, as you described, there were actions they took as part of their attack that you didn't see.
(21:22):
You saw some of those.
There was enough for you to get suspicious and eventually trigger a response scenario, a risk mitigation action.
But there were still things that happened that you didn't see.
So we see how important it is to have an investigation step after that initial detection, so you can really
(21:42):
understand the full characteristics, the full reach, of the attack that is going on.
And I believe that sometimes, when you look at how technology vendors present how their tools work, many times they paint a scenario where, from the detection point, you
(22:05):
have the full picture of what happened.
But this investigation step is very important, because there will be pieces that may have touched blind spots in your environment, or things that you haven't covered from a threat action point of view, or they were just not suspicious enough.
Even if you're using something like anomaly-based detection, it may be something that didn't
(22:27):
hit a threshold that would raise enough suspicion to be looked at.
So I think that really shows the importance of investigation, and many times it's a type of investigation that still requires humans.
We talk a lot about bringing AI to the defense side, and I think it can really help a lot.
(22:48):
Sometimes, and I think it's feasible, we may talk about eliminating the level-one type of analyst with AI capabilities.
But when you hit this point of investigation, when you're looking for things that you weren't able to detect initially, that's really where humans shine, and I think that's
(23:10):
where we're going to keep seeing humans being very different from AI for some time.
Speaker 1 (23:16):
It's neat.
The person I work with, he's all for looking at how AI will take care of that level one, and I think, if you look at it, AI is just a tool.
Okay, at least today, you can't totally rely on it.
It helps you, 100%.
But you're absolutely right, I think you need that human validation, that human inquisitive piece, like
(23:40):
that human gut feel, as I mentioned with the victim that we had in our deepfake.
It's like, something didn't feel right.
And I think sometimes, when you look at things and you piece them together, you may not have it quantitatively in front of you, but you have the gut feel that this isn't right; we need to do some more digging and I need to bring in some
(24:01):
help on that.
So I think that piece, the intuitive nature of the human, is important, at least from my perspective when I look at these things, and that's something that can't be replaced by any type of technology.
Speaker 2 (24:15):
Perfect.
And another thing that you mentioned about these pen test exercises coming from the inside was that battle between the pen tester and the EDR technologies.
And, you know, I think that happens a lot when they are very focused on replicating the behavior of typical malware,
(24:36):
because these EDR technologies are very good at detecting malicious software, right?
But I wonder what happens if the pen tester goes more towards using living-off-the-land techniques, or even looks more at the application layer, when they're trying to move laterally or obtain additional permissions or
(24:59):
additional privileges, getting access to information.
So instead of trying to run software on that endpoint, or trying to move laterally using scanners on the network side, all of which we know happens and we have a lot of instrumentation to detect, what would happen if the pen tester just started opening the business applications that they can see, either on the desktop or, many times, on the intranet,
(25:23):
and started trying to get into those systems?
What I used to do when I was a pen tester, in these internal scenarios, is I would start doing SQL injection on internal apps.
You see all these organizations taking a lot of time to secure their external-facing technologies and applications,
(25:44):
but their internal applications are really easy to break into.
I would just use typical SQL injection and get full access to a database full of proprietary, sensitive information, and that wouldn't require me to run a single piece of malware or a malicious tool on the endpoint.
So what do you think about this difference in the
(26:04):
behavior of the pen testing, where, instead of trying to replicate malware, they go more towards an attacker profile that looks at the business level, the application level, to try to accomplish their objectives?
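The internal SQL injection approach described above can be illustrated with a minimal, self-contained sketch. The table, data, and "search" feature are hypothetical stand-ins for an internal app that builds queries by string concatenation, not the actual systems discussed:

```python
import sqlite3

# Hypothetical internal app data -- invented for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, ssn TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [("Alice", "111-11-1111"), ("Bob", "222-22-2222")])

def search_vulnerable(name):
    # Builds SQL by string concatenation -- the classic injection flaw.
    query = "SELECT * FROM customers WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def search_parameterized(name):
    # A bound parameter treats the input as data, never as SQL.
    return conn.execute(
        "SELECT * FROM customers WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"                # typical injection payload
leaked = search_vulnerable(payload)    # returns every row in the table
safe = search_parameterized(payload)   # returns nothing: no such name

print(len(leaked), len(safe))  # 2 0
```

Note that nothing here runs on the endpoint that an EDR would flag; the "attack" is just a legitimate-looking query to an internal application, which is exactly the point being made.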
Speaker 1 (26:18):
You know, you're absolutely right.
If you just have a compromised user, and the attacker is just using their own hands on the keyboard, it's a lot harder to detect.
What our testers tried to do, though, is use their tools to make that job easier, and the tools they were using were getting flagged by the EDR.
I think the challenge there is time, right?
(26:40):
If I was able to give this pen tester a month of time, sure, he could go ahead and do that hands-on-keyboard testing.
But since you don't have that much time, they try to use their tools.
So when we do give them the account, there are some apps that we point them to look at, and they
(27:01):
do try to take those cursory looks at those apps.
So you're right, it's important to actually test some of those.
The only problem, the challenge, is the time and the resources that you have, because we only have them for a limited time.
Yeah, in a perfect world, 100%, it's almost like another level of application testing.
(27:23):
That's right.
Another level of testing the roles in our database, for example.
If you have the time to do that, 100%, that's definitely the way to go.
Speaker 2 (27:35):
Yeah, and to be fair, we also want to replicate the threats that are most likely to be present, or to happen, in our environment, and we know that we are more likely to face the typical malware-based attack than someone who will spend time behind the keyboard trying to break into internal applications.
(27:57):
Of course, that may be a more extreme scenario, but sometimes I just wonder why pen testers keep using the standard toolbox of malware-like tools that will be so easily detected by a well-deployed EDR, and we end up just
(28:18):
making sure that we detect things that we are already prepared to detect and respond to.
And I think that also enables the adoption of solutions like breach and attack simulation, because many of these solutions end up replicating this type of pen tester behavior.
And then you have something that, if you instrument your environment with it, you can run more frequently and in a very consistent manner, right?
(28:40):
your environment is workingproperly.
Now let me change gears a bithere and, as I told you, right
kind of when we are chatting.
Initially I was going rightkind of to your profile and I
noticed a three-letter agencythere, right Kind of in your as
kind of some of your previousjobs, and that's always brings a
(29:01):
lot of curiosity right.
So we look and say, oh, nsahere, that's cool.
So tell me a bit about workingfor a three-letter agency,
especially kind of the very,always very suspicious NSA kind
of.
How does it look like?
What is the job?
How does the job feel?
What are the typical challengesthat you have in an environment
like that?
Speaker 1 (29:23):
I'm retired Air Force, and I've got to say my career in the Air Force was absolutely rewarding.
The people I worked with were top notch, and having that mission-oriented culture was great.
I'm just so thankful I was able to culminate it by working for the NSA.
It is by far the most challenging and rewarding job
(29:45):
that I did in my career.
The NSA has two pieces: you have your offensive side and then your defensive side.
Well, for my first assignment there, I actually worked on the defensive side.
At the time we were looking at deploying potential satellite technologies that were basically using the networking protocols
(30:05):
that we have here, basically, on the earth.
So they were looking at, well, how are we going to secure these, basically, routers in the sky?
So I got a chance to work on that.
You work on some of the requirements and then work with the contractors, identifying some of the attacks that happen on a network on the earth.
Well, those types of attacks can happen out on a
(30:26):
satellite system too.
So you need the same type of controls, and, you know, I'm dating myself, so these were basically firewalls and what have you.
And then I had the opportunity to work on the offensive side, and this was during the Global War on Terrorism.
You know, just like any spy agency, you have these assets,
(30:46):
these accesses, that you glean intelligence, glean information, from.
My job at the time, during the Global War on Terrorism, was that sometimes we needed to give this information to the boots on the ground to take action.
So I had to do what was called an intelligence gain-loss
(31:06):
assessment.
Whenever you're gaining intelligence through some sort of access and you go ahead and do an operation, a lot of times you're going to give up that intelligence asset.
It's basically out in the open now, so you can't use it anymore.
So it was my job to quantify how important the intelligence coming from it was, and then the commanders, the forces
(31:30):
on the ground, would decide: does this action warrant losing that intelligence?
So that's basically called an intel gain-loss assessment.
Sometimes they decided, you know what, that information is too important for us, we're going to hold off on this.
Other times it was, nope, this mission is too important, we're going to go ahead and press.
And sometimes lives were on the line with this
(31:52):
information, so that was a no-brainer.
It's like, no matter what this intel was, we're going to go ahead and try to use it when people's lives are in jeopardy.
Unfortunately, I can't go into any more detail on that, but I do have to say that it's neat, because it's like an attacker, if
(32:13):
you relate it to what I'm doing now.
If somebody has access to a network, are they going to sit tight and wait, or are they going to take a little bit more risk, see what more they can do, and run the risk of getting caught?
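The gain-loss trade-off described above can be framed as a toy decision function. The rule and parameters below are invented for illustration; real assessments are qualitative, classified, and far more nuanced:

```python
# Toy framing of an intelligence gain-loss assessment.
# The scoring scale and decision rule are hypothetical.
def act_on_intel(mission_value: int, lives_at_risk: bool, source_value: int) -> bool:
    """Return True if acting is worth burning the intelligence source."""
    if lives_at_risk:
        # As noted in the episode: lives in jeopardy is a no-brainer.
        return True
    # Otherwise, act only when the mission outweighs the source you give up.
    return mission_value > source_value

print(act_on_intel(mission_value=3, lives_at_risk=False, source_value=8))  # False
print(act_on_intel(mission_value=3, lives_at_risk=True,  source_value=8))  # True
```

The same shape of reasoning applies to the attacker analogy Bill draws: acting on an access trades future value of that access for immediate gain.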
Speaker 2 (32:30):
So it's similar to that aspect, and it still applies today.
Yes, it is an interesting trade-off assessment, right? I remember reading the memoirs related to the Bletchley Park operation with Alan Turing and so on. They had a similar situation: when they started breaking the Enigma codes, they could see the positions of the
(32:52):
German U-boats moving to go after the convoys, the ships in the North Atlantic, and they had to think, OK, are we going to save these ships and make it clear that we are listening to their communications, or should we let them be attacked so we can use this advantage in a more strategic situation?
(33:15):
So it is a very hard trade-off to assess, especially because, as you said, many times it involves lives. I can't imagine how hard it is to make that type of assessment.
Speaker 1 (33:28):
Yep, and you're exactly right. The same thing we were doing back in the 1940s is the same thing we're doing today. Exactly.
Speaker 2 (33:38):
And Bill, let me ask you what's probably my favorite question for the podcast, one I've been asking everyone since we began: what do you think that we, as the cybersecurity community and industry, are doing right?
Speaker 1 (33:57):
We're not resting on our laurels. We realize the threat is constantly improving, it's dynamic, they're finding different vectors to get in, and we recognize that. So we always have to keep working to get better. And we do that by realizing that our technologies can't keep
(34:17):
up with the bad guys. So we have to make sure that our incident response plans are thought through, actionable, and practiced, so that when it does happen, we're actually able to contain the threat.
If I look at an attack coming in and I'm able to contain it and not have it impact my business operations tremendously, to me
(34:40):
that's just as good as preventing an attack, because my business worked fine, you know, it did not cost my company any money, and I did not get any data exposure. So that's a win to me. That's not an exposure, that's not a problem, it's just something you did to combat the threat. You know, granted, it'd be nice if you didn't have to do it in
(35:00):
the first place. Sure, but again, to my mind it's still a success if you are able to contain it, able to recover, and able to move on without really impacting the business.
Speaker 2 (35:13):
Great, yeah, that's true, right? I think we wouldn't be able to rest on our laurels anyway, because in a couple of years there wouldn't be any laurels left. Everything would be breached very fast. We are forced to evolve. I think it is indeed a good thing, in terms of what we do, that we keep evolving.
(35:33):
We keep up with the threats.
I also remember a quote from Marcus Ranum. He was asked how unsafe the internet would be, right, and the point was that it'll be as insecure as we can afford it to be. We keep doing our work in a way that we can live with the risk
(35:56):
that is out there. Of course, we won't do much more than that, because it's quite expensive, and as much as we try to avoid it, we get in the way of people doing business, doing their thing. So we try to avoid disruption from our side as well. But I like that point he makes: we're going to keep being as insecure as we can afford to be.
(36:19):
And before we close here, I want to bring up something you were discussing and bring some technology into the conversation. First, because, for the record, my employer is a SIEM provider. And you mentioned
(36:40):
seeing some of that activity from the pen tests on the SIEM, and all the work on disrupting the pen test activity done by the EDR, and many times I end up seeing questions about why to keep a SIEM, or why to have a SIEM at all, if the EDR is doing such a great job. So let me ask you that then: why do you have a SIEM instead of just relying on that EDR that is doing a good
(37:04):
job?
Speaker 1 (37:06):
The EDR is on that endpoint, and granted, it does a great job on that endpoint. But I do a lot of cloud services, so I have my identity, and my identity isn't really covered by that EDR. So I need those things coming in also. You know, we do have network traffic,
(37:26):
basically firewall logs, and there's some stuff there I need as well. And then I have my other systems that feed into that, you know, my VPN and other things. And the thing about the SIEM is, when I'm trying to track those pen testers, it'd be hard for me to go to my EDR,
(37:49):
go to my VPN, check my firewall logs, and then check my, you know, ID provider logs. That's a lot, whereas I can rely on my one pane of glass to see all the logs that come in there.
Secondly, since you have these disparate sources,
(38:09):
you need to be able to put them all together and pick up those anomalies. And that's where, yeah, you need to correlate things from my identity side, from my network side, from my EDR side, put those together, and marry them with things like email logs, for example. My email is a good input into that, and I think because of
(38:30):
that correlation, because of those behavior analytics, you see, hey, this normally doesn't happen, and it picks that up. That's where you have the value of the SIEM, and honestly, I would be very hesitant to work for any organization that does not have that type of technology that brings in these disparate sources to give me my single
(38:51):
pane of glass, something that I look to not only for my threat hunting but also for my response. And granted, you know, I might get triggered from the SIEM and I might have to go to my EDR and look at some more specifics of that, and that's okay. But still, having the correlation of all those disparate sources, I think, is invaluable.
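The correlation idea Bill describes can be sketched in a few lines. The log records, field names, and detection rules below are entirely hypothetical, just to show how joining identity, VPN, and EDR events on a shared key can surface an anomaly that no single source would flag on its own:

```python
# Minimal sketch of SIEM-style correlation across disparate sources.
# All events, fields, and thresholds are hypothetical illustrations.
from collections import defaultdict

idp_events = [{"user": "alice", "event": "login", "country": "RO"}]
vpn_events = [{"user": "alice", "event": "connect", "country": "US"}]
edr_events = [{"user": "alice", "event": "new_process", "name": "mimikatz.exe"}]

# Index every event stream by user, the shared correlation key.
by_user = defaultdict(list)
for source, events in [("idp", idp_events), ("vpn", vpn_events), ("edr", edr_events)]:
    for ev in events:
        by_user[ev["user"]].append((source, ev))

alerts = []
for user, events in by_user.items():
    countries = {ev["country"] for _, ev in events if "country" in ev}
    # One user appearing from two countries at once is an anomaly that
    # neither the identity provider nor the VPN would flag in isolation.
    if len(countries) > 1:
        alerts.append(f"{user}: activity from multiple countries {sorted(countries)}")
    # A suspicious endpoint process alongside remote access escalates severity.
    if any(ev.get("name") == "mimikatz.exe" for _, ev in events):
        alerts.append(f"{user}: suspicious process alongside remote access")

for a in alerts:
    print(a)
```

A real SIEM does this at scale with normalized schemas and behavioral baselines, but the core move is the same: pivot all sources onto a common entity and look at the combination.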
Speaker 2 (39:12):
Yeah, that's right. Having a point where you can have this unified view is really crucial. We know that, because of how fast some of the other detection and response capabilities grow, we're always playing catch-up in trying to unify everything in the same place, right? You get to the point where you're about to have everything
(39:33):
there, and then you have to acquire or buy something that's not fully integrated into the SIEM. Right, oh, I'm buying a new cloud threat detection technology, and that's not fully integrated into the SIEM, so now you have this side console and you need to do a few things. But normally, I think, these days it's very hard to find a
(39:54):
technology that can't at least send alerts to the SIEM, so you can start the decision-making process of, should I go and look further, should I do something about it, from that central point. And I think that function is really very important. Some people come and say, oh, it's the most important piece of the architecture.
(40:15):
No, I like to say it's a foundational piece, a base for everything else that you have. Of course, I may be a little biased because of who I work for, but I believe it's going to be around and still going to have a very strong role in a security architecture for a long time, I
(40:37):
agree.
Okay, we are about at our time limit here, so I want to thank you for coming to the podcast. It was a great conversation. I really liked going in-depth into those pen test scenarios: how we define which scenarios are going to run, where the pen test will happen, all those points about using deepfakes,
(41:01):
the outcomes, and the follow-up actions related to training, training the employees, training the leaders. We ended up having a very interesting conversation, so I'd like to thank you for coming and going through that.
Speaker 1 (41:16):
Well, thank you, this was very enjoyable. I really enjoyed this dialogue, and you know, it's nice when you're able to interact in something like this and take a piece back. From this conversation, I'm going to take back the point of making sure my executive team is aware that, hey, it's okay to be challenged, to make sure it is actually you who's directing that action.
(41:36):
So, thank you.
Speaker 2 (41:39):
All right, great. So thanks, everyone, for listening, and see you in the next episode.
Bye.