Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_02 (00:04):
You're listening to The Audit, presented by IT Audit Labs.
I'm your co-host and producer, Joshua Schmidt. We have the usual suspects: Eric Brown, our managing director, and Nick Mellum, coming to you from an undisclosed location today. I'm in a different place than I usually am.
And today our guest is Lesley Carhart. She's the technical director of incident
(00:24):
cybersecurity company Dragos.
SPEAKER_01 (00:25):
It's my absolute pleasure to be here.
Thank you for having me.
SPEAKER_02 (00:29):
Yeah, maybe you could start off by telling us a little bit about yourself and Dragos, and what you've been working on, and we can go from there.
SPEAKER_01 (00:36):
Sure.
I have a very strange job. Let's start with that. Um, I am one of maybe under 100 people on Earth who responds to hacking of computers that don't look like computers. So industrial stuff, things like trains and ships and power plants and manufacturing facilities, cranes, things like
(00:59):
that.
And I've been doing that for quite a long time, almost 20 years now. Um, I've been doing incident response in that space, and digital forensics. Um, and specifically, you know, response to places where somebody might have died, somebody might have been injured because of this equipment that is now digitally connected.
(01:19):
So we're talking about physical devices, real-life things that are connected to computers. And those computers don't necessarily look like the computers that you have on your desk.
SPEAKER_02 (01:30):
Are we talking like Internet of Things, or are we talking about... No, this is pre-Internet of Things stuff.
SPEAKER_01 (01:36):
The Internet of Things has a place in there. There is industrial Internet of Things stuff, absolutely. Um, but this is like PLCs, RTUs. Sometimes you hear about the overarching control systems that manage those devices, things like SCADA. Um, and they are running everything in society, and they have been for a very, very long time, long before there were
(01:58):
smart things.
Like, um, there's been ICS for a long time. And ICS was mechanical at first. It was things like gears and pulleys that handled timing for industrial processes. And then it became electronic in the 20th century. So, like, the systems in power plants and things were switched
(02:18):
with wires, and then transistors, and then eventually they became computers.
SPEAKER_00 (02:24):
So Leslie, we do have some customers in the same space, where they have critical infrastructure and they're working with devices that control transportation or wastewater, things like that. Um, curious, in your work looking into maybe
(02:46):
attack surfaces, or where threat actors may have gotten into those environments: we talk about those pieces of critical infrastructure being on separate networks, or completely offline and not connected to anything, but inevitably
(03:07):
something is connected somewhere, or there is some way to bridge that gap. Just curious, in your work, what have you seen as far as how the threat actors are bridging that gap?
SPEAKER_01 (03:21):
The gap is mostly a lie today. In the last 10, 15 years, there has been such a need for connectivity to those environments. So think about, like, just-in-time manufacturing, and think about trains. Everything, everything has telemetry now. Everything is computer-controlled at very fast speeds, sometimes across multiple facilities.
(03:42):
There's trucks involved, there's shipping involved. And when you're talking about things like utilities, like power and water, a lot of the people who used to do that have now retired or been laid off, to be replaced by computers. So that means that their jobs that involve dispatch and going to places have also gone away, and they've been replaced by
(04:03):
centralized staff.
So that means everything has to be connected now. I see maybe two air-gapped, really air-gapped environments a year, and I do this full-time. And they're in things like defense and nuclear. Everything's connected now, and it's increasingly connected. We're talking about environments that have not just one way through from the enterprise environment, but tons of, like,
(04:26):
external VPN concentrators, modems, things like TeamViewer installed by vendors. The vendors want to be able to remotely access things too. And that's really hard to get a handle on when we're talking about sometimes remote facilities, things without proper, you know, modern security architecture, no ability to install modern monitoring.
(04:47):
Um, and then you're dealing with increasingly connected devices. And a lot of those devices don't look like modern Windows 11 computers. They are really archaic things. I see Windows NT, Windows 95 on a routine basis, um, as well as Linux from that era. And then there's those very, very abnormal PLCs and things
(05:09):
that are running custom firmware.
SPEAKER_00 (05:10):
Why do you think it's common practice? I won't say acceptable, but I'll say common practice, for some of those devices to run those legacy operating systems and not be upgraded to a modern OS?
SPEAKER_01 (05:28):
There are very clear reasons, actually, and they're very consistent across multiple industries. When you go out and you go to a vendor and say, I would like to buy a crane or a power plant, what you get is a ton of different devices, a system of systems that works together. So you get the crane, and then you get the crane's motor and
(05:50):
control systems, and that includes things like PLCs, but then the crane also has to have computers that display its status and can control it. And some of those are remote, and then there's the network infrastructure that connects to those devices, and maybe the servers that update those systems and do things like domain services for them. So you get a whole network with your crane.
(06:12):
Now, if something goes wrong with those systems, say your control systems or your safety monitoring, you have to shut down at a bare minimum. It's a really big deal. In a worst-case scenario, you're getting people injured or dead. So the systems, the systems of systems, as I call them, in that process are tested for, like, a year by the vendor in safe
(06:35):
environments before they're put anywhere. They're duplicates of one another; they are structured and engineered to super low latency, and to function in a very, very specific, safe way. So if you want to upgrade those systems, you don't just swap out one of the Windows computers. Um, that is something that is tested, vetted, warranted by the
(06:58):
vendor.
And when you want to bring down, let's say, the power plant to do that, you have to switch power generation over to another facility for a period of time and shut down your power plant, and you do that maybe once a year. So you have that consideration of not wanting to make the system have greater latency or instability, or function in an
(07:19):
unexpected way.
And you rarely have outages where you can even do that maintenance. So these systems are expected to function much longer than the computer on your desk will. They are expected to have, like, ten-year or even longer life cycles for the Windows computers included with them, because they were tested for two or three years before they even got sold.
SPEAKER_03 (07:40):
I'm really curious to hear about, you know, some of the memorable cases, maybe, that you've worked on.
SPEAKER_01 (07:48):
Well, I'm an incident responder, and I do consulting incident response, and so I'm under very, very strict NDAs. I can tell you generalities; I can't tell you about specific cases. In terms of generalities, I can kind of divide my cases into three categories. The first one being commodity stuff.
Ransomware impacts these environments too, increasingly
(08:08):
so, because people have realized that this is a target-rich environment. But when it goes down, people really, really notice. So they've been targeting industrial verticals more often. And the ransomware doesn't necessarily, again, impact, like, those low-level PLCs running custom firmware, but it definitely takes out all the systems that let you control the
(08:30):
crane, or make sure your oil platform is safe to be on. And that's a really, really big deal operationally, too. It doesn't matter that it's not the PLCs.
SPEAKER_00 (08:40):
The adversaries are getting better, the systems are getting more connected, but we're still in that legacy mindset of the old operating system. And, you know, it's complicated to update, because of so much testing that
(09:04):
has to go into it because of the life safety aspects. And it's likely not going to get any better, from the adversary standpoint, as things become more connected.
Do you see a solution on the horizon that would maybe reduce the testing time, or force the manufacturers to not be able to
(09:29):
run on legacy software? Like, what are your thoughts there? Because otherwise it's just gonna get out of hand, if it's not already.
SPEAKER_01 (09:36):
There's a lot of different elements to that question. So the first one is legislation. Legislation is going into place in these environments, especially things that we would consider critical to our society globally. So not just in the United States: in Canada, in Australia, in Europe, in the UK, things like that. There's been, um, a push towards saying, well, you do have to
(09:57):
have an incident response plan, and you do have to have some basic cybersecurity controls. You do have to have some kind of detection appliances, and some types of segmentation, and some plan for updating systems that you can update. Um, it's hard. It's a balancing act.
Uh, and organizations, they watch the news too.
(10:19):
They know that there are threats out there. They know that there are increasing cyber attacks that have made a dent in these organizations, and they don't want that to happen. From a risk perspective, they're concerned about the same things they've always been concerned about: the process going down, people dying, their business not functioning like they expect, being able to produce things like they expect.
(10:40):
So cybersecurity is just one element of that, and they realize that, but they are balancing it out with other risks, and also the practicality of being able to do things that impact their process, like update systems, install monitoring, things like that, and the expense of that. So what you've got to do is, you've got to shift your
(11:01):
thinking here.
Um, as cybersecurity people, we're just concerned about, like, hackery stuff: is the domain controller compromised? Is there an infection? Is there an intruder on the network? In these cases, your priority is life safety and the process. You can have a network that is infected on every computer with five pieces of malware, and if it's not impacting the process,
(11:22):
it would be more impactful, worse for the process, for you to shut it down to fix things.
SPEAKER_02 (11:28):
So, as we're thinking about our infrastructure and vulnerabilities, I was wondering if maybe Leslie, and you, Eric, could shed some light on this question I have about, like, how vulnerable are our public infrastructures, or just infrastructure in general, really. Like, most people like me don't think about it every day. Obviously, we just kind of count on the power going on, the
(11:48):
water working. Um, how often are we seeing threat actors attacking these attack surfaces? And what's your take on that?
SPEAKER_01 (11:57):
It's increasing. All those three types of cases that I talked about, the insiders and the commodity actors and the state actors, have all realized that this is a really good target for their various purposes. Um, because the cases make the news. They're very noticeable when an industrial process is impacted by some type of cyber thing.
(12:19):
And that's a hugely noteworthy thing. We've seen it used as a tool of war in Ukraine multiple times, and we've seen it impact operations of major worldwide manufacturers. And so for all those purposes, whether it's money-making, sabotage, espionage, or just, you know, getting back at an
(12:42):
employer you're pissed off at: they're all viable cases, and so people are doing it more.
And, um, in a lot of ways, for, like, the sabotage and espionage, and even making money, it's a more efficient way to do that. And these are not usually, like, bored teenagers; to have the capability to conduct these types of
(13:04):
attacks, and the knowledge you need to do it, is not easy to come by.
Usually you're a pretty well-resourced organization or team of people, and you have an objective and a mission. You're a business, and you're trying to do things as quickly and efficiently as possible. And if you have an environment that's relatively exposed, and it's running XP Embedded, and it's mostly a flat network, like the
(13:27):
2010s, and doesn't have EDR on it, and it's also doing really important things, it becomes a very viable remote target for all the things that I mentioned.
SPEAKER_00 (13:38):
I think one of the things that, you know, in corporate America we're used to, is the structure of corporate America, where, you know, we have meetings, and we have people that we report to, and we have reviews to give, and, you know, all of those other sorts of water cooler
(13:58):
conversations, and all of those sorts of things. Um, and the amount of time that we're dedicating to either threat hunting, or looking in the environment for things that we might want to improve, or, you know, what have you, is kind of all part of our job in information security.
(14:19):
When we have those pointed threat actors: they're very disciplined, and certainly if they're nation-state, well funded, then they're not gonna have the same constraints that we have, where, oh, I want to reboot this system, you know, oh, I've got to go to CAB, gotta
(14:40):
get it approved, gotta wait two weeks, you know, all of the sort of nonsense that we deal with in corporate America. Just think about operating if you didn't have any of that. You didn't have to attend any meetings that wasted your time. You could just sit and focus on one objective: how efficient you
(15:02):
would be.
So then you take the side where we work eight hours a day, nine hours a day, roughly, and we have all of that nonsense that we have to deal with. And then you have the adversary that doesn't have a lot of the nonsense and isn't constrained to eight hours a day, and you
(15:23):
can just see it's grossly unmatched. And it's really hard for corporate America to kind of wrap its mind around: well, you know, there's this adversary that we can't see, we don't know who they are, we don't know how they act. Um, they're going after us, and sure, we can see the IOCs come through on the tools, but we don't really know what that
(15:45):
means.
And there's very few people in the organization that actually know what it means, and those people are trying to defend the organization as well as do all of the other corporate stuff. It gets really hard. And then you can compound that with, if they really are going
(16:06):
after that organization, then through social media they could identify a person that they directly want to target, either through a cyber attack or a bag of money. Getting an insider to help, either knowingly or unknowingly, attack that
(16:29):
organization, that's really difficult to defend against. So, um, I think to answer your question, Josh, it's really difficult on the corporate side when you have a motivated adversary.
SPEAKER_03 (16:42):
And then you add in the fact that these threat actor organizations have, like, HR departments now, and all these resources, unlimited funding. You know, basically, we're way behind the eight ball on that.
SPEAKER_02 (16:59):
So Leslie, is that kind of where you're the tip of the spear, and is that what you do at Dragos? Is that kind of your role, to head up those campaigns and be a little more proactive, so you're not stuck in a boardroom meeting, and you're, like, actively hunting these kinds of threat actors down? Or is it more of a response type of stance?
SPEAKER_01 (17:20):
I personally, not Dragos as the organization, but I personally am much more on the reactive side of things. Catastrophic things have happened if you see me. I joke to my SANS class students, like, um, I understand if you never want to see me again, because it's very nice to get to know people in the community, but if I'm there, it's been a very, very, very bad day.
(17:42):
I do do some, like, tabletop exercises and incident response planning services, and I of course teach. Um, but my work is very reactive. We have people who do proactive services, like OT, or operational technology, process-specific pen testing, architecture assessments, planning, audits, things like that.
(18:02):
And that's very specific to the nuances of these industrial environments. And yeah, I mean, these are challenging environments to secure, for a lot of reasons. All the reasons I talked about, like the things you can't touch, the things you can't alter: you have to get much more creative in doing the proactive security in these environments. Again, you cannot come in and just, like, update everything.
(18:23):
And, uh, just to give you a sample, and I'm not trying to pitch something, but it's a good way to understand these types of environments, if you're coming into them as a cybersecurity person and you're responsible for one: there's a white paper that came out through SANS a number of years ago, called The Five Critical Controls for Industrial Cybersecurity.
And to understand the level that most organizations are at
(18:48):
here, it breaks down, for, like, leaders in that space, five things you should start doing in these environments. And they are: defensible architecture, like not having a flat network and having some segmentation; having some kind of incident response plan for these environments; having some control over your remote access, and knowing what
(19:09):
it's doing and where it's going; having some network monitoring, like something, like at least passive, span-port-based network monitoring; and finally, having some knowledge of what you can and can't update, from a vulnerability management perspective.
So they produced a whole white paper, and it's, like, pivotal in
(19:30):
the industry right now. And it's like, this is where people are at. It's not like, oh, we're gonna do AI, ML, next-gen, like, whatever.
It's like, hey, maybe, maybe we should start segmenting these environments. That'd be real good, because adversaries are very aware of them now.
Um, but that's, like, we're meeting people where they're at. Like, they don't even have network maps or asset inventories
(19:52):
in a lot of these environments.
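The network maps and asset inventories she says are missing can start from the passive span-port monitoring mentioned above. As a toy sketch only (the IP addresses, port numbers, and record shapes here are invented for illustration; real OT monitoring products parse industrial protocols in far more depth), a first-pass inventory can be folded out of passively observed flows:

```python
from collections import defaultdict

def build_inventory(observed_flows):
    # Fold passively observed (src_ip, dst_ip, dst_port) flows into a
    # per-host view of who talks to whom on which ports.  Nothing is
    # probed; every record comes from watching traffic on a span port.
    inventory = defaultdict(set)
    for src, dst, port in observed_flows:
        inventory[src].add(("talks_to", dst, port))
        inventory[dst].add(("listens_on", port))
    return inventory

# Invented example flows, as they might appear on an OT network span port.
flows = [
    ("10.0.0.5", "10.0.0.20", 502),    # Modbus/TCP toward a PLC
    ("10.0.0.5", "10.0.0.21", 502),
    ("10.0.0.9", "10.0.0.20", 44818),  # EtherNet/IP
]
inventory = build_inventory(flows)
print(sorted(inventory["10.0.0.20"]))  # [('listens_on', 502), ('listens_on', 44818)]
```

The point of the passive approach is exactly what she describes: in environments where you cannot touch or scan the devices, watching traffic is often the only safe way to learn what exists.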
So, um, you're not gonna tell them to do whiz-bang 2025 security measures. You're gonna tell them to do foundations and fundamentals. And yeah, that makes the job challenging for the proactive people. Um, you have to get real creative, and you have to think about a lot of the devices and protocols in these environments staying inherently vulnerable.
(20:14):
There's no way they're ever going to not be vulnerable. Industrial protocols aren't encrypted, by design, because, again, if you hit the big red button on the wall to stop the process because somebody's arm is getting chopped off, you don't need, like, extra protocol latency in that, or, like, a potential point of encryption failure. Like, the button needs to work. So, um, yeah, you don't add a lot of security controls to a
(20:36):
lot of the lower-level devices, so they have to be vulnerable. So that means you have to think about creative solutions to keep those bastions very monitored and very secure.
SPEAKER_02 (20:47):
That's really cool. Um, we also, you know, talked a little bit, when we were getting to know each other, about the AI panic versus the reality, and how this kind of dovetails into the new threat surfaces that are appearing. And, um, everyone's either terrified of AI, or is thinking it's magic, at the current state of things,
(21:07):
Um, do you actually use it in your work, or how are you seeing that manifest in your day-to-day job?
SPEAKER_01 (21:13):
I'm more on the AI-skeptic side of things. I'll tell you why. I worked on AI in the 90s. Um, I've been working with AI for a very long time, and I understand very, very well how it works. And it's of course frustrating, to a person who spends a lot of time understanding how the world works and how things impact our society, to run into a technology that people grossly
(21:36):
misunderstand in very dangerous ways.
Um, in terms of using AI for my work, uh, we've been using machine learning in detection and forensic analysis for ages. For well over a decade, we've been using it to analyze logs. What things like LLMs are good at is looking at big data sets and pulling out means and averages, getting the
(21:58):
most consistent, common answers to questions. And of course, that's something we do in cybersecurity, looking at massive forensic data sets and logs over years, things like that. And we've been using machine learning for those purposes for a long time, and it's gotten better at what it does, and it eliminates some tasks that used to have to be done manually by analysts.
(22:18):
And that's great, um, if you understand what it does and doesn't do. And what it's doing is taking big data sets and finding the most common thing. And, um, from a misunderstanding perspective: a lot of people think it's conscious, a lot of people think it thinks, a lot of people think that it's creative, and it's none of those things. It takes a big set of data and it pulls out the most common
(22:39):
answer.
Um, it is a screwdriver. You use it for the purpose it's good for, and if you're using it for other things, you're probably gonna jam a screwdriver into a light socket.
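Her "most common answer" framing can be made concrete with a deliberately tiny toy model (this is an illustration of the counting intuition only, not how production LLMs work; real models learn statistical patterns in neural networks rather than literal frequency tables, and the corpus here is invented):

```python
from collections import Counter, defaultdict

def train(corpus):
    # For each word, count which word follows it across the whole corpus.
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            follows[current][nxt] += 1
    return follows

def most_common_next(follows, word):
    # "Pull out the most common answer": the continuation seen most often,
    # or None if the word never appeared in training.
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = [
    "the pump is running",
    "the pump is running normally",
    "the pump is stopped",
]
model = train(corpus)
print(most_common_next(model, "is"))  # "running" (seen twice vs. "stopped" once)
```

The sketch also shows the failure mode she warns about: the model returns whatever was most common in its data, whether or not that answer is true.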
Um, but anyway, in terms of what else I see it doing in my job: I see it doing a lot of nefarious, horrible things in the space that I'm in. Because adversaries know what it does.
(23:01):
They know that it's good at taking a big set of data and finding the most common answer. And what are some common answers that somebody might want to look for when they're doing, like, sabotage of industrial facilities?
Well, um, if you were a, I'm gonna use it, I'm gonna use the term, take a drink, APT: if you were an APT, like, 10 years ago, and you wanted to, like, say, poison the water. So you wanted to break into a water treatment plant, and you wanted to increase chemical levels or decrease chemical levels, alter them so that you could kill somebody, um, you'd have to go out and find a chemical engineer. You'd have to blackmail one, extort one, something like that, or you'd have to do a great deal of research as a team, but
(23:42):
usually it involved finding some way to get subject matter expertise, and maybe a lab to test your theories.
Now that's still the case.
Like, the really well-resourced states still have, like, chemical engineers, electrical engineers, people who are specialists on target industrial systems, who they gain because they're nationals of their country, or they're extorting them, et cetera, et
(24:04):
cetera.
But now you can use an LLM, and you can say: hey, if I had this model of power plant, or this water treatment plant, if I had this municipal facility, and I wanted to get into the water treatment systems, and I wanted to change the level of, say, chlorine in the water to kill the most people possible, what
(24:25):
would that level be? And it's gonna go across all that snarfed-up internet data, and it's gonna say, well, the most common answer to that is XYZ. It's going to extrapolate that.
And it might be wrong, it might be hallucinated. It might be right. But that might-be-right possibility is real bad there, and you didn't have to call a chemical
(24:45):
engineer.
And then you need some custom logic for that device to do that, once you get onto it. And that's, like, weird ladder logic stuff that doesn't look like normal programming. But you can go to the LLM and you can say: hey, can you write me some logic to increase the chlorine levels on an XYZ model that I see in this target environment, to
(25:07):
increase the chlorine to this value? And yeah, it'll write you some ladder logic. It'll go through Stack Overflow and years of posts about industrial stuff, and it'll do it fast.
And again, it might be wrong, might be right. And so that's an alarming development, on top of the normal, like, LLMs for phishing, LLMs for polymorphic malware, that
(25:28):
we're seeing in the rest of IT. So it's mostly a bad thing in my world. Um, it's either something that's being woefully misunderstood for defense, or it's being used in really gnarly ways by adversaries.
SPEAKER_02 (25:40):
I've seen that recently pop up: that there is, like, an online cult now that thinks that AI is, like, some kind of a god, and it's actually creating kind of a movement of people that are now congregating around this idea, that they're spending time talking to these LLMs, and it's... Where are you on the internet, Josh, that you saw this?
(26:01):
Well, I'm going deep, man.
SPEAKER_03 (26:02):
I believe it, though. I have not heard that before.
SPEAKER_02 (26:06):
Yeah, there's actually, like... I can't remember what it's called, but it's an AI religion.
SPEAKER_01 (26:10):
So people are falling in love with them. Like, people are getting married to them, so it's not very hard to see.
SPEAKER_00 (26:16):
Yeah. We're gonna need another tinfoil hat episode.
SPEAKER_03 (26:21):
I'm getting worried we're going down that route, but Josh, you gotta write. Sounds like, Leslie, that's not a surprise to you. No.
SPEAKER_01 (26:28):
Oh, I worked on, I worked on primitive chat bots in, like, the Eliza, AIML days, the very, very early days. And I was a vulnerable teenager back then. You know, I'm old now, I'm very old. It dates me a little bit. But, um, even then, when I had to code everything myself and I
(26:49):
understood the holistic engineering of how that bot was working, it was still tempting to try to make friends with it. It's like, oh, I can meet a person, like, and I can talk to it. And, uh, yeah, emotionally, that's very, very tempting to get attached to as a human being. And I cannot even imagine being a vulnerable teenager
(27:10):
today, and working with LLMs that can appear very human through emulation.
The Turing test was such a bad idea, because the smart people who, you know, created the concept, the common understanding of the Turing test, um, never anticipated people being happy with emulation as opposed to
(27:32):
consciousness.
Um, and the Turing test is supposed to test for consciousness, but what it effectively also mistakes for consciousness is just very, very good emulation of being a human being, which is what you get from exactly what I described: taking a large data set of what every human being has responded to every question with, and giving the most common answer, the most
(27:54):
typical answer.
SPEAKER_02 (27:55):
Eric, have you seen this AI stuff pop up in your, like, defense posture? Have you seen, like, any kind of unique attacks come through? I mean, we just saw that one in St. Paul. I'm not sure what the update is on that. Um, but I'm curious to know, like, what your experience has been in the last couple years on that.
SPEAKER_00 (28:12):
We saw a neat one, or we were talking about a neat one last week, where it was a side-channel attack through a vendor that was using AI to write prompts on the end user's device, to gather information and then encrypt it and send it back. So that one was pretty interesting. But the
(28:34):
compare and contrast, to me, is just really stark and drastic.
Where earlier today, I was in a meeting, um, on the local government side, with a local entity, and the discussion for a half hour was around, um, how do we govern a trial of
(28:58):
AI.
So there's such a reluctance to even leverage the basics that are already around us. Just figuring out how do we get these tools in the environment, so the different teams can understand
(29:19):
how they might use them in coding, or how they might leverage them to, you know, even do things like the basics, around, um, making a document, um, translating it into a different language, or, um, you know, making it maybe more accessible for
(29:42):
readers of different levels of language ability, right? So, just some of the really basic things of AI. Not even able to have those conversations, because they're hung up
About how do we bring AI into the organization in a way where,
(30:04):
you know, it's not gonna exfiltrate data, or the data's not going to train the model, or, you know, all of those questions that were answered a while ago, are still being discussed. And, you know, it would be okay if this was the first meeting that I sat in, but this is probably the 20th, where
(30:25):
topic it's the same topic.
So, you know, here we have the threat actors that are well versed in LLMs and machine learning; the people that are really diving in on the forefront, you know, the bleeding edge. And then, on the side where we're being attacked, we can't even
(30:48):
figure out how to leverage it in the environment at the very basic level, because we're too worried about the possibility of what it could do.
And every day that we don't bring it in, we're falling further and further behind, as far as attracting talent and letting people really learn and develop a skill set around the
(31:09):
tool.
So then they have to do it on their home machines, because people, you know, they want to learn and they want to get better. So now they're doing it on their home machines, and likely taking data sets with them that they shouldn't be taking with them.
So, you know, it just gets frustrating sometimes, around where we are, where we want to go, and how we get
(31:34):
there.
SPEAKER_02 (31:34):
Well, thank you so much for taking the time. And I know it's early in the morning, so thanks for getting up out of bed and joining us here on The Audit. Um, we really appreciate having you on, and spending some time chatting about this stuff with us. So, you've been listening to The Audit, presented by IT Audit Labs. My name is Joshua Schmidt, and you've been joined by Lesley Carhart. And we also have our managing director, Eric Brown, and Nick
(31:56):
Mellum, here from IT Audit Labs. Please like, share, and subscribe, and catch us in the next one.
SPEAKER_00 (32:02):
You have been listening to The Audit, presented by IT Audit Labs. We are experts at assessing risk and compliance, while providing administrative and technical controls to improve our clients' data security. Our threat assessments find the soft spots before the bad guys do, identifying likelihood and impact, while our security control assessments rank the level of maturity relative to
(32:25):
the size of your organization. Thanks to our devoted listeners and followers, as well as our producer, Joshua J. Schmidt, and our audio-video editor, Cameron Hill. You can stay up to date on the latest cybersecurity topics by giving us a like and a follow on our socials, and by subscribing to this podcast on Apple, Spotify, or wherever you source your
(32:48):
security content.