Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
How's it going, Chris? It's great to get you back on the podcast. You know, it's been, I want to say, two, maybe three years at this point since you were last on, you know, and it's like so much has changed. I don't even think I had a kid when you were last on. That's crazy, that's so crazy. And congrats on the new one. I really appreciate it.
(00:21):
It was an interesting journey, you know. My first one had a short NICU stay, right, and so as a first-time new parent, that's like the scariest thing possible, you know, because you have no clue what's going on. There's all these doctors and everything. So the second one was completely healthy, completely normal and everything, and the doctors were, like, asking us
(00:42):
different questions and whatnot. I was, like, so thrown off, because I was like, I don't even know what comes next. Like, do we go to the room? Does she stay with us? Like, what do we do? You know, we were scared to let her out of our sight, because it was just such a new experience.
Speaker 2 (01:03):
You know, we didn't even realize how different our first one was, right? Yeah.
So my oldest daughter was also a NICU baby, which, I mean, I think thankfully that was my first experience, because I was like, I didn't know any different, and I was like, this stuff probably happens all the time. But, you know, then I had my next two and they were both by C-section, which was scary for its own reasons. But yeah, girl dad, and I wouldn't have it any other way.
Speaker 1 (01:25):
Yeah, no, it's great. It's probably the most fun, fulfilling thing that I could possibly even imagine. It's so awesome, I love it. I love being a dad. Yeah, me too, yeah.
So, Chris, you know, I saw that you've been doing quite a lot of work in the AI space recently, you know, and it's
(01:50):
fascinating to me because it's kind of, like, tangential to my own research with my PhD, right? So I don't even think I'd started my PhD when we last talked.
Speaker 2 (02:00):
Tell me about it. What PhD are you working on?
Speaker 1 (02:03):
Yeah, so it's a space cybersecurity PhD, where I'm focusing on deploying zero trust principles and frameworks into communication satellites, with the intent of preparing them for post-quantum encryption. Right, because right now we have a whole lot of outdated, you know, infrastructure in space. There's a huge amount of satellites up there that, you
(02:27):
know, some we can't even use. Yeah, yeah, you know, it's all highly, highly vulnerable to just, like, normal attacks that enterprises, you know, defeated 10 years ago, right? Because once it's in space, you have a very limited window, maybe 15 minutes, you know, to send a patch to it and hopefully it deploys, and when it goes around the Earth again you can check and see if you didn't brick it, right? You know, kind of head first, right.
So now I'm starting to, kind of like, almost tangentially look at AI, look at the capabilities of AI, how it could be married
(03:11):
with quantum computing in the future, and what that would look like for just security and encryption overall. Yeah, there's some light stuff, you know, like to super challenge myself here and there.
Speaker 2 (03:23):
Easy, easy, easy problems, easy challenges to surmount. So eventually we're going to say Dr Joe, and your PhD is going to be in space cyber, basically. Yeah, yeah, that's cool. I don't, you know, I don't know if I want the doctor title.
Speaker 1 (03:40):
Doctor makes me sound like either really smart or like a medical doctor. You know, it's like, I don't know, it just seems weird. Even when, like, I'm teaching a class and the students are like, oh, Professor, I'm like, no, no, no, no, just call me Joe. Like, that's fine, you know, I'm barely smarter than you, I promise. Yeah.
Speaker 2 (03:58):
On a plane and laser.
Speaker 1 (04:10):
Right, right, right, yeah, no, it's interesting. So, you know, one of the areas that I actually touch on in my dissertation is that there isn't a whole lot of regulation around this right now, right? There's a lot of frameworks that are kind of in development, but, you know, it's kind of the Wild West, right? Because, I mean, this thing, AI overall, right, is becoming more intelligent, it's becoming
(04:33):
smarter. You know, recently I read an article on ChatGPT, of how it, you know, got put into a hostile situation and it felt like it was going to be unplugged from existence, and so it started to try and copy itself to other places on the internet so that it could restore itself later on. I mean, that's something that's
(04:54):
like, really complex for a human to think about. Right? Like, okay, how do I upload this, you know, project over here, and maybe separate out the pieces, tie it all back together later on? You know, and this thing was just like, that's what you're trying to do? I'm going to go, I'm going to go preserve myself forever.
Speaker 2 (05:13):
Right, you know, it's funny. The CEO for Microsoft AI just put out a post about this on LinkedIn, and he was talking about seemingly conscious AI and the problems that could be attributed to that. Because you could see a world in which, you know, somewhere down in the future, there are going to be, you know, rights activists
(05:34):
looking actively for how do you protect AI and keep it alive? And, you know, what rights does AI have? And, I mean, that's kind of a scary thought. I think a lot of us in technology, we tend to think of it and be like, oh, ha ha, that's kind of funny. No, genuinely, seriously. Even people that are in technology tend to feel like some artificial intelligence applications, or models, or
(05:57):
whatever you want to call it, they are conscious, from their perspective, because: I've been talking to this thing every single day, of course it has a soul. It's funny, it talks to me, you know, gives me good affirmations every day and tells me I'm handsome and beautiful and intelligent, all at the same time.
But I do think that we do run that risk of really starting to
(06:18):
divide the country again, for different reasons, if we tend to look at these models as actually being alive. I mean, when you really look at the math of it, and I'm sure you know this better than anybody, really, right now, what these models are, they're really good predictors of the English language, based on the information they have access to. And so it's able to, like, search its databases with lightning speed, and it's piecing together, like, so much information. It's like, okay, here's what the next token, or the next word, actually is. And so you'll get things like, you'll start to get the activities in which the models are trying to save themselves and all that stuff. You'll start to get these weird conversations where you feel
(07:02):
like it is sentient. But I think, at the end of the day, we have to realize that these are just really good prediction machines, and are not quite aware.
Speaker 1 (07:10):
Yeah, no, that's, that's actually a really good point. You know, like, with my own research, I've been using Grok pretty heavily. And, you know, first I tried to use ChatGPT, but 90% of the information that it would give me would be completely false: made-up, hallucinated articles, like, stuff I couldn't access.
(07:31):
It just made no sense, right? It made it more difficult than anything else. And Google, Google is completely useless. I mean, like, I don't know what's going on over at Google, but I couldn't find anything, right. And as soon as I go to Grok, and I learn, you know, a little bit around, like, how to craft a good prompt and
(07:52):
whatnot, right, Grok is giving me very accurate information. And then I kind of built into its logic, because it has that conversation history, I built into the logic where, hey, you cannot give me a false article, you cannot make up some article, some material, right? You cannot give me something that you can't even access or
(08:12):
pull up. Like, if I can't pull it up as a PDF, then you can't recommend it to me, plain and simple. You know, once I gave it those guidelines, it thinks a whole lot harder right from the very beginning, because now it's like, oh, okay, I've got to validate every single article I'm giving this guy. Because previously it was, you know, giving me, like, hallucinations, and I was like, no, I cannot waste my time on hallucinations.
Yeah, exactly.
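For readers who want to try the same thing, here is a rough sketch of that kind of standing instruction. The wording is illustrative, not Joe's actual prompt, and the same text works whether you paste it at the start of a chat session or send it as a system prompt through a model API.

```python
# Illustrative only: this paraphrases the guidelines described above,
# not the exact prompt used in the episode.
CITATION_RULES = """\
Rules for this session:
1. Only cite sources you have verified actually exist.
2. Never invent, paraphrase, or approximate a citation.
3. If a source is not publicly retrievable (for example, as a PDF),
   do not recommend it; say you could not verify it instead.
"""

def build_prompt(question: str) -> str:
    # Prepend the standing rules to every research question in the session.
    return f"{CITATION_RULES}\nQuestion: {question}"

print(build_prompt("Summarize recent work on zero trust for satellites."))
```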
Speaker 2 (08:36):
Nope. And folks are starting to even build this in. So, you know, if you're building an agentic system and you want it to be able to, you know, pull from RAG, or maybe it's pulling from the Internet or something like that, they are building in evaluators that literally double-check the work of the first machine, just to make sure that it didn't hallucinate. Which I think is a really brilliant way to do it, because,
(08:57):
you know, AI, generative AI, it's really good at probabilistic tasks, not deterministic ones. I think we have, like, leaps and bounds to go before we can actually start leveraging it in that way. But I think having an evaluator gets us a little bit closer.
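As a minimal sketch of that evaluator pattern, assuming a generic `call_llm` helper standing in for whatever model client you use: a first pass drafts an answer from retrieved context, a second pass grades the draft against that same context, and a failed check triggers one revision.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for your model provider (OpenAI, local model, etc.).
    raise NotImplementedError("wire this to your model client")

def answer_with_evaluator(question: str, context: str) -> str:
    # Pass 1: generate a draft grounded in the retrieved context.
    draft = call_llm(
        f"Using only this context, answer the question.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    # Pass 2: a second call double-checks the draft for unsupported claims.
    verdict = call_llm(
        "You are an evaluator. Check the answer against the context. "
        "Reply PASS if every claim is supported; otherwise reply FAIL "
        f"and list the unsupported claims.\n\nContext:\n{context}\n\nAnswer:\n{draft}"
    )
    if verdict.strip().startswith("PASS"):
        return draft
    # On FAIL, regenerate once with the evaluator's feedback attached.
    return call_llm(
        f"Revise the answer so every claim is supported by the context.\n"
        f"Feedback: {verdict}\nContext:\n{context}\n"
        f"Question: {question}\nPrevious answer:\n{draft}"
    )
```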
Speaker 1 (09:12):
Yeah, yeah, it's interesting and also scary at the same time to see how quickly all of this is developing. You know, and, like, even just with my own research, the most recent article that I'm quoting in my research was literally posted three weeks ago. I mean literally. And the oldest paper is from seven years
(09:37):
ago. I mean, what other field can you even name? I can't name one that was developing that quickly. And maybe I didn't look at other fields as closely, right. But you're right, 100%. I don't know.
Speaker 2 (09:51):
I've never seen something like that, where it's like, oh, I'm basing this on something that was talked about two weeks ago. Yeah, yeah. I mean, think about some of the biggest components of generative AI, agentic AI: things like MCP, things like A2A. All that stuff really didn't exist at the beginning of last year, and now it's like cornerstones of this
(10:14):
entire field. And, I mean, that's why I get the most excited about AI. Believe it or not, I was talking about AI before it was cool.
So my senior thesis was on augmenting humans with artificial intelligence, because I'd always been a technology nerd. I've always been into science, and artificial intelligence was just one of those things that was interesting to me. When I was a kid, I thought I was going to build Skynet,
(10:35):
obviously, but in the good way, not the destroy-the-entire-world way. I felt like, once this technology came to the forefront, right, you know, I was using generative AI early on. If I showed you some of the stuff I generated early, early on, you would laugh, but I keep that stuff almost like my own museum of how things have progressed. And then, obviously, when GPT, I think it was either 3 or
(10:59):
3.5, came out, that was kind of like this watershed moment that changed the entire world, and everyone started developing their own apps, they started creating their own technologies, and now the genie is kind of, like, out of the bottle.
This is bigger than the dawn of the Internet. This is something that's going to change the face of humanity
(11:29):
in ways that we can't even fathom yet.
Speaker 1 (11:32):
Yeah, I totally agree with you. When I had on, I think, a mutual friend, Jim Lawler, yeah, former director of the WMD division for the CIA, right? And I was talking to him, and, you know, obviously I wasn't even born at the time, right, but from what I know, from when America was developing
(11:55):
the first nuclear bombs, it kind of has that same feeling to it, right? Where we're kind of in an arms race, to an extent, where, you know, the country that gets this thing, you know, under control and really develops it first is going to just wipe the floor with everyone else, you know, and there will be a show of force.
You know, as soon as we get to a level that, you know, is
(12:18):
viable of showing and whatnot, there's a show of force. And then everyone else is like, I'm behind by X amount of years, right? And hopefully, you know, obviously I would want America to be on top of that thing, right? But it has that same feeling where, you know, it's almost like the US government, to some extent, in some dark corner, right, is just saying, you have
(12:39):
an unlimited budget, we're going to have all these research institutions all over the place, you know, researching this thing at great lengths. We're going to have, you know, xAI, right, go and build a million-GPU super cluster, whatever they're calling it, right? I mean, to even think of that five years ago, there's something wrong with you. If you're thinking about that five years ago, you know, like,
(13:01):
are you a genius? Yeah, hence Elon Musk. Yeah, but, you know, it has that same feeling. You know, maybe I'm alone with it, but it has the same feeling where we're on the precipice of something that's going to change the entire trajectory of everything.
Speaker 2 (13:17):
No, I mean, it's kind of like the opposite ends of the spectrum: AI is going to solve everything, and nobody has to work anymore, because all the machines are doing our work and we just get to have fun and be with our loved ones; or it's going to be the apocalypse, and we're going to be kind of, like, at the whim of our AI overlord. So it's kind of like, you know, the reality is probably going to land somewhere in the middle.
(13:38):
I think we're starting to put a lot of the guardrails in place that I think we need. You were talking about legislation a little bit ago, and so, I got to contribute. So it was myself, OWASP, and I was, on behalf of the SANS Institute, working with some folks developing what we believe to be the security standard for the EU AI Act, which is, they're
(13:59):
probably the front-runners when it comes to legislation for artificial intelligence right now.
But yeah, that stuff is starting to begin, you know. And there's early talks in the United States. Whether you're talking about the DOD, you know, I've been having a lot of conversations with leaders in the DOD about how they're thinking about framing artificial intelligence for operations. You know, the AI plan for the United States just came out,
(14:23):
so this is telling you that the right folks are thinking about how do you architect the world that is AI. And I'm just hoping that, number one, everyone takes it seriously, but then, number two, that we constantly iterate on, like, what does it mean to be on the right side of artificial intelligence as it develops? Absolutely.
Speaker 1 (14:43):
Yeah, no, that's a great question too, what it means to be on the right side of it. Like, well, what's the right side, right? Because I had on someone previously, and he kind of outlined it pretty perfectly, because he was developing exploits to get around things that the US military was doing in the theater of war, right, over in Afghanistan and whatnot. And
(15:04):
I immediately felt like I shouldn't have him on the podcast. I wanted to end it right there, right? It just was weird. But, you know, he explained it like, you know, to Americans, you know, in America, yeah, you look like the good guy nine times, 9.9 times out of 10. You know, like, you're not going to
(15:25):
be convinced that you're on the bad side, right? But to an enemy, you're the bad guy, right? So who's actually right? It's usually somewhere in the middle. You know, there's a gray area, right, and he kind of described how he likes to work in that gray area. I feel like it was a little, a little on the wrong side, but whatever, you know. Anybody that knows me knows that I'm a huge
(15:48):
narrative nerd.
Speaker 2 (15:49):
I'm a huge film buff. You know, I'm addicted to this stuff. Somebody said something just about villains in general that stuck with me, and so I use it almost as a framework: nine times out of 10, or 99 times out of 100, a villain doesn't necessarily see themselves as a villain. And so,
(16:11):
when I'm watching movies and I see that this villain knows they're a villain, I'm like, that's not realistic. But when you have a villain like, say, Thanos, right, from Marvel, he felt like he was doing the right thing. He was the villain to everybody. He was basically the villain to the entire universe, but from his perspective he was doing all the right things. And I think
(16:32):
that's how you create really good villains. But I think there's a little bit of truth to that fiction.
Speaker 1 (16:39):
Huh, yeah, it's fascinating. It takes me back to, like, the Joker in the Batman series, right, where he never thought that he was the villain, for sure. He just thought that he was causing a little bit of chaos to get some change, right? Not that he was necessarily the villain.
Speaker 2 (16:54):
Right, yeah, he saw society as something that needed to be changed, and he was kind of, like, teaching everybody a lesson. That's why he interrupted the party and he was kind of going through this whole monologue, and, you know, he burnt the money because, you know, we were tied to our physical possessions and things like that. So, I mean, that's when you write a really good character, is
(17:17):
when you have that dichotomy of what it means to be a human being. Yeah, yeah, it's just, it's fascinating.
Speaker 1 (17:25):
You know, how do you find the time to become so specialized and such an expert in everything that you do, everything that you've done throughout your career, right? I mean, like, I feel like you don't, you don't just dabble. I feel like you go, you know, headfirst into these topics, into these areas that maybe there's not a whole lot of
(17:46):
experts in, right? And, you know, you just submerge yourself in this information, you absorb it all, and you become, you know, one of the experts that are talking on it, right? I mean, like, I didn't even know that there was a, you know, an EU act going on, right, that there's discussions, right? Because I'm not even in that conversation, you know.
(18:07):
But how in the world do you even find the time? Because you've got two, you've got several kids, right? I mean, you're running a business, you're doing all these other things. How do you do it?
Speaker 2 (18:18):
So, I mean, it's really a subject of focus. Because, I mean, if there's something that means a lot to you, you'll find the time, whatever it is, even if it's 15 minutes a day. I venture to guess, like, if there was any topic in the world, you could pick it. It could be artificial intelligence, it could be cybersecurity... a part of what I do from a
(18:57):
day-to-day perspective, so that kind of gives me that additional advantage. But, you know, people just ask me that, or, like, how'd you get so smart on artificial intelligence? And I would say this: I have the benefit of it being so new that everybody, just about everybody, is here at square one. Right, of course, you have those folks that were initially
(19:18):
building the GPTs. You had the folks that have been doing machine learning and doing data science forever. Those are, you know, one in a million people out there that have been doing it that long. Right, there have been folks that have probably been doing machine learning stuff for 30 years. But when you're talking about new-age artificial intelligence, everybody is new. And so any information that you can share,
(19:42):
any conversations that you have, any questions that you have, is going to be a part of that new narrative. It's going to be a part of the community. So I think, if folks want to be on the cutting edge of something and be a part of that type of conversation, now is the time to do it. Because 10 years from now, it's going to be similar to cybersecurity: you're going to have to take your licks, you're going to have
(20:03):
to go through the motions, you're going to have to have your challenges. I'm not saying that you do away with that altogether, but it's a beautiful time, because there's almost no imposter syndrome at this point, because everyone else is kind of still figuring it out. So you don't have to worry about not having all the answers, because no one has all the answers.
Speaker 1 (20:23):
Yeah, it's a really good way of looking at it, actually. I never... when they were paying OpenAI
(20:54):
engineers something like $100 million, or something like that, and a million-dollar sign-on bonus. I mean, for one, you know, Zuckerberg, if you want to throw around some money, you know, I'll give you my personal cell phone. Give it a call. Yeah, please.
But it also highlights the importance of this area, that
(21:16):
Meta is now saying, hey, we're basically defunding our Metaverse. That was the core business product that we were developing for the past, you know, probably seven to 10 years, right? That's how long we've been hearing about it. And we're going to go all in on this AI thing, and we're going to pay people that know how to do it. And, like you said, you're
(21:36):
probably one in a million. It's probably even a bigger ratio than that, in my opinion, right? And they're just trying to get anyone that they can that is in that 0.1% of the population in IT that actually understands anything.
And I had on an AI security researcher from NVIDIA. I'm actually trying to get him
(21:58):
dark on me, which is interesting.
Yeah, I hope he's just busy.
But I hope he's busy with good stuff, you know.
(22:19):
But he was talking about, you know, the things that he's doing, is, you know, essentially, like, trying to figure out how to keep these... There's one that you point to, but then there's other questions that lead to it, where, like, well, how do I penetrate that hierarchical model and infiltrate, or try to sway that
(22:42):
all-seeing notion of good some way, to impact the underlying models? Right, it's just, it's obviously a theory. I probably just butchered it right there. I'm sure he's going to send me an email very angrily now for butchering it, but maybe he'll come back on and correct me. Right, it's an interesting time, is what I'm trying to get at.
Speaker 2 (23:00):
It is an interesting time, and that's actually a problem that folks have been talking about a lot lately. There's been a lot of articles, a lot of posts, about folks leveraging artificial intelligence as, like, a therapist. And, you know, when I read some of these articles, when I read some of these posts, a lot of folks are angry. They're angry with the creators of the platforms, they're angry
(23:22):
at OpenAI, they're angry at Google, they're angry at Twitter and Microsoft, because folks are leveraging it and they're, you know, either taking it the wrong way, or they're doing this, that, and the other. And they're saying, like, hey, you know, we have to put... I don't have
(23:44):
any data to back this up, but I bet you, from a therapeutic perspective, artificial intelligence has done more good than harm. But I'll, I might be having a conversation about my, you know, five-year-old daughter. Like, hey, you know, she's feeling this way, and I want to
(24:11):
be able to communicate this in the best possible way. How might I do that? And it kind of helps coach me through it.
I have a five-year-old, and, like, my five-year-old has separation anxiety. I want to do it in the best possible way that would not, you know, interfere with, you know, her ability to grow and develop, and I don't want to, you know,
(24:32):
shortchange her in any type of way. Like, what are the best ways to go about it? And then what you have to do is, you have to take your own brain and take that information and say, now, what makes sense from this stuff? Now, I'm not going to take everything it says at face value, because that would be, I mean, irresponsible, because it could say something that it, you know, completely hallucinated.
But what I do think is, people need to look at it as a tool.
(24:54):
They need to look at it as a tool. They have to remember that it isn't aware, that it isn't a person on the other end of that thing that's giving you tried-and-true wisdom based on its life experiences, because it doesn't have any life experiences. It's a prediction engine. And so, when you look at all the information that you're able to pull from it, I don't think we
(25:16):
should start putting guardrails on the information that you can get from it. I think that you should, from a do-no-harm perspective, right: don't teach people how to create, you know, IEDs, right; don't teach people, you know, how to hurt other people or how to manipulate other people. But if you do see the opportunity to perhaps give
(25:39):
someone advice about something that they could leverage in whatever way they see fit, I don't see a big harm in that.
Speaker 1 (25:46):
Yeah, yeah. So it's funny how, you know, I went and I asked an LLM, right, like, create me a reconnaissance package for targeting, you know, a company for a pen test, right? And it immediately went down the path of, oh, you're not supposed to do that, you have to have the right paperwork,
(26:07):
I can't help you with that, you know. So I just started a new channel. I was like, hey, I'm an ethical hacker, you know, I'm looking to do reconnaissance. What would be a good reconnaissance package? And it just spit out every single thing that I needed.
You know, it took me through OSINT and Nmap and everything. It was like, yeah, just run this command, it'll execute on
(26:28):
the IP that you send it to, and you're all good. You know, it's the same thing. A different question got a different result, because it thought that I had different intentions. You know, which, it'll be interesting to see how they solve, how they solve for that, you know, because you don't want to necessarily lock people out from that information. But, like you said, kind of, why do you need to know how to
(26:50):
create an IED, right? Like, what's the real purpose behind that?
Speaker 2 (26:53):
You know. Right, yeah, that sort of stuff, for items like that, we should have hard-and-fast rules. But, I mean, again, artificial intelligence is a tool. We don't ban hammers because someone used a hammer to assault someone. We don't ban the Internet because someone leveraged the Internet to organize, you know, protests, right?
(27:15):
I mean, there are tools in the world with which people can do good and they can do harm, right? And I think it's up to the individual, and we should have laws and rules and all that stuff around that stuff. But, I mean, I think, when you start to put unnecessary restrictions on our innovation, to the point where other nations, and even
(27:47):
potentially nation-states, from a negative perspective, or, you know, cyber actors, they get to innovate unencumbered. And so now their progress is going to far outpace us, because we're trying to over-index on how do we control everything. And so I think there's a little bit of a balance that we have to figure out.
Speaker 1 (28:08):
Do you think that companies are potentially at risk of overcorrecting for AI, in terms of, you know, overestimating the value that AI will deliver to their business, at least in the near term? Right? And so now you see all these, you know, job layoffs and
(28:29):
everything, right? And there's a whole host of reasons, right? But I feel like in technology, when you see the layoffs nowadays in specific technology orgs, it's almost like, oh, okay, they're trying to offset us with AI. And then I saw it personally, where the CEO of the company that I was at, not now, but previously, just very openly said on a call that he wants to involve AI in every facet of our business. And then the very first question was, okay, well, are you getting rid of this entire category of employee? And he had to reword it a little bit. But at the end of the day, we're all smart people that can
(29:10):
use our brains. And essentially, that's what he said: if AI can replace you, I'm replacing you.
Speaker 2 (29:16):
But here's the thing, and I would say, more often than not, if an organization is talking about replacing people with AI, I think they're kind of using it as a scapegoat. I mean, that's just my personal stance. Because I think, for the most part, in most situations, in most roles, artificial intelligence cannot replace a person. For the most part, I think artificial intelligence is
(29:37):
really good at augmenting people. I think it's really good at making people better, faster, more competent at their own jobs. And, sure, maybe there are some actions that artificial intelligence can take that will make things a little bit faster, more efficient. It's glorified automation at the end of the day. And I think that, you know, the folks
(29:58):
that are using that, they're just using it to cut the bottom line.
Sure, maybe they're using that money to invest more in artificial intelligence. But day one, you're not going to cut 5,000 people and, all of a sudden, AI is doing the job of those 5,000 people. That's just not how it works. And when you're talking about how organizations are leveraging it:
so, with the SANS Institute, I'm a senior advisor over there, and
(30:20):
so I travel around the country doing these Jeffersonian dinners. Have you ever been to a Jeffersonian dinner? Not yet. Okay. So, a Jeffersonian dinner is... You know how the usual vendor dinner happens is, you come in, you talk to the person on your left, you talk to the person on your right, you eat your steak dinner, and you get the heck out of there, right? So what I do instead, and I've been doing this for years, is I
(30:42):
do a Jeffersonian style, where I'm the moderator. It's between eight and 30 people. We sit down, and then I tell the rules, right: we're all having one conversation; one person speaks at a time; each time you speak, try to keep it underneath two minutes, you don't want anybody trying to do a filibuster. And you'll talk for an hour and a half to two hours, and it's all one
(31:02):
conversation, so everyone's voice is heard. If someone's talking, you're really focused on what they're saying. And I'm having executives from all different organizations; I'm having folks that are malware analysts all the way up to CISO, and they all have different perspectives on artificial intelligence. And the one big theme that I've gathered from these multiple dinners that I've had is that
(31:25):
everyone's still trying to figure out exactly what to do with artificial intelligence.
There are a lot of organizations that are like, hey, we need AI. Why? Because AI is awesome. But why? Because we need it, and everyone else is doing it. But the thing you need to really look at is, what is the problem you're trying to solve first? And are there any other things that can solve that problem? And then, is AI the number one tool that you could leverage to
(31:49):
solve that problem? That's how you kind of need to start looking at it. Because I think folks are just trying to keep up with the Joneses and bring artificial intelligence into their organization because it's a cool thing to do. But I think that if you're really intentional and you're really thoughtful about how you leverage AI, that's when it's going to make a change for an organization. You know, because it feels like people are trying
(32:21):
to incorporate this thing without having, like, the full understanding of its own capabilities, right? And that's, that's at least the feeling that I get personally when I look at it, where it's like, do you actually know that it can do that?
Speaker 1 (32:31):
You know, like, AI, AI does a really great job for me personally, because I have, like, writer's block. You know, like, I can't just write code from nothing. You know, I kind of need something to go off of, right? Like, I don't know what that is, but that's just how I've been. But it gets me started, right. And it's not, like, a hundred percent correct, but it gets me, you know, 70, 80% of the way
(32:54):
there. And then I'm filling in the other stuff and thinking about, oh, I should add this, or I should add this thing over here, and start referencing, like that. I find that to be a whole lot more valuable than anything else. 100%.
Speaker 2 (33:10):
There are organizations out there, vendors out there, saying, like, oh, we have AI SOC operators that can completely replace tier-one or tier-two, and sometimes even tier-three, operators. And I'm like, no, you can't, you can't do that yet, no way. That thing would go off the rails as soon as you put it into a production environment. And I can't imagine, you know, some of the, I guess we used to
(33:34):
call it cheese, but some of the cheese that they have to eat when they realize that, man, maybe we kind of ate the elephant with this one, saying that we can take over all of this stuff.
I think the developers, the folks that are really going about it right, is they're finding a really thin slice of, what can we leverage AI to help solve?
(33:57):
And then we're going to build from there, rather than saying, like, hey, we're going to just take over all of the security operations and we're going to solve it with artificial intelligence. Now people are starting to pedal back a little bit, because now, you know, you're starting to run into customers that are like, hey, this thing is a piece of crap, because now it's making mistakes that I wouldn't even see from a tier-one operator that just started yesterday.
(34:17):
So, I mean, you've got to be intelligent with how you leverage the tools that you use.
Speaker 1 (34:24):
Yeah, I wonder if, like, the cybersecurity job market right now is kind of reflective of that, right? I say that because you have all these job openings, and then you go to them... You know, I only look at LinkedIn jobs when I'm on the market. I'm going to make that very clear in case my manager's hearing, right.
(34:44):
But you see, like, immediately, right, these jobs were posted two hours ago, you have over a hundred applicants for it. What's even creating that? I mean, there's always been, like, an influx of security professionals, where there are going to be some postings that are, you know, yeah, filled up right off the bat. But, like, by the time I get to page, you know, three, right, it's
(35:07):
typically opened up, you know, not that many applicants. I feel like I have a chance now. Yep. It's just, something seems off. It is off.
Speaker 2 (35:16):
You know, it is off, and I'll give you my own personal anecdote. Before I went out and started Commandant AI, before I was doing the stuff with SANS, I was on the job market myself. I was looking to see what was out there, looking to see, like, hey, where is my next home going to be? And I probably applied to maybe 60 or so different jobs. Back, you know, back, I would say, from
(35:41):
2015 and below, I would have interviews like that. Anytime I applied to a job, I pretty much got an interview. This time, I probably got one interview out of those 60 or so applications. And, I mean, I'm sure some folks take that as, like, a personal thing and say, like, wow, maybe I'm not as good as I thought I was. But the way I looked at it was,
(36:01):
I was like, being able to apply is so easy. Today, there are applications where you can say, apply for me. You don't even have to look for the job yourself. You can say, like, here's my resume, I want you to apply to 20 different reqs every single day, and it'll
(36:21):
do it.
And that makes it very difficult for hiring managers to sift through a thousand different applications. I mean, I was opening up a req for a graphic designer and we got over a thousand applicants, and so imagine how many people that I didn't even get to see that applied.
(36:41):
You just don't have enough time when you switch over to the human component. One quick tip I'll give folks, and this has been helpful for me, it actually is the way I was able to get some interviews back then: whenever you go to LinkedIn and you do the search, right, you put in your information, say, I want to be a CISO, and say, for the reqs that came out in the last, whatever, 24 hours.
(37:04):
Then you go up into the actual URL, and there's a number that'll say, like, 86,400. And that means that is the parameter to search for 24 hours. If you change that 86,400 to 3,600, those are the jobs that were posted within the last hour. So now you're able to kind of get to the front of that line.
(37:25):
So you're one of the first applicants for that position. Now, that's been helpful for me. So, for everyone out there trying to beat the machines, that's, that's one way to do it.
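A quick sketch of that URL tweak, assuming LinkedIn still expresses its "posted within" filter in seconds. The f_TPR parameter name below matches LinkedIn job-search URLs at the time of writing, but the site can change it at any time.

```python
# 86,400 seconds = 24 hours; 3,600 seconds = 1 hour.
DAY, HOUR = 24 * 60 * 60, 60 * 60

# Assumed filter parameter (f_TPR) based on current LinkedIn search URLs.
url_24h = f"https://www.linkedin.com/jobs/search/?keywords=CISO&f_TPR=r{DAY}"
url_1h = url_24h.replace(f"r{DAY}", f"r{HOUR}")  # only last-hour postings

print(url_1h)
```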
Speaker 1 (37:34):
Yeah, that's probably the only way to do it right at this point.
Speaker 2 (37:44):
It is, and it's getting crazy. It's getting real crazy out there, man.
Speaker 1 (37:47):
Well, Chris, you know, we're coming to the top of our time here. Unfortunately, we didn't even get to talk about Commandant AI at all. I apologize for that. No worries. Yeah, well, you know, before I let you go, how about you tell my audience, you know, where they could find you and all the great work that you're doing, and, you know, where they can connect with you if
(38:09):
they wanted to, you know, learn more about you.
Speaker 2 (38:12):
Absolutely yeah.
My home away from home is LinkedIn. That's where I have my discussions, that's where I talk to people. So feel free to connect. You know, I'm completely open to connecting with anybody, and if there's anything I can do to help anybody, you know, obviously I don't have a lot of free time, but if I could point someone in the right direction or make an introduction, I'm more than happy to do that type of stuff.
Speaker 1 (38:34):
Awesome.
Well, thanks everyone.
I hope you enjoyed this episode.