Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to Cybersecurity Today, the weekend edition.
My guest today is Rob T. Lee.
He's the Chief AI Officer and Chief of Research at the SANS Institute.
Now, Rob has a unique view, being at the intersection of education, deployment, and security as it relates to artificial intelligence, and it's
(00:20):
something, if you follow Cybersecurity Today, you've been hearing the stories.
Now, for those of you who might not know me, let me preface this: I'm neither an AI apologist, nor am I an opponent of AI.
I think what I am is a realist and a pragmatist, and AI, in my never humble opinion, is unstoppable.
(00:42):
It offers too much in the way of potential benefits.
But AI also, in my opinion, represents one of the greatest threats to our systems and infrastructure.
And I would argue that as an IT person, I think it's fundamentally insecure.
Not because security's impossible, but because, like so many other technological
(01:05):
advancements, no matter what we say, and we always say security should be built in and not bolted on, we then run off and build something.
And in a transformational technology, we do it twice as hard.
So we add that level of risk.
Now there's a second level of risk, and that's like any other transformational technology.
(01:26):
Cloud, SaaS, mobility, these are driven by a business desire for results, and like cloud, SaaS, and mobility, when business desire conflicts with security, security takes second place.
It's the way it's always been.
If you resist, the business will either steamroll right over you,
(01:49):
or they'll subvert the rules and bring it about surreptitiously.
Now, there may be some businesses that are so highly regulated that those who manage risk may have the upper hand, and all the best to you.
I wish we had that job, but most of us don't.
And I don't wanna be flip about this.
Business leaders are rewarded for business success, not for the risks they avoid.
(02:13):
And so I, I may also be a little cynical, but business leadership isn't always ready to step up and fall on their sword when there's a mistake.
Especially when it involves things like technology and cybersecurity failures.
But that's cynicism.
My understanding of business comes from experience.
(02:33):
I've been a product executive and a technology executive.
I've lived the need to show results.
I've been to the meetings where there's been a failure or a crisis.
And as one of the characters in my new book says, if you can't spot the scapegoat, you are it.
So, again, I say this not to be flip, but to really point out the difficulty
(02:53):
of those who have to manage this risk of AI and those who must try to introduce the dreaded G-word: governance.
These people need the wisdom of Solomon and the courage of David.
So I'm glad to invite Rob in for a conversation on this, not to provide you with all the answers, but to start a dialogue that might help
(03:13):
point us in the right direction.
Welcome, Rob.
Hi.
Thank you.
Now I'm fascinated.
Can I find out about your book?
It sounds like, if you can't figure out the scapegoat, it's likely you, or I think that's what you said.
Yeah, no, the book is, is about a failed IT guy who encounters an artificial intelligence.
So it's, yeah.
There's, there's a bit too much of my own corporate career in there.
(03:36):
Yeah.
But before we get started, let's talk about SANS and your role there, and then we'll, we'll get back to the subject again.
So, I've been with SANS for a long time.
And I, like most of the instructors, am a practitioner.
I was starting at SANS back when I was 24 years old, so this is like decades ago now. I was working in the AFOSI at that point, and then I
(04:01):
worked offensive operations in malware development and discovery, vulnerability discovery, for about five years, doing stuff, you know, for various people.
And then went over to Mandiant, was at Mandiant as incident response director over there.
Helped, you know, write the initial incident response reports, a couple of them that ever came out.
(04:22):
And then also really helped spearhead the cyber threat intelligence capabilities that they did.
Along the lines of SANS, I wrote and developed their initial incident response curriculum, and I used that as a test bed for a lot of capabilities and ideas that became industry known.
I came on board full-time at SANS about four years ago, running their content and curriculum, and now I've actually gotten back to my roots, working on our
(04:44):
research and focusing on AI and business.
Not only AI from cybersecurity, but from business transformation.
I've been a part of multiple startups, have consulted with hundreds of companies, and went to Georgetown and got my MBA there.
So I have a massive background in the business side.
And applying that to technology, I think, is even more prescient today
(05:05):
than it has ever been, because you're dealing with new technologies that are fundamentally fantastic and almost magical, to the point where we also need to take a look at, how do we secure it?
Yeah, and I think that's an important point.
You said this, two things, and my question, 'cause that was what I thought SANS was about: it was a lot of practitioners
(05:25):
who were doing a lot of the teaching.
Yes.
And, which is so important.
I don't know a single full-time academic that SANS has.
They're all in the field doing stuff, working for other organizations, consulting on their own, and then they come and teach.
You know, it's like you want the Navy SEALs to be taught by people that were in combat.
You know, this is kind of the same thing.
No, we're not Navy SEALs.
That's a bad analogy.
(05:46):
That's like another, next level.
But again, you actually want that level of experience, rather than feeling like this person never, you know, had a single bullet shot at them.
And, you know, that's that experience.
We come in here: here's real-world practical advice, technical skills, and then here's what it's gonna look like next week for you.
Yeah.
And I'm, I'm always careful not to dump on academics,
(06:08):
although I've had my share of being dumped on. I taught at one of our best universities, and I look at people who I know had never done any of this in the real world.
Yeah.
Who are looking at me saying, well, you're not academic enough.
And I'm going, well, you're not practical enough.
But I think, you know, really good
(06:30):
research has real value.
Yeah.
Really good practical experience has real value.
But we're, we're all after results.
And so, well, I think you're being too nice.
I mean, we could dovetail on this pretty quick, pretty quick.
But again, it shows, even with what just happened at Starbucks. Their previous CEO, the company was like a hundred, 80 billion, not 80 billion, like 150, whatever it was.
(06:51):
And then that CEO came in and the value went down by 40%.
And then just by hiring the former CEO of, I think it was Chipotle, and he'd turned around all these businesses, the stock valuation went up.
But that's someone who's been there in the fight doing it, versus, I believe the former CEO was more of a, theoretical.
You need someone looking at both, because someone who's in the fight
(07:13):
can't take the horizon view to do research on what's coming next.
I'm always careful to say research, not theoretical.
Like, we're not physicists, but getting good data is something that I don't think we do well as an industry.
And I respect the people who get good data and help us work with good data, a hundred percent.
When I was taking my degree, I had two profs who were both
(07:35):
business people. One of them ran a franchise, and he took us into it.
And you know what I learned from him?
If you run a franchise, you know the cost of a cup.
And if you don't know the cost of a cup, don't run a franchise.
Yeah.
If you're not gonna die on $10,000, you're gonna die on 50 cents.
You know, like that's, and he was just eye-opening.
(07:55):
And the same thing in tech: we had one guy, I don't know what he did, but he took us down to the lab.
And we were seeing the first computers there, in the university lab, and we started to play with them.
Yeah.
And you know, that was just that, magic.
And I think that's one of the things, maybe we love the practical, but it's the magic of doing things, you know?
(08:17):
And as much as my intro was about the tension between security and business.
The reality is, my friend John Thorpe, who was a great speaker, one time used to put it, he'd say: put your hand up.
How many of you are from IT?
And people would put their hand up.
He said, how many of you are from the business?
And people put their hand up.
He said, nonsense.
You're all from the business.
Yeah.
(08:37):
Anyway, back to our topic at hand.
What'd you think of the intro?
Do you, do you, is that the situation you see, or do you see it differently?
Wow.
Such a great question.
I think security in general is running into a very fascinating schism between business interest, where we're headed in terms of the application of a new technology, and how security is being asked to wrestle with these things.
(09:03):
So, being exposed to a lot of different parts of the organization: I was at AI4 recently, and I ended up creating, I think they called 'em round tables.
What it was is 40 people who are non-cybersecurity people.
I said, I'm gonna lead this.
Take a round table on concerns and risks surrounding AI.
(09:26):
Inside your organization, how do you potentially think about it, you as the business leads of various parts of the organization?
What was interesting about this: I was all expecting, I had kind of a set of beats, questions I wanted to ask folks to get conversations started, because we're all sitting in this massive circle, about 40 of us.
And we had folks from finance, the International Monetary Fund, USA, Audi, you name it; business, military was in there.
(09:48):
Everyone, it wasn't that they're worried about getting hacked through AI or adversarial techniques and prompt injection and all the cool and various terms that people are using.
The one thing that they said is the business threat. The one thing at the top, like 90%, is shadow AI.
And that got me thinking, and it was like, I'm trying to tie a couple knots together here.
(10:09):
And I took a lot of notes during that, a lot of them.
And I said, I think people are having a hard time in cybersecurity because there are three balls that you need to somehow juggle simultaneously.
And going back to some of the beats that you said, I'll start with the last one, the hardest one that you mentioned earlier: the governance.
(10:33):
You need to somehow have a policy and a framework that is applicable inside your organization.
The second one is, you and your security teams, with what you're doing here on the media side, but also your technical skills: in finance, in HR, in product and sales, every part of the organization is being asked to start utilizing this technology.
(10:54):
We don't know exactly how; go do some experimentation on how to utilize it. The security team is no different.
Become a more advanced, superhuman-powered SOC analyst, incident responder, offensive operator, you name it. On their team, it's like, you're supposed to use this magical interface and be able to accelerate your capabilities.
So we have: govern, utilize.
Oh, by the way, the security team is also asked to somehow figure out,
(11:18):
everyone is using these magical technical capabilities here.
How do we make sure that those things are safe for the organization?
We need you to protect those.
And by the way, we're gonna build our own internal AI capabilities that are gonna serve our customers too.
So we have three things that are suddenly dropped on the organization.
(11:38):
Customer-facing stuff.
You know who your clients are, you know what you're doing, your business services, product.
You have your company and your people that are starting to use these tools for their own workflows.
Protect that, make sure nothing bad's going to happen there.
So you have the protect side, and suddenly security has faced a problem it's never had before.
(11:59):
You need to simultaneously have answers for, what is our governance? By the way, you need to use it, figure that out.
And, oh, by the way, you need to protect everyone else while you're doing all these, these other things.
And it's created a little bit of a meltdown in our industry because of that issue.
And I, I could come back to what that meltdown is, but I just wanna pause, like, do you see this?
(12:21):
Yeah.
I mean, am I off base?
Are these... No, I think you're right on. One of the things that I would've suggested is we should kill the word governance.
Oh yeah.
We should talk about how shit gets done, and, or how things get done, and who's responsible, because we've lost it in a pretty word.
Yes.
And, I think we tend to do this in IT.
(12:41):
We blame marketing for having their terms.
We have our own terms, and people go, governance?
What's that?
It's the reality of how we make... well, no, no, they're, they're using GRC.
You're in a presentation.
Well, GRC, GRC, GRC.
And I'm like, ah.
It's like people using agent, or generative AI, they're putting these... I'm like, okay, I hate the word agent, and the same thing.
(13:01):
Yeah.
But it's, it's no longer governance.
They're just like, well, the GRC, da da da.
And that caught... you're exactly right.
It's like that word, those words, tend to cause me to go, okay, are we gonna have the, I'll just say it, like the drinking game.
Everyone, every time someone says, as cybersecurity personnel, GRC.
Okay, everyone, you know what to do right now.
(13:21):
Tequila.
Yeah.
No, but I think that's part of the problem.
So let's talk about governance.
Let's break that down while we're talking about it.
What is it that governance means that we should be talking about?
Oh, in very simple terms, being able to align the business to its business needs.
Be able to let the business be innovative.
Let the business flourish.
It's kind of like, okay, take away the business.
(13:43):
It's kind of like letting your kids be kids so they could learn, so they could develop grit. Every now and then they'll fall down and have to have that pain that hits.
But you don't want it to just be like, hey, you're riding a bike with no helmet.
You still need, like, basic things and wrappers around the business, your kids, anything else. You want them to flourish, you want them to learn.
(14:05):
You want them to be the best they could possibly be.
Security's job, and governance, is to create the bicycle helmet that implements safety and risk reduction.
Security done properly, with proper governance, is going to enable the business, in the perfect world.
So when you think about governance, instead of thinking about restrictions, it
(14:28):
needs to be considered enablement, safely.
I think your bike example's a great one.
There are rules of the road.
There are things you do, things you don't do.
You don't go whipping through an intersection without looking.
Yeah.
You make sure you wear a helmet.
If we want our kids to be safe, we teach them the rules so that they can play.
I think it's a great example, and I think that's what we've lost in governance:
(14:50):
the sense of the practicality of, these are the things we need to do, you know?
Well, it's the same thing with police officers.
Police officers basically should allow you to leave your home and to walk safely down the street with your kids to do things; it's an enabler.
In other cases, you end up saying, no, police is meant to contain, constrict,
(15:11):
to limit.
But if done right, it is not thinking that way. A police officer likes to be in an area going, I'm creating the ability for this neighborhood to be the best neighborhood it could be.
And neighbors feel safe and know if anything ever happens. That's what security teams kind of sometimes lose.
There's this, we're in charge of things, we can now restrict.
(15:32):
We don't think of ourselves as enabling folks safely.
It is more of a, oh, we're policing things.
And that's where I think a lot of security teams, it depends on their mentorship, how their leadership is.
That's why the leadership and the CISO is such a critical hiring decision, to understand that
(15:52):
is, that's why they're now a C-level position and not just, like, three orgs down. The CISO needs to return to: like the CTO enables the technology, the CISO enables the business in a safe manner.
And I think that, again, a good analogy: you call balls and strikes.
I mean, if you, if you get pulled over by a cop, they tell you you're doing 120, this Canadian, in a, in a 100 zone, or you know,
(16:14):
whatever that comes to in miles.
Exactly.
Kilometers per hour.
I'm like, those Canadians, they're outta control.
I know, yeah, I know.
I had an American friend go, it says 100.
That's your speed limit.
Exactly.
No, but, but the, the, if you just call it as you see it.
(16:35):
Yeah.
This is, this is, you're doing 120 in a hundred zone, or, you know, whatever.
You are exceeding the limit.
That's it.
Now, what the business does about that, I, I think in many cases people may get frustrated, but we're just telling them, this is it.
We're calling it like we see it.
Yeah, I think that's a, a really good analogy.
You don't have to be a judge.
And I think, you know, you don't have to be moral about it.
(16:58):
We're not, we're not trying to make people better people.
We're trying to make them into people who understand that there are ways in which we have to operate as a business.
Mm-hmm.
Good, good idea.
But so how do, how, where does that fit with AI?
Why is, why is AI governance such a problem?
Where do I begin?
Because it, it could be like a 20-minute, like, Rob-on-a-soapbox thing.
So I'll cut this into pieces.
(17:19):
AI comes in and, and I'll just, let me start from my own vulnerable concept here.
So I've been working in this field for, let's say, before AI entered, 23 years.
And when I started, there were no
(17:40):
books.
I was assigned to the first information operations, information warfare is what they called it back then, unit.
And they basically hired, you know, what they thought was, you know, maybe they still do, you know, 30 extremely talented individuals to say, hey, go defend Ninth Air Force air force bases from hackers.
This is like a year after DEF CON started; you have to go back to these early stages.
(18:04):
There's really no SANS, there's no training, there's nothing else.
So I'm thinking everyone else in this room is, like, a 10-year expert.
They know all this stuff.
No one knew. We're staring at this new technology.
Windows 95 was a year old.
No one knew what they were doing, and we're all trying to figure this stuff out.
The reason I mention this is that the learning curve that I
(18:24):
was on at that point, it's not that we knew what the target was, like all I have to do is get from here to here.
We didn't even know what to do.
We didn't know how to go from couch potato.
We didn't even know we were going to be training to run a marathon.
The marathon didn't exist at that point.
The 10K did not exist.
We just knew, hey, we needed to move off of the couch, and we had to figure this stuff out.
(18:46):
Not just us, but multiple different groups around the planet.
And I kind of look at this as a very similar analogy: anyone who would get up on stage and say, here's how you secure AI, they are guessing as much as I was guessing, and I was trying not to let anyone else in the room know.
(19:06):
So, fast forward to today. Two years ago, AI comes out and I'm looking at this like, oh, technology.
It's just like another piece of technology.
All I have to do is stare at it, figure it out, tinker with it.
I'm like, okay, I get it.
It's just like this, but different.
This is as different as Gandalf sitting in the court of any random king that's out there. He's the wizard, comes in.
(19:26):
I know magic.
I've been doing magic for 25 years.
I have mastered magic.
And again, for me, I have mastered digital forensics.
I have all these skills.
I look back on it, I'm like, wow, I can speak with authority on these things.
Suddenly someone, da Vinci, comes in the room and says, I have science.
Gandalf looks at that and says, oh, that's very similar to magic.
(19:48):
It does things that people who are affected by it don't quite understand.
Very similarly, it looks similar; there's a, hey, that kind of looks like magic, and science does cool things. But Gandalf looks at it and says, okay, I'm gonna go attempt science, because I'm really smart and I'm really good at magic.
And there's no adoption, there's no true understanding.
(20:10):
You have to go back to the beginning and say, how do I read again, in kindergarten?
So how do I potentially move my skills forward?
From a personal aspect, people turned to me immediately saying, Rob, how do we secure AI?
And it took me a while to get back to this vulnerable position, because I was really trying to maintain my authority and expertise on this stuff.
(20:31):
And I'd come back and say, I am learning right now.
And I thought everyone else was more... very similar to what happened 25 years ago, I just thought more people had this. And about a year in, I realized no one really understood what was going on.
So I had a theory, and I tested the theory first at OWASP SnowFROC in Denver.
(20:53):
I had this presentation today, and I'm still giving it, and I have people in the audience feel like I'm piercing their head directly, like a Vulcan mind meld, because I ask 'em a question.
I go through how significant this technology is. We have experts who are able to talk about the effects of this.
(21:14):
You're able to talk about the legal ramifications of this, you're able to be in the room with leaders, to be able to speak intelligently about this, because your role in this is so significant.
People's lives depend on you getting this right.
Your business's existence depends on getting this right.
Next slide.
How many of you... and I said, first, I don't get to that slide first.
(21:35):
I want everyone to close your eyes and just sit in the moment.
Do not peek.
I'm gonna ask a question, and based on that question, just like I was vulnerable to you, I raised my hand on this too.
It's like, I'm learning, I'm struggling.
I do not know the answers in AI.
But then I turned it on them, and the question I asked was, how many of you are faking it with your current knowledge about AI and ML technologies?
(22:00):
80% of the 300 people in the room, 80% of the hands went up.
And in the back of my mind, I was like, okay, this is more telling than anything else I could have said.
I said, now open your eyes and look around,
and you could feel the pin drop in the room.
And they're like, oh, good lord.
That's the issue with AI: the businesses are turning to security teams.
(22:25):
And they're saying, hey, we've hired you.
You are a very expensive person.
You should know technology. Well, they're not even admitting they have not mastered this technology yet.
But the business is asking 'em, what do we do?
And this is where I call it a crisis.
And again, it feels like I'm pointing fingers: a crisis of competency.
And you can put quotes around that, where we have a security team that
(22:47):
doesn't really understand the technology.
So their initial gut reaction to solve this issue is to say, what YouTube videos, what training, what are the frameworks?
What is the applicable thing?
There's gotta be some documentation.
They come to SANS and say, SANS, train us.
And we're struggling with this too.
It's like, what are the things we need to do to potentially help people secure their organizations?
Everyone's kind of looking at each other like it was 1996 again.
(23:11):
There are no books, there is no training.
We are asking you to get off the couch.
But the business is asking, going back to the governance and everything else: we need to innovate to remain competitive, we need to make sure we're here.
You need to learn AI.
And by the way, security teams, you need to figure out governance.
You need to figure out how to use it, and you need to figure out
(23:31):
simultaneously how to protect everyone.
And they're asking Gandalf, who's used to being seen as the magician of the court for 25 years, to look at science and speak intelligently about it without having done the homework yet, to go through those learning cycles as to what does this stuff mean.
And there are no books that have figured all this stuff out.
So this is,
(23:52):
this is the issue.
This is the thing that I wake up to.
This is the truth-to-power type beat.
No, no, I think it's, I, it's fair.
I would've flipped the analogy, because I think, I think we went from science to magic. Because back in the early days, when we were all trying to figure this out, you could go back: this is a binary instrument.
It's a yes or a no.
It's a one or a zero somewhere back in there.
(24:12):
Whether you're talking in hex, whether you're talking whatever, I can come back to something and I can replicate it. For the first time in a technology, I have a technology that I cannot understand.
People talk about quantum. The best quantum physicists say, if you say you understand quantum physics, you're not a quantum physicist.
Right?
And I feel the same way about AI.
(24:33):
If you say you understand what's happening there, you don't understand it.
Well, business is... first of all, an AI will never give you the same answer twice.
And the businesses, the executives, don't know the difference between science and magic.
To them it just looks similar.
Hey, this new tech stack, this is what we've always done: cloud, you see all these iterations.
Gandalf, what do we do?
(24:54):
And Gandalf is used to being the one to answer those questions.
And now he can't.
And this is, again, what's really interesting here: SANS set me up as Chief AI Officer. It's like, Rob, you have all the answers, everything.
The reason I'm here, I'll just be quite honest, is because I'm expressing vulnerability about my own lack of knowledge.
And I'm encouraging everyone out there, like, you need to be doing what I'm doing.
And I have a mantra now that I say: in order to be healthy today,
(25:18):
you need to get proper sleep.
You need to eat a proper diet.
If you don't sleep, it doesn't matter what you eat; it doesn't, you know, you get nothing out of that.
You have to sleep, eat well, exercise every day. You do your workout, pushups, whatever you do, walk for 30 minutes.
You have to do AI for 30 minutes.
It is going back to the basics, and this is what I do now. Within 30 days, it looks like, wow, you have started to master this.
(25:41):
You've gotten in shape.
We could see, like, you know, there's a massive change in you. And that's true, because anyone who's not doing the work falls further and further behind, even on this stuff. And I'm looking at it from, just, how do you use it?
Let me look at how to protect it.
What are the things I could do?
But I chip away at it on a daily basis, and I'm expressing,
(26:02):
like, okay, I'm, I'm now nascent.
I'm now the newbie.
And I have to remember, even though I've been doing this for a long time, I have to put myself back like I'm 20 years old again.
I have to learn and start with the raw math, then use the calculator.
Then you're building up your skills.
30 days into this process, you walk out saying, wow, you actually know stuff.
(26:24):
You're the expert, Rob, you know, you're put on these podcasts.
You're, you know, super smart on this stuff.
I'm like, no, I'm just doing the work.
I need you to do the work.
I need the entire community to do the work.
Because if we're all not doing that work... what we're tasked to do is defending people's cybersecurity.
Because if what we do matters, we're saving lives.
I need people to understand this.
Like, I am depending on you, and, you know, Mr. Water
(26:45):
Plant cybersecurity team, you have to do this work too.
And we all need to do this as a community, realizing we're in the foxhole together, looking at each other, both scared, both trying to figure this out.
But you look at each other in the eye, like you and I are doing right now, saying, oh, there is no one coming to save us.
We have to look at each other like, all right, accountability partner.
(27:08):
When I'm going through this on a daily basis, I will show you cool things.
You show me cool things, and we'll now learn together.
What do you mean by that?
And again, this is where, I don't know, I've landed on certain podcasts, like the AI Daily Brief.
I've listened to this: what is going on across the industry. You learn about the AI Action Plan from the government.
You end up getting exposed: hey, Claude has a new capability.
(27:30):
And I'll go, oh, what is that?
I'll go on Claude and play around with it.
I start to integrate workflows. Instead of, okay, I'm gonna figure it out, I might read about something.
Well, let me go see if I could just play with the thing and see what the impact is.
And maybe even something related to my kids.
My kids are 13-year-old twins, about to turn 14.
At the school it's currently called CheatGPT, which is, obvious reasons
(27:52):
why, but I'm trying to look at it from, how are my kids getting educated?
So I can't do cybersecurity principles. I go into, how do you use this creativity that this technology enables?
I was on a panel at the Nasdaq, talking to boards, a room full of people that are on boards.
And on the panel everyone's like, we need your AI expert, that's the one that's gonna inform the board.
(28:13):
And I said, I was that person. Like, I totally disagree with that assessment.
And everyone's like, whoa, no, this is brand new.
It's so hard.
We need the new Gandalf that's gonna inform the board.
I said, no, every single one of you needs to learn this.
And I used my iPhone.
I took a picture. I said, I'm gonna be talking while I'm doing this.
Take a picture.
AI, change this picture into making it look like K-pop Demon Hunters
(28:36):
or whatever the current thing is.
And while we're sitting there talking about this, I have the picture come out, and I said, hey, that looks like a magic trick. But from an executive perspective, you might be doing this and saying, oh geez, I'm doing the level of graphic designers with my own hand.
I see we have a hiring need for three graphic designers
(28:58):
in our marketing department.
I'm gonna ask the dumb question, because I'm an executive, I'm just learning the technology.
But how many of our current team are using these newer technologies to increase their capabilities for graphic design, so we don't need to invest?
I said, that is why you as an executive need to understand this.
Not at a DNA level:
how do the models work,
(29:18):
which Hugging Face thing are you using?
No, you need to be the person that's sitting in, like the people that came up with Netflix streaming: so what should we do?
We should stream this, because there's this thing called the internet.
I guarantee the person who came up with that idea and whiteboarded it didn't know how TCP/IP worked.
They just connected the dots, saying, hey, I'm doing this thing on the web.
(29:39):
Bandwidth is increasing.
I'm watching these videos that people are distributing over here.
Can we do movies this way?
Yeah.
And this idea, it's like, same thing with wifi in Starbucks.
None of those people were technologists, but they had a fundamental understanding of the technological shift that was going on, to ask new business questions and to drive innovation.
(30:00):
So they start seeing these technologies.
I said, if you outsource that,you and your business are at risk.
You need to be the person and sittingin 1998 starting use email, the
browser hitting websites saying, huh.
But if you're being asked to change your business in the middle of the introduction of the internet, and you've never used a web browser or
(30:21):
email, guess what: there's no way you're gonna be able to do business strategy. And if you're leaning on an AI expert or internet expert, hand them the reins of the company at that point and step down, because you are effectively not useful anymore.
The board and executives need to put in the work too.
(30:41):
Start playing with it.
Everyone needs to start playing with it.
And that's the hard part.
I think that's the essence of it, and it would be the same advice I would give to people. Now, people say, oh, well I've gotta jump on board with AI. You don't have to jump on board with AI and integrate it with your business or get rid of 4,000 people.
You need to start playing with it.
I would maintain that if executives aren't trying this stuff out,
(31:07):
you really are going to miss the understanding of what it's like at all levels in the organization.
People need to understandthe possibilities.
That's the phase we're in.
And I think people miss that. They're rushing to say, I gotta get this implemented, I gotta save some money. That's a really dumb strategy.
So you wanna see my next magic trick?
(31:28):
Absolutely.
Okay.
So where we started was governance, and we just went down to how important AI is from the business side. And we talked about earlier that the business needs to remain innovative, and governance is an enabler.
So organizations are starting to realize, wow, this is going to be a needle mover for us. We could save on costs. We're not gonna fire people.
(31:48):
We haven't seen the data showing people being laid off en masse; some will see switching and replacement.
Yes.
But organizations are gonna say, hey, use the people we have, get them more AI-enabled. We don't know what that looks like. That's the other thing executives need to come to terms with: we don't know what that is. We just need a lot of people playing with things and creating ROI.
(32:09):
So they turn to the security team and say, hey, how do we do that? We wanna lean forward, but we wanna reduce risk. The security team is used to being able to apply a framework to do an 80% reduction of risk on whatever that is.
But what the security teams need to lean into is saying, oh, we should probably do things that are a 10% reduction of risk. Because the Gandalf syndrome kicks in over here: hey, you've been seen as a 25-year
(32:35):
expert in this stuff, what should we do? You're trying to apply a framework that is gonna completely reduce risk.
It's the same issue legal teams run into: we're gonna start doing business in Timbuktu, except we can't read their legal stuff because it's only word of mouth. We don't know what business impacts and issues we're gonna have, what liability
(32:56):
issues we're gonna have over there. So I don't think we should do business in Timbuktu.
No.
The other thing is that security teams look at this independently, with only one framework to apply to reduce AI risk for an organization. And everyone's like, what framework, Rob? What is the thing that everyone's using? I say, oh, it's a very simple framework.
(33:17):
Everyone's heard of it.
The framework of NO. Period.
They basically come in there and say, okay, we don't understand Claude or ChatGPT or Datarails or Perplexity.
Someone may do something bad,we don't know what the risk is.
We need to evaluate every tool.
They form a committee, they come in there, and someone says, I would like to use Claude.
(33:41):
How do I do that?
No, we don't know. Give us until maybe Q3; we might have an answer for you. Maybe Q4.
Months go by, and then personnel ask, hey, what happened to that tool I asked about? Silence. They're like, well, we said no, don't use it. We're still evaluating.
And then meanwhile they're thinking, how do we evaluate this? We have no idea what's going on.
There are a lot of teams out there that are actually fully capable and have
(34:03):
implemented this stuff very well. But the exact problem security was trying to avoid has now been created by security: shadow AI, as a result of legal and security defaulting to no. Everyone's going to use it anyways, because they're worried about their own jobs. They're worried about, hey, I've been told to do this. There's also fascination and curiosity; all these things add together.
(34:25):
Guess what?
The framework of NO has created the exact security issue that everyone's actually worried about in that room.
Shadow AI. That's it.
It's worse: they're hiding it, and no one knows it. Four weeks ago someone said, hey, I'll upload the financial spreadsheets and send them over to another financial analyst with a shared link. Oops. That accidentally shares the entire chat in a Google-searchable
(34:47):
URL. Oops, we had no idea.
But how would you figure that out?
And I would ask, I've asked my parents, okay, let's assume you're evaluating Grok. How would you have found that? You would just be like, oh, and all of a sudden you're staring at this. Okay, do we just ban Grok now?
This is, again, a knee-jerk reaction, and in reality it's never worked.
I still remember the early days when people were talking about
(35:09):
mobility things, and we'd say, well, we've gotta have our data secured. And then you go, wait a minute, they're having a board meeting and they've all got iPads. Anybody looked at the security on those? No. Anybody think that the documents our board has might be, I don't know, confidential?
Oh, yeah.
Geez.
So the fact is, if you don't get in there and try and cope with it, people
(35:32):
are gonna hide it from you. And yeah, the biggest people hiding it from you are the executives and the management, for the most part.
But right now the stats say that 40, 50, 60% of people, depending on what research you're looking at, are using AI and not admitting it.
Yeah.
And this is where, I have all these threads, I've realized I need to write a paper on this, because
(35:54):
what is the biggest ROI that has led to innovation in the company?
Shadow AI. Thank God it exists. It's not the things companies are implementing as sunlit projects. Not all of them, but most: the MIT report highlighted that 90% of organizations had not figured out ROI.
(36:16):
But about that report, I actually did write a Substack article. I called it "MIT Report: Masterclass of Missing the Point." I was out for a walk, I said, they're missing the point, and all of a sudden I landed on that title.
They focused in on the ROI of sunlit projects.
But what it also stated in there is that the biggest ROI was coming from shadow
(36:37):
AI, which 90% of the employees are using anyways, even though only 40% are supposed to be using the official AI capabilities that the company has approved.
Hey, so shadow AI is the actual thing that's driving innovation in the company. Shouldn't we be focusing in on how to encourage that? And cybersecurity teams need to figure out a way
(37:00):
to say, okay, we need this.
Like you said, the playing-around thing, like I'm telling executives to do too.
How do we move that from shadow AI into sunlight AI, and how do we put enough of a wrapper around it that the kids are able to walk onto the playground? Saying, hey, listen, I cannot fully protect you. I'm gonna ask you to put on a helmet, and
(37:21):
bad things may still happen to you. We could reduce risk by 10%.
Now you pass that back to the executives, saying, all I could do is put a helmet on you. Are we okay with that? And then hand that decision back to the executive, saying, this is the best I could do with my knowledge. No one else knows it better. It's the Gandalf syndrome issue.
(37:42):
You need to be upfront about being vulnerable, about the organization being vulnerable, but the executives need to do their job now: make hard decisions that could risk the entire organization.
If they're not leaning forward hard enough into this new technological innovation, they reduce competitiveness inside their own space, whatever they're doing.
(38:04):
And on the opposite side, if we don't have enough risk reduction, we also risk the business, because bad things can happen there too. So you're walking a tightrope as an executive, more so than anything before.
And you don't have six months to think about this. No, you actually have two weeks to consider this: DeepSeek came out with a model that's 10x cheaper, or whatever your IT team comes in with.
(38:27):
We could save a bunch of costs; look at all these power consumption costs. We're gonna be able to do X, Y, and Z, 10x savings. When you're spending a hundred million dollars, that is sizable. And if you're saying, well, we're gonna spend six months staring at whether DeepSeek is the model we should now be using?
Whereas Amazon, Microsoft, GitHub: they all implemented the DeepSeek model within five days.
(38:49):
Critical business decisions.
Leaning forward, obviously saying, security, we can't have that hold us back, because of the cost.
You're now the executive.
Do your job.
Make the hard decision.
Don't just wrap yourself in legal and security to say no. They just need to come in and say, probably we can't secure it as much as you'd hope.
(39:10):
We need you now to make a really difficult decision.
Innovate or reduce risk.
It can't be both.
You need to balance.
Be smart. You need to lead the team.
Just like, you know, if you're a military commander in Ukraine: all of a sudden, I was never trained to deal with drones flying overhead. It's not like you go to six months of training and do drone combat defensive
(39:31):
operations. You're dealing with the thing that's in front of you. You need to make decisions within minutes or hours over this new technology, not deal with it through some sort of training class that may or may not exist.
That's what business leaders are doing in the US and around the world. You can't take the time to fully understand it. You just need to see what's in front of you.
(39:52):
Make hard decisions based off intelligence, with everyone else feeding you information. You have to be that military commander. You have to make hard decisions: own your job or step down.
So let's back up a little bit here.
'Cause I think what we sort of said is you need to play, you need to learn.
Yes.
And you need to get people out there to play. But before they play, make sure they're wearing a helmet.
Make sure they know the rules of the road.
(40:14):
Yes.
Always.
And you know, that's why I like the analogy of play. When I learned to drive, my dad took me to a big parking lot on a Sunday.
Yes.
In those days, there was nobody in a parking lot. I could drive around freely; you could start to get the feel of the car.
So, yeah, 'cause this thing's moving fast. But as a person who's actively involved in education and learning,
(40:36):
where are the resources? What are the things they should be looking at to help them learn this?
Well, obviously SANS. One of the reasons I'm really glad to be Chief of Research is that my focus is on AI, so I've pulled these two things together, pulling in a small number of experts, and I call 'em experts while they're
(40:59):
fully vulnerable, saying, hey, we're learning too, and pulling together resources of what is currently known.
We have enough understanding of how people are using AI in their day jobs. So there are massive parts of our courses that are now teaching to that, and the same thing on how do we protect it. But again, the protect side is what we currently know, and we have to iterate
(41:20):
those protect classes as fast as possible.
On the governance side, you know, how do the EU AI Act, GDPR, and DORA all feed into AI implementations? That builds your understanding of the risk, so the business leaders can make these decisions.
We have these summits that we do at SANS. And Alan Paller, our founder, who's no longer with us, he started the first
(41:44):
ICS summit that said, hey, listen, we need to protect critical infrastructure. It's not like that was solved. He said, what are we gonna do about this?
Let's have a summit.
Bring in people who are kind of knowledgeable about this. Mike Assante, Tim Conway, eventually Robert M. Lee, to be able to have these core discussions: what are you guys doing? What is everyone doing? Have hard talks, have people talk about it.
(42:05):
But again, iterate, then iterate, then iterate.
It's research.
It is coming together.
It is community forming.
It is, let's solve this together.
That is the thing that SANS has stood on for over 25 years, since I came in. They asked me, Rob, show us what you know in incident response. It's not like, well, I wrote the book, here's exactly how I started. It wasn't educational academics; it was, Rob, what works?
(42:27):
And I would give a talk, and it's like, okay, can you do that for three hours now? Do you have more material you could teach, tricks and things that you know would help people?
Like, well, here's what I figured out.
That's why it's the practitioner coming in and doing this. They're sharing what they know, but that is not the final solution. They're showing, here's what I currently know.
I did the work to be able to get here, and I'm more than happy to bring others along
(42:51):
with me 'cause we're all in this together.
It's like, hey, did you know?
Oh, I never even considered that.
And you go try it yourself.
You're like, oh cool.
That's amazing.
A little technique, like for cooking.
This is why SANS is a resource.
You go on our website, sans.org/ai, and we dropped everything in there. We currently have the AI Critical Controls, which was a consensus paper of 50 different individuals.
We have a partnership with OWASP and the OWASP AI Exchange on how to technically
(43:16):
implement a lot of those controls: working together, community forming, experts coming together, implementing what our practitioners are finding right now.
And it's still early days.
It is not like these are all the solutions. It is, here's our current knowledge, and we're basically three inches into what is still an ocean we're staring at.
We started a group called Project Synapse.
(43:37):
We started having coffee Friday mornings to talk about AI and what was happening.
Yeah.
It turned into one of our most popular shows now. And it started out as three guys who were just sitting around, saying, we're gonna figure this out.
And I think there's a lot to be said for that. Having access to some of the material is great. You need to start reading again. But you've also gotta talk to people and imagine things.
(43:59):
I mean, when I wrote my book, I wanted to explore what the world was gonna be like living with AI, and there's lots of great fiction out there. You can also start to use your imagination. That's something we don't talk about in business, you know: our imagination, our ability to play, and our ability to learn.
I think those are really important.
(44:20):
I think it sounds like we are really starting to figure this out. Rob, you're speaking from authority on this.
I'd say what I'm speaking from authority on is learning journeys.
How you become an expert, how you develop passion, and how you admit your own vulnerability. So one of the reasons I consider myself a leader in the field on this stuff now, in AI, is not because I have all the problems solved; it's because
(44:43):
I'm leading everyone to get in shape again, in a new form.
I have to do the work every day.
I have to wake myself up and say, you have to do your AI homework.
What is the AI homework?
I don't know.
I'm gonna go online.
How do you figure out what to learn? I said, great question. You know what I spend a lot of time doing? My entire feed on TikTok is AI, people talking about AI things. And I'm looking at different ones: oh, that's interesting.
(45:06):
And I pause, it leads to a YouTube video, and I try to replicate what they're doing. And then I'm like, okay, cool.
But my vulnerability is, listen folks, I don't have all the answers. I cannot go down into every single thing on how to potentially prevent attack A or B. I haven't even done most of those things. I'm looking at it from the aspect of, I need to be able to lead the industry.
(45:27):
The only way I could do that is by starting to get off the couch and investing the time in microlearning environments, to be able to get to the point of saying, hey SANS, we need to invest in things over here because I think there's something here. If you're not willing as an organization, as an individual, even as an executive board, to do that and just admit your own vulnerability, that's what I think
(45:49):
is very important for us.
Yeah.
Even for you: this is the Thanksgiving discussion that's about to happen. Everyone's gonna be talking about what they're doing. Are you using AI? What do you think? Is it a job killer?
I honestly don't think it's gonna be the politics going on in the country around Thanksgiving this year. I honestly think and predict people will be talking about AI and, is it scary?
(46:10):
Oh, please God, be right.
It's gotta be.
Oh, I know.
yeah.
Well, I'm being invited in to talk to groups, and I find this fascinating.
Yeah.
There's a small municipality just two hours from me that wants me to come in and talk to their business community.
Yeah.
About the potential of AI and what they can do with it.
One of the things I wanna circle back to, 'cause it is an
(46:31):
interesting concept, and it's something that I think we need to learn.
And it's difficult.
We've had a real concept of leadership that says, I know everything. I'm not gonna make any mistakes. I'm on top of all this stuff.
Yes.
I've got bright people that work for me. I think I was close to 50 years old before I realized I don't have to know everything.
(46:52):
Yeah.
I don't have to say I know everything.
And I think, you've talked about this vulnerability. I think it's an important piece to say, wait a minute, I don't know everything about this, so we're gonna talk about it.
We're gonna talk about it in detail.
I think that's a new mindset for executives in particular, and maybe for CISOs, because everybody says, oh, you've gotta be on top of all this.
(47:13):
No, I don't.
This world is moving. We've never seen technology adopted this quickly.
I mean, when you take a look at the telephone, how long did the telephone take until it became a business?
Yeah.
When you take a look at computers, how long did computers really take before they were implemented? The web was
(47:35):
invented in the eighties, didn't really touch down till the 1990s, and took about 10 or 15 years to be put into use. We have now compressed this into weeks and months.
And we're moving at a speed we've never moved at before.
If you honestly believe you're gonna convince people that you know everything, good luck. You have to be able to admit you're learning.
(47:58):
If you are in charge of Ukraine's defense and you see drones overhead, you'll turn to your smartest people in the room: what do you know about drones? Now, if they come out with, here's what I know, while they're sitting on that ledge, I guarantee that defensive commander is like, okay, seriously, can you tell me you actually know what's going on here?
(48:18):
Have you ever flown a drone?
No.
Do you understand the mechanics?
Have you used this in combat, defense or offense?
No.
No, no, no, no.
Someone find me anyone who's touched a drone in their life. And of course someone comes in, there's a hobbyist saying, I did it. Then all of a sudden that person is asked, what do you need for these things? And they start saying, okay, we could do this too.
(48:39):
It was some 20-year-old who flew a drone with a camera around a house who became the military drone commander. I'm guessing here, but none of those generals in that room knew how to do this kind of combat. They relied on someone who had been flying a drone around a home to sell that house.
The same thing occurs when you take a look at, you know, Billy
(48:59):
Mitchell and the first Aero Squadron, way back. He flew the plane. He's up there like, okay, cool. We need a lot of these, we're gonna form squadrons, and we're gonna do damage to everything.
So they built a lot of planes, and everyone's staring at the planes like, okay, do we have pilots that have graduated pilot training at this point to fly these things?
There is no pilot training.
No one knows how to fly these things.
(49:20):
The only thing was, we had someone slightly smarter by a week, maybe, getting in that cockpit with you and saying, okay, here's what I know: up, down, left, right. We will figure it out, right? And then all of a sudden you have 40 people that are a week old at flying planes, in combat, never having done it before, with no pilot training.
(49:42):
Now of course, all this stuff becomes formalized and everything behind the scenes, but you need that person that's a week older than you, just a week, maybe a month.
That's okay. Here's a couple of things that I have done. These are called the AI champions in your organization. Now, for my AI champion: everyone needs a Yoda.
(50:04):
So my Yoda is a really good friend of mine, Kate Marshall. She worked with me at SANS, led the AI Summit with me. She's about six months to a year, probably more, ahead in implementing AI things. Now, she's not doing it from cybersecurity. She's just doing normal workflows and everything.
But now Kate, ironically enough, is running a business specifically focused on working with
(50:24):
people on getting them up to AI speed.
She's my Yoda. I'll go to her and say, what about this? I need to talk through this. So it's not like I'm completely alone. I'm just talking to someone who jumped in the aircraft maybe a week earlier than me. And anyone on TikTok?
They're flying through there and they're talking about AI things. I guarantee they just looked at that thing three days before and said,
(50:46):
oh, I'm gonna do a video about this.
'cause then it'll look like I'm an expert in this, and it'll look like they've been doing it for a year.
But if you actually talk to them and ask, when did you learn that, before you filmed it? Oh, three days ago, four days ago.
So how are you speaking with authority?
It's like, I'm not, I'm just showing you what I did. And me, I've been doing cybersecurity for 25 years, and I'm fully
(51:07):
willing to say, hey, listen, just because it looks like I'm more of an expert than you, I've only been doing this a week.
You could easily catch up to me and we could be working on this together. That's my call to action for everyone in the industry.
You're asked to do governance, you're asked to utilize it, and you're asked to protect things with it. You're juggling three balls simultaneously. It's intimidating, and you feel like you're being left behind.
(51:29):
Go back into vulnerability mode, which is: just tell people, okay, I just started learning.
Well, what are you doing?
Which learning class?
I'm not taking one. I just jumped in the plane and started: if I do this, it does this.
By the way, did you see what I did yesterday? If I pull back on the stick with enough speed, I was actually able to get the plane to do this thing. We will just call it a loop.
(51:49):
Who did the loop for the first time?
There was no training for that.
Who's the first person that did a roll and then said, this is a brilliant idea, without crashing the plane? It's not like that was part of the schematic the plane was built with. All of a sudden they played with it enough, and then it became a thing that was trained.
We need you to do the loops, the rolls.
(52:09):
Hey, let's add a bomb to this.
Let's add a machine gun to the front of this, and by the way, we need to shoot through the propellers.
Someone has to figure this out.
Someone has to be that 20-year-old drone operator who became the critical node in Ukraine's defense and offense. And we saw what they were able to do just recently.
I don't think we wanna arm cybersecurity people.
(52:30):
Well, I mean, the analogy sticks. But if we take this explore, learn, be vulnerable, stay ahead, find an advisor approach, how do we deal with our corporate world at the same time? Because we're faced with real things: people are attacking us, people are doing things.
(52:51):
We're back in the war room now, you know, in our corporate world. What should people, CISOs, or cybersecurity people be doing in their day to day that would help them to understand and manage AI?
You need to move from a framework of NO to sunlight AI.
You need to figure out a way to do a 55/45 percentage split on yes
(53:16):
versus no, defaulting the answer to yes: how can we make it happen?
Find those who are leaning forward into playing with AI to become AI champions in every part of the organization, so you could talk to them about how they're using the technologies. 'Cause once you enable them a little bit, it's yes, yes, yes.
It's not a blind yes. It's, okay, let's do an experiment.
(53:37):
A monitored one. Can we sit down next to you and ask questions while you're doing it? We'll have a different perspective. But it's finding the 20-year-olds that are using it.
Maybe a 40-year-old, it doesn't really matter. Someone who's flying their own personal drone around with a camera attached to it.
It's like, Hey, look what I did.
And you sit down with them and say, okay, let me see. Watch what you're doing, can you show me, and can I start doing it?
(53:59):
That champion in finance and product and HR is going to be utilizing it in such a different way. You can't centralize it.
You can't centralize it.
You have to see how they're using it and then ask them questions like, how do you know you're not exposing things? And they say, oh, it's easy. And then they're like, oh.
Then you write that down.
You solve it for your business.
But the only way you're gonna do this is by learning and experimentation:
(54:19):
move to a framework of yes, sunlight AI, and enablement. And you need to find those initial people flying the plane.
And it's not only business enablement.
By doing this, security teams could watch, monitor, and potentially assess.
And it's like, okay, they're using it now.
If they're using this, how can I protect what they're doing, so they're allowed to continue doing it?
(54:40):
You have to learn by doing. The only way you could enable that is potentially by monitoring, and it's going to take effort. There are no books. SANS is leading in this space, and we show competency; it looks like we have all of this.
We're pulling in people, like Alan Paller did with ICS, saying, how as a community do we help here? And myself as a leader, it's going back to the basic principles
(55:02):
of what happened in 1995, a year after DEF CON, when this was so brand new.
The ping of death was a thing, and everyone's like, whoa, magic trick. I could kill Windows machines with a ping of death.
But that wasn't the real concern. We were dealing with, you know, Israeli hackers sitting in California, Solar Sunrise and Moonlight Maze and all
(55:23):
these other major events that started to occur, and no one knew what they were doing, and everyone had to come together.
What happened in the military back then is they formed a unit, Joint Task Force-Computer Network Defense. Everyone in there was like, we don't have the book on this. We don't know what we're doing. That unit became Cyber Command four years later, and it's like the gestation
(55:45):
of a bunch of people who are smart, who don't know what they're doing.
Then they get training.
Then they, you know, teach capability, like we did at SANS, like I did at SANS without really knowing that what I was starting to build was a curriculum and an academic capability that trains people to replicate it. Not just say, hey, here's what I did last night. Isn't this cool?
(56:05):
It takes us coming together. So, you know, core question: what do you do? Enable AI in a way that you could deploy your team, maybe assign security personnel per team to sit with the AI champion: hey, you're my battle buddy now. I'll watch what you're doing. I'll ask you questions.
(56:26):
But if you go back to the typical model of centralization and then deployment, you will fail. You need to sit next to people who are doing the experiments, who are playing. You probably need the cybersecurity people to watch them and replicate what they're doing to get you thinking, but you also need someone on your team playing, figuring
(56:46):
out how do you do AI with cybersecurity?
The protect people need to look at how people are using AI in their function in order to protect them. And it's different in finance than it is in HR, than it is in product. So it's a great idea, but it's hard, and it takes a redeployment of security resources.
Yeah.
As an example, when I was CIO of my last company, before I gave it up to become a
(57:07):
podcaster and make all this money, I would sit down with people and say, if you wanna try this out, don't use our customer data. I can get you a data set to play with. And I think that's a really great idea: if you're out there and engaged with people, you can help them explore without risking the company.
(57:29):
Yes.
Because I think that's been a big thing. Left to their own devices, I can guarantee you they won't think about these things.
Oh, well, yeah.
It's like someone sitting by the playground: you could watch the group a little bit, but assign security personnel per team to watch the playing on the playground and then say, oh, maybe we need to put some sort of padding underneath that thing.
(57:50):
AI is gonna require a restructuring of your security team's personnel, maybe even hiring folks that will be sitting side by side in every single one of those units. Even in the military, they have a safety officer in every single unit. It's not like a single safety officer in the Air Force or Army says, here's our policy, universally.
(58:10):
They need to see what that unit's doing to implement the safety protocols for that specific unit.
Okay.
And as we wrap up, I'm gonna do what I think a lot of cybersecurity professionals would do when they're hearing this conversation, going, you guys are a little, you know.
Oh no, totally.
You're a little airy-fairy here, guys.
okay.
Now I gotta face the reality.
(58:31):
I got threats that are coming at me.
I've got prompt injection, I've got all this stuff happening in my real world. You guys are having a nice Saturday morning chat. What am I supposed to do practically to survive?
What would you say to them?
Realize that the only way to learn is by joining the community. Attend.
(58:52):
And I'll say, we're doing summits at SANS. I have an AI summit happening in April, but we have over 50 hours of AI summit material from the past three years. We did our first AI summit six months after ChatGPT was released.
Ton of material out there.
You could go absorb that.
And you can even use AI on it: because they're just YouTube videos, you could load them all into a NotebookLM and ask it questions,
(59:15):
basically tapping the brainpower of experts from around the world to ask these questions to. That is one way.
You have to engage the community: jump in and join the OWASP AI Exchange. No one's gonna say, hey, you're writing the policy for the EU AI Act, but they had to start somewhere too.
And you sit around the smart people and find a Yoda. Not just a person that is like,
(59:37):
hey, I have it down for cybersecurity; it's probably even better to find someone to be your Yoda who's just out there in the field doing cool AI things, because a trick they figured out for product may be a trick and technique that you could immediately apply to a cybersecurity process and workflow you are working with.
So it's a combination ofsomeone's going to save you.
(59:58):
here's your policy guideline, downloadtemplate that you do, and here's A,
B, and C that you do in this regard.
One, join the community. Two, start tapping into resources and pulling people in with you that are saying, Hey, we're learning this. SANS has a massive amount of resources and things you could digest. We have the critical AI controls, we have the blueprint, which is
(01:00:19):
something I came up with: those three pillars, govern, utilize, protect. Because organizations literally come in saying, we need AI security. And I ask our team, well, what kind? And they're like, no, I was just told we need AI security. But when you dig in, it's really AI governance they're seeking.
Well, that's a single set of classes that we have that basically
(01:00:40):
will say, here's what we currently know about the way people are interpreting DORA as it applies to AI. The classes that we do have, like in security operations, those might include a full day of material integrating AI workflows, what we currently know. We have a brand new class that's out there. Literally the entire class is focused on workflows. So some of this stuff is current knowledge, what we're currently teaching, but we don't have it all solved.
(01:01:01):
Then there's the protect side. So all three pillars are ways you potentially figure out which pillar you need to sit in to become the expert in. Probably not all three. Be the protect person, be the utilize person, or be the governance person, and then tell the organization, I'm gonna learn this better than anyone else. And I'm gonna go join those communities, including SANS, that seem to be leading with the most competency.
(01:01:27):
Not to the point where you're saying, Hey, it's blind trust. We need you to join the conversation and realize that it's our responsibility to protect your family, our organizations, and our nation.
Fabulous.
Sir, last words for our audience. Anything you wanna leave people with as we let them go for the day?
(01:01:49):
I really wanna know what your book title is again, and where I could purchase that. I mean, last words are, I go back to that. It's still sticking in my mind. It's always fascinating when someone on the technical side ends up writing a book. So what's your book?
Always glad to plug my book, Elisa: A Tale of Quantum Kisses. It's an exploration of our life in the very near future in terms
(01:02:12):
of AI and how we might coexist with a superior intelligence. It's a lot of fun. You can get it on Amazon, you can get it on Kindle. The audiobook came out this week, so you can get that everywhere you can get audiobooks. I've discovered a lot of people don't read anymore. Everybody's listening to audiobooks while they're doing the dishwashing.
Or podcasts.
Yeah.
(01:02:32):
Or podcasts.
Yeah.
Please. If you're still hanging with us on this podcast, well done. Send me an email saying, I listened to the entire thing.
I'll get two emails.
There you go.
Well, you know, with this one you'll probably get a good 10,000 people listening to this, I'm hoping. I mean, you never know. I'm trying. I think it's up to us. You doing this, me talking about this. Now you're gonna talk about this.
(01:02:54):
If we have 10 people come off this and talk about this, and they become vulnerable and say, oh, I'm gonna go into my team and say, Hey guys, I don't know what's going on, but I'm gonna start learning this. We all need to be in this together. We're in that foxhole, all scared. If we work together, that's the way we get through this, and we're gonna be able to protect our families. And you know, I say that this is one technology that is permeating down to
(01:03:17):
that level, and that's why it's gonna be a Thanksgiving discussion. You need to internalize what we're talking about and start leading your own teams and expressing it: Hey, you know, I don't have all the solutions. It's even more than that. It's like, I am now in kindergarten again and I'm learning a brand new language, and I need to start with a very simple see Jane run, see Jane jump.
(01:03:38):
I have to tell you, if that crazy old uncle who attends all of your Thanksgiving dinners is now showing you how he can edit photos on Google instead of talking politics, we've done our job. Nano Banana, my friend.
Yes. Nano Banana.
Yep.
My guest today has been Rob T. Lee. He's the Chief AI Officer and Chief of Research at the SANS Institute.
(01:03:59):
Thank you very much, sir. Glad to have had you for this chat.
Thank you.
Pleasure to be here, and happy to come back and talk about how things are going.
Great, and if you've stayed with us this long, bless you. If you wanna find out more about AI, I'm gonna tell Rob to send me some links and I'll put those with the show notes. I eventually get the show notes up on technewsday.com, our
(01:04:21):
regular site, and you can find them there. I'll put some links up there, and I'll put a link to the book. And if you're really interested in sit-down discussions, every Friday we have Project Synapse, and most of the time we broadcast those as a podcast on Hashtag Trending, our sister podcast. And you can hear three of us stumbling through what's happening in a week in AI.
(01:04:42):
That's exactly what needs to happen.
A lot of stumbling.
I love it.
Thank you very much.
I'm your host, Jim Love.
Have a great weekend.