September 13, 2025 • 59 mins

Inside Zero Trust: John Kindervag and the Evolution of Cybersecurity

In this episode of Cybersecurity Today: Weekend Edition, host Jim Love speaks with John Kindervag, the pioneer behind the Zero Trust model of cybersecurity. With over 25 years of industry experience, John delves into how the concept originated from his early work with firewalls, advocating for a system where no packet is trusted by default. He discusses the fundamental principles of Zero Trust, including defining protect surfaces, mapping transaction flows, and implementing microsegmentation. The conversation also touches on overcoming cultural and organizational challenges in cybersecurity, the inadequacies of traditional risk models, and adapting Zero Trust methodologies in the evolving landscape, including AI. Through thoughtful discourse and practical insights, John underscores the importance of strategic and tactical implementations in building resilient and secure systems.

00:00 Introduction to Cybersecurity Today
00:25 Meet John Kindervag: The Godfather of Zero Trust
01:50 The Birth of Zero Trust
04:08 Challenges and Evolution of Zero Trust
06:03 From Forrester to Practical Implementations
11:40 The Concept of Protect Surfaces
17:30 Risk vs. Danger in Cybersecurity
30:54 Farmers and Technology
31:48 The Importance of IT in Business
32:26 Introduction to Zero Trust
32:41 Five Steps to Zero Trust
33:14 Mapping Transaction Flows
34:25 Custom Architecture for Zero Trust
34:55 Defining Policies with the Kipling Method
36:04 Monitoring and Maintaining Zero Trust
36:28 The Concept of Anti-Fragile Systems
38:47 Challenges and Success Stories in Zero Trust
42:02 Microsegmentation and Protect Surfaces
45:39 AI and Zero Trust
49:22 Advice for Implementing Zero Trust
50:37 Military Insights and Decision Making
57:19 The Future of Zero Trust
59:07 Conclusion and Final Thoughts


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:02):
And if there is a data breach in your organization, you allowed it to happen. All bad things happen inside of an allow rule. You have to allow it. You're not an innocent victim of cybercrime. You have bad policies in place and you allowed it to happen, and you didn't see it happening, right?

(00:25):
Welcome to Cybersecurity Today, the weekend edition. John Kindervag is considered one of the world's foremost cybersecurity experts. With over 25 years of experience as a practitioner and industry analyst, he's best known for creating the revolutionary Zero Trust model of cybersecurity.

(00:46):
He developed the model while he was a vice president and principal analyst on the security and risk team at Forrester Research. Today, John is the chief evangelist of a firm called Illumio. I truly value a person who says what they think. My only prerequisite is that he or she actually thinks before they say what they think.

(01:08):
As the first clip I played indicates, John is opinionated and forceful, but also incredibly thoughtful and hellishly interesting. Join me for a fascinating discussion with the godfather of Zero Trust. Great to meet you. Nice to meet you.

(01:29):
The Godfather of Zero Trust. Do you wince at that, or is that something you proudly accept? Yeah, I mean, that's a nickname that other people have given me, so, you know, yeah, sure. That's cool. I mean, there's a lot worse things that people could say about me, so, yeah, it's an honorific that I accept, graciously. But it does mean that you've been involved with this from the start.

(01:50):
What was the original idea? I've seen it from a distance, but what was the concept that hit you when you first started to think about Zero Trust? So, you know, at the turn of the century I was installing firewalls, and firewalls have different interfaces, and they're labeled by a trust level from zero to 100.

(02:12):
And so your internal network is your trusted network. It has the high trust level of a hundred, and your external network has a low trust level of zero. And then every other interface had a different trust level that was between one and 99, and they couldn't be the same. And then that trust level that you assign to an interface

(02:34):
determined policy by default. You didn't need to put a rule in for traffic going outbound. And I said, this is silly. We need to put rules in for outbound traffic. And my customers, the vendors, and the company I was working for said, no, that's not how it works. It's right here in the manual. You don't need to have rules, so don't do that.

(02:56):
Quit putting outbound rules on. And I said, but if somebody gets inside, they can automatically exfiltrate all this data and no one will ever know. And they said, well, but that's not how it's set up. And, you know, that's not how the trust model works. I said, there should be no trust model. There should be no trust in packets. Every interface and every packet should have the same trust level. And that trust level should be zero.
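The default-allow behavior described here can be sketched in a few lines; this is an illustrative model with made-up interface names and rule sets, not any real firewall's configuration language:

```python
# Hypothetical sketch of the interface trust model described above:
# traffic from a higher-trust interface to a lower-trust one is
# allowed by default, so outbound exfiltration never hits a rule.
TRUST = {"inside": 100, "dmz": 50, "outside": 0}

def legacy_allows(src: str, dst: str) -> bool:
    """Legacy default: high-to-low trust traffic needs no rule at all."""
    return TRUST[src] > TRUST[dst]

def zero_trust_allows(src: str, dst: str, policy: set) -> bool:
    """Zero Trust default: every interface is trust level zero, so
    nothing passes without an explicit allow rule."""
    return (src, dst) in policy

print(legacy_allows("inside", "outside"))             # outbound sails through
print(zero_trust_allows("inside", "outside", set()))  # denied: no rule written
```

Under the legacy model the outbound path never consults a rule; under the zero-trust version the same traffic is dropped until someone writes an explicit allow.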

(03:17):
And that's where Zero Trust comes from. And we even talked in terms of, like, a demilitarized zone, as if we could somehow declare that some areas were free or non-combatant zones. Yeah, well, those were just different interfaces that had a different trust level. So your first DMZ you would put at, say, 50, and then, so traffic going

(03:39):
from the internal network to the DMZ didn't need to have a rule, because that had a higher trust level going to a lower trust level. So you just went; you didn't need to put a policy there. So you always had bad policy. It only cared about inbound traffic, and not what was going out. And now, today, I think there's nobody

(03:59):
out there who isn't tremendously concerned about outbound traffic. They might not be monitoring it as well as they should, or dealing with it as well as they should. So you started to formulate the concept of Zero Trust, and where did you go from there? Well, I went to Forrester Research, and so they're the ones who gave me the

(04:21):
opportunity and runway to do the research. So I could never have done this if I hadn't gone to Forrester Research. I was there for eight and a half years, and it was a pure research organization. It's what a university should be, right? So our motto, what we were told the first day of analyst training, was they

(04:42):
wrote it up on the board: here's your mission statement, think big thoughts. And so nothing was out of bounds. We had these terms: widen your aperture, you know, put a different lens on it, look at it from a different perspective. And so you were always trying to kind of rotate an idea around in 3D and go in and out of it and all that kind of stuff.

(05:05):
And if I hadn't had that opportunity... there was no vendor who would've ever let me do that. And there would've been no university that would ever have let me do that if I was a college professor, because we have peer review, which always inhibits innovation, right? You should do a whole podcast on the history of peer review, 'cause it's a fascinating subject.

(05:28):
I'm not sure I could do that. I was one of those profs who came in. I was at one of the major universities, and I had to sit with a bunch of people and bite my tongue while people told me that I didn't teach in an academic enough way.
I have, by the way, no desire to dump on academia. Learning things, testing things, having that

(05:50):
discipline is a wonderful thing.
But if what you're teaching studentsisn't applicable in the real world,
or if you're criticizing it, 'causeit works in the real world, but
not in theory, it's a problem.
But I don't wanna go my history.
I wanna go back to yours.
What was it like, you were a techguy who ended up at Forrester.
What was that like?
my impression of these guys is theywere, at the time that they were

(06:12):
there, and I haven't followed them in the past few years, but they were the smartest people around. They were very smart people. I got lucky. The founder, the CEO, still the CEO, George Forrester Colony, which is where you get the name from, his middle name, said, we need to put the SBA in security. To him, it was getting too academic. We need to find a real practitioner.

(06:34):
We need a practitioner. And so I got lucky. I knew somebody at Forrester, and they said, we know somebody who, you know, designs networks and does penetration testing and all this stuff. And so I went in and interviewed. They said, do you wanna be an analyst? And I said, sure. What's an analyst?

(06:54):
I didn't even know what an analyst was, really. And I got the job, and my first day was like, what do I do? I didn't even know what to do. But it was a great education, and coming from my background, you know, as a security architect, engineer, network engineer, all that kind of stuff, was a great background to be an analyst, because I knew

(07:15):
things that weren't just hypothetical. They were real world. You know, I had built networks, I had installed firewalls and routers and switches and, you know, all the things that I was covering. So there was a deep amount of practical knowledge that I was able to bring to that. Yeah. It's a funny place to be.

(07:35):
And I started my consulting career at Ernst & Young, right at the top. And I'd never... I'd hired consultants. I'd never been one. I remember still sitting in one room looking at a whiteboard and seeing this wonderful diagram and going, there's only one small problem with this. And the partner looked at me and said, what? I said, that middle part hasn't been invented yet.
(07:56):
Right. So, to their wonderment, and smart people, they accepted. There was a tension there when you're dealing with the theoretical versus the practical. We discussed it, and they were glad to have that expertise there, at least most of the time, I think. Right. So, how long did it take you to come up with the first, what would you call

(08:16):
your first version of Zero Trust? There were two years of primary research once I got to Forrester. I'd been thinking about the problem since I encountered this trust model installing firewalls. So I'd been thinking about the problem for nearly a decade, and, you

(08:36):
know... luckily, the best thing that ever happened to me, actually, was I got fired from a job because I refused to not put outbound rules in. I ignored the trust model and I started doing better policy, which made a bunch of people mad, right? To think that you're doing it better makes people mad. But that's a very common thing in our business.

(08:58):
And so then, from 2008 to 2010, I did primary research. I did some test speeches and things on it, and then I wrote the first paper in September of 2010, No More Chewy Centers. So you've put this all together. When did you know you had a complete idea?

(09:19):
We all have ideas. Something that's actually gonna form part of the mainstream of tech ideas moving forward, that's a big deal. When did you know it was gonna work? When I knew it was ready to write. Oh yeah. So, yeah, I'd done enough primary research. I'd gone around the world, I'd gone to people, I'd asked 'em to put holes in it. I built prototype environments.

(09:40):
I knew it would work. I would talk to people who were really smart. They said, we don't see why this won't work. You know, there were some people who said, well, we don't think it's gonna catch on, because no one will wanna do it. This trust model thing is just so embedded in our culture. But I got called in to meet with a lot of important people, some from

(10:02):
vendors, some from the government, some from foreign governments. And they were like, let's talk about this. And then some people said, let's start building it. So I was building Zero Trust environments before the paper was ever written. Wow. And from my perspective, at least, it's one of those things I've heard people talk about a lot.

(10:24):
I've seen very, very few good implementations. That might be the limits of my experience, but I've heard a lot of people talk about it, and a lot of people started it, and they almost, I don't know how to describe this, but you almost get to that sort of thing: well, we're doing Zero Trust, but in our own way, you know what I mean? And that usually means we ignored the hard work. Is it hard work?

(10:45):
No, it's not hard work. It's simple if you follow the right methodology, but a lot of people don't. The people who do it well don't talk about it. As one guy said to me, who had put one in a very important place, he said, John, Zero Trust is like Fight Club. The first rule is you don't talk about it. Right.

(11:05):
So, of all the best Zero Trust implementations out there, there's a few of 'em people are talking about, but most of them, they're not talking about. They don't feel the need to brag about it. When people do it badly, they feel the need to complain about it, so that they can have an excuse for not doing it right.

(11:26):
So, but most people, the big failures are: they think it's a product. I bought this product, and now... well, no, it's not a product. And then most people try to go too big. They try to do it all at once for everything, and that never works. And they don't understand the context of a protect surface, asking, what am I going to protect? So the big thing, if you want to determine who knows Zero Trust versus who's

(11:51):
just pretending they know: ask them to tell you about a protect surface. So the protect surface is the inversion of the attack surface. The attack surface is unmanageable, right? It's always growing. It's like the universe, it's constantly expanding. You can't manage it. You can't control it. It's outside of your control. So: what do I need to protect, right?

(12:12):
What data do I need to protect? What assets do I need to protect? What services do I need to protect? These are known as DAAS elements. So we take a single DAAS element, put it into a single protect surface, and we build out Zero Trust one protect surface at a time. So you end up with, like, at a bank that I've done, that's very successful, they won't talk about it, but they have different protect surfaces.

(12:32):
Their ATM system is a protect surface. Their SWIFT system is a protect surface. Their mainframe that does all the financial transactions is a protect surface. Their DNS system is a protect surface. They have like 28 different protect surfaces that they can then monitor, maintain, and mature. But they built it out one protect surface at a time. And so that's

(12:55):
the secret to Zero Trust: understanding that it's not about what are the threats, it's not about what is the product, it's about, what am I trying to protect? And until you know the answer to that, you'll never move forward.
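The protect-surface idea can be sketched as a tiny data model. The class and field names here are illustrative assumptions, not any vendor's API; DAAS stands for data, applications, assets, and services:

```python
from dataclasses import dataclass

# Hypothetical model: one DAAS element goes into one protect surface,
# and surfaces are built out and monitored one at a time.
DAAS_KINDS = {"data", "application", "asset", "service"}

@dataclass(frozen=True)
class ProtectSurface:
    name: str
    daas_kind: str   # which kind of DAAS element this surface protects
    element: str     # the specific thing, e.g. "ATM transactions"

def make_protect_surface(name: str, kind: str, element: str) -> ProtectSurface:
    if kind not in DAAS_KINDS:
        raise ValueError(f"{kind!r} is not a DAAS element kind")
    return ProtectSurface(name, kind, element)

# The bank example from the conversation: each system is its own surface.
surfaces = [
    make_protect_surface("ATM system", "service", "ATM transactions"),
    make_protect_surface("SWIFT", "service", "interbank messaging"),
    make_protect_surface("Mainframe", "asset", "financial transaction host"),
    make_protect_surface("DNS", "service", "name resolution"),
]
```

The point of the inversion: this list is finite and enumerable, unlike the ever-expanding attack surface.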
And people, I've heard it phrased a number of ways.
You know, people say, whatare your crown jewels?

(13:16):
What are the things thatyou're most important to you?
And I guess there's always adifficulty in having that discussion.
I've always had sympathy for the guysthat come up with their PowerPoint
and their discussion and try andtalk with the business about how,
what they need to protect and getthose types of conversations going.
Did you ever stumble across that,that was difficult to get people to,

(13:38):
give you that type of information? Well, because they didn't know. But I know how to get them to tell me that. So, you know, I've done dozens and dozens of workshops that are cross-functional, that include leadership, because you have to ask leaders what needs to be protected, right? So the leaders always know. But we in cybersecurity tend to talk to the practitioner, who doesn't

(14:02):
know what to protect, which is why they want it to be a product, right? Because then I can just buy something and hope that something gets protected. But they don't know exactly what they have. They don't know how important it is to the business. They don't understand the objectives of the business, and they kind of don't care

(14:23):
about the objectives of the business. So they just don't want things to break. Right. So we have the CIA triangle. You know about the CIA triangle, right? Which, of course. But the CIA triangle is kind of this bizarre thing that no one knows where it came from, but we talk about it like it's sacrosanct, like it's sacred. But I am gonna make you explain it, though, because even though

(14:44):
I know it, we've got a thousand.
So you have this supposedly equilateral triangle where you're trying to balance three things: confidentiality, integrity, and availability. Those three things are supposed to be in balance. In reality, all anybody cares about is availability. So it's not an equilateral triangle. It's, I can't do an isosceles

(15:06):
triangle with my fingers, but the hypotenuse is really, really long. And that's called availability, and that's all we care about. And then there's a really small part, that's the C, that's supposed to be confidentiality, but it's really compromised. We've got a lot of highly available yet compromised networks. So I can't tell you... I mean, I was in the office of a CEO.

(15:28):
He had just had the Department of Justice come in and push a file across and say, how come all your really important intellectual property for this government contract was found on the server of something we took down in an adversarial nation state?
Oh, no, that can't be.

(15:48):
Lemme call the CIO. No, the network's up and running. Everything's fine. Let's call the CISO. The network's up and running. Everything's fine, right? Call the head of networking. The network's up and running, everything's fine. No! Just the fact that the network is up and running doesn't mean that everything's fine. In fact, as one friend of mine says, unexplained uptime could

(16:10):
be evidence of a data breach, right? Because the attackers never take the network down. In fact, I know of times when the attackers have reconfigured the networks and made the networks better, to optimize their ability to steal stuff, and no one knew. So when somebody says, well, we're not in a breach condition, nothing bad is happening, everything's fine: how do you know? Do you have any visibility into that?

(16:32):
Because we don't, right? Which is why I came to Illumio: because I wanted to have that visibility. I wanted to have that map. I wanted to understand where the important resources were, so that I could understand how to put them in protect surfaces, and I could go through the five-step methodology, and I could build these Zero Trust environments very easily and quickly and inexpensively.
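The five-step methodology referenced here is spelled out in this episode's chapter markers; as a quick sketch, with step names taken from those markers rather than from any formal specification:

```python
# The five steps of Zero Trust, as listed in the episode chapters.
FIVE_STEPS = (
    "Define the protect surface",
    "Map the transaction flows",
    "Build a custom Zero Trust architecture",
    "Define policy (the Kipling Method: who, what, when, where, why, how)",
    "Monitor and maintain",
)

def checklist(completed: int) -> list:
    """Render progress through the five steps for one protect surface."""
    return [f"[{'x' if i < completed else ' '}] {s}"
            for i, s in enumerate(FIVE_STEPS)]

for line in checklist(completed=2):
    print(line)
```

Note that the steps are applied per protect surface, not to the whole environment at once, which is how the "don't go too big" advice earlier in the conversation cashes out.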

(16:55):
So, getting the concept of the protect surface. And I confess, I learned most of what I know about cybersecurity from a lawyer, who talked to me about how much risk the business wanted to take, and gave me one of the best questions I ever had to ask, which was to look at somebody and say, okay, how much risk do you want to take, instead of trying to push. And that's where I got to a different place, 'cause I wasn't trying to

(17:18):
push anything on them, saying, you could have no expense at all, but you have a hundred percent risk; where's your trade-off? And I think that was one of the ways to open the discussion. How do you open that discussion? Well, I don't talk about risk at all. I think risk is bogus. So I have a whole thread of research that I'm doing now called Risk Is Danger: that we

(17:39):
need to move from risk management to danger management, because risk assumes a probabilistic statement that you can't make in cybersecurity, right? So we define risk as probability times impact, and no one knows what the impact is, right? I mean, you go back to the Sony PlayStation Network going down. If I went back in my DeLorean and got my flux capacitor going and went up to

(18:03):
88 miles an hour, and went back to Howard Stringer, the CEO of Sony, who got fired because the PlayStation Network was down for six or eight weeks or whatever it is, and said: if your people don't upgrade these Apache servers, you know, in three weeks you're gonna be down. You're gonna be down for weeks and weeks and weeks, and it's gonna cost you your job, and it's gonna be, you know, many, many millions of dollars.

(18:24):
And they'd say, our risk management people have said that's not possible. Well, okay, but it happened, right? So the impossible happens all the time. So I say we need to focus on danger. And this comes from, and you can find it on YouTube, a speech I did called Risk Is Danger, because my nephew, his name is Steven, Danger is his middle name, got neuroblastoma cancer

(18:48):
when he was four years old. Okay? Now, the probabilities of getting neuroblastoma cancer are less than 1%. And then they told us that he had a 2% chance of survival. And he's alive, because the probabilities didn't work out. And so he shaved my head for cancer a year ago; that's why I'm bald. I gave this speech, you know, I gave

(19:11):
it in front of 1500 people at HOU.SEC.CON in Houston last September. And I had 1500 people crying at the end of the speech when he came out, 'cause I hadn't told them whether he lived or died. I told the story of how I got to thinking about risk versus danger, and why we need to look at things from a danger management perspective.

(19:32):
Right? So, that's dangerous: when I tell you that's dangerous, that's not something you can accept. The problem with risk is you can accept it, because you don't understand it. And there's so many variables. I tried first going the old-school way and building an actuarial table for cybersecurity. I went to really top-notch actuaries.

(19:54):
I got the same answer: too many variables. You can never build an actuarial table. Taleb, in his book Antifragile, says: imagine a world where you have a die with unlimited sides, so you can roll it and get any outcome, but you can never predict that outcome. It's unpredictable; you can't define a probability about it,

(20:14):
and that's what cybersecurity is. So, you know, there's a few people who really hate the idea of risk as danger. And I've had a bunch of screeds written on LinkedIn about it, and I don't pay any attention to that. But I've had a few people who said, I went to my CEO and told him about that concept, and we're redoing our program, so we're focusing on danger

(20:36):
management instead of risk management. And that's typically how it goes. If you look at it from a macro level, it'll look like that's a stupid idea and nobody's doing it. But from a micro level, you'll find that with this important person in this important company it really resonated, and they're doing it. But they're not gonna tell anybody about it, because they don't wanna sound like, well, you know, we're
(20:56):
way out on the edge of weird ideas.
You know, great concept.
but the question I'd have is howdo you prioritize if all you've
got one element that's danger?
Well, it works withinthe zero trust framework.
So you prioritize it based uponthe sensitivity, or criticality
of the protect surface.

(21:17):
Okay, so you're constantly working on that, but what if you have multiple decisions to make about that protect surface? How do you refine those? Well, in terms of, like, I've got multiple things that I think are dangerous. Well, you've got so much budget you can spend and so many things you can do, even though we know that this is an ultimate protect surface. But how do I make those business decisions?

(21:39):
There's, on the back end, a whole maturity model assigned to protect surfaces. So you can see how mature each one is, and then you can prioritize where you want to put that effort in and where it goes. When you do it that way, you can go, oh, my PCI protect surface: you know, I'm an acquiring bank. I have PII, I process credit cards.

(21:59):
My PCI environment has a low level of maturity. Where am I missing things? And then, okay, I'm gonna put some effort here, because there's dangerous things going on here.
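The maturity-driven prioritization described here can be sketched simply; the scoring scale and field names are illustrative assumptions, not part of any published maturity model:

```python
# Hypothetical prioritization: critical protect surfaces with low
# maturity get attention first (e.g. the PCI example above).
def priority(criticality: int, maturity: int) -> int:
    """Higher score = more urgent. Both scales are assumed to be 1-5."""
    return criticality * (5 - maturity)

surfaces = [
    ("PCI cardholder environment", 5, 1),
    ("SWIFT system", 5, 4),
    ("Internal wiki", 1, 2),
]

ranked = sorted(surfaces, key=lambda s: priority(s[1], s[2]), reverse=True)
for name, crit, mat in ranked:
    print(name, priority(crit, mat))
```

The exact weighting is a judgment call; the design point is simply that budget questions become answerable once each surface carries a criticality and a maturity score.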
A lot of it is about changing incentives. Incentives is the other thing that I talk about a lot. You know, Warren Buffett's partner, Charlie Munger, used to say, show me the incentives and I'll show you the outcome.

(22:20):
The reason we have such bad outcomes in cybersecurity is because we have bad incentives. I wrote about that for the Financial Times earlier this year, and it's still a big concept. I have a speech called Channeling Charlie Munger, where I talk about that, because we don't have good incentives. Most people are managing their own downside, personal risk.

(22:42):
I don't want to do anything, because if I do something and it goes bad, I get blamed. But if something goes bad and I didn't do anything, I say, I didn't do that. And so instead of managing the upside potential for the organization, most people are managing their own personal downside risk, because from a managerial perspective, they had bad incentives.

(23:03):
They're gonna get blamed if something goes wrong, instead of somebody going, man, you really were onto something, right? It didn't turn out right, but you were on the right thread, and we need to do it better the next time. But you were right on. And no one feels confident to try to make things better. No one wants to make things better when they're in the trenches, which

(23:24):
is why I kept getting in trouble in some of my previous jobs: because I wanted to make things better. And you're gonna get me in trouble, right? I watched an entire financial database exfiltrate live to an adversarial foreign country, and I said, let's shut it down.

(23:45):
No, I'll get in trouble if we shut it down. All we gotta do is pull this one wire. No, no, we'll get in trouble, right? Yeah, because we can't go down. It's okay if all of our data gets exfiltrated, because that won't be a public thing. But if we go down, that's a public thing. So you go back to, say, the Target data breach, which is where I think cybersecurity began. That happened in 2013, and I say everything before that is BT, Before Target.

(24:10):
So we're now in the year, what, 12 After Target? Because that's the first time a CEO got fired because of something IT did or didn't do, which was allow a data breach. So the only thing IT can do to get the CEO fired is to allow a data breach. And if there is a data breach in your organization, you allowed it to happen. All bad things happen inside of an allow rule.

(24:32):
You have to allow it. You're not an innocent victim of cybercrime. You have bad policies in place, and you allowed it to happen, and you didn't see it happening, right? And so there were people at Target going, we need to pull the plug, right? I mean, I would talk to some people afterwards: we need to just unplug. We can't do that.

(24:53):
It's Black Friday. You know, we're making all our money. Well, why did the attackers attack on Black Friday? They attacked on Black Friday because, one, that's when you've got the most amount of credit card data, and they were trying to steal credit card data. Two, they were in a change freeze. Three, a whole lot of people were on vacation, because they were told: go on vacation, you're not allowed to make any changes.

(25:13):
So you got nothing to do here. And as a result, not only was the CEO fired, there were lots of other consequences Target of course doesn't talk about. But try to find a Target store in Canada. You can't. They're all gone. Why? Because they ran outta money to fix the operational issues.

(25:33):
So they said, well, we don't have any money to fix Canadian Target stores, so we're just gonna withdraw from Canada. That was a direct result of the data breach. Wow. So you've raised some interesting points, one of them being the cultural issues that are associated with cybersecurity. You know, I mean, I always think that the reason why we have CISOs

(25:55):
is so the CEO's got somebody to fire before they fire him, or her. I'm sorry, I get a little cynical as I get older. No, no, you're right, right on. I always tell every CISO they should have a t-shirt that has the word CISO right here, and then they have two tire tracks above and below it, because my job is to get thrown under the bus, and then they back it up, forward and backward, over and over again.

(26:19):
But this whole concept of building a culture... you know, and this is why I kept pursuing the prioritization question. 'Cause one of the attitudes I get... you've raised the first attitude, which I think is, don't touch it, 'cause you're gonna get blamed. The second one is, I can't do everything, so I'm not gonna do anything. Right. And I think you addressed that with your maturity

(26:40):
model, but that's been one of the big things in cybersecurity. Well, we can't do all this stuff. Well, what are you gonna do? As an IT person, I didn't start out as a cybersecurity person. I was an IT person. I ended up being in charge of cybersecurity because I became a CIO before we had CISOs. And that was my big thing: hey, you're suddenly in charge of all this stuff, and you have to think a new way. I thought like a developer or an operator. I didn't think

(27:01):
like a cybersecurity person. And understanding that culture was, I think, one of the... there's a leap you have to make. There's a learning you have to make. You know, your technical background gave you the acumen to understand. We have a lot of people who don't have that acumen in those high-level roles now, because they came from consulting, they came from governance or risk

(27:21):
or compliance, or they were auditors. They don't understand how a packet moves from point A to point B. It's amazing how many people in these high-level jobs that I talk to don't understand how a packet moves, or what a packet is, or what the OSI model is, or any of that stuff that is fundamental to how things work.

(27:44):
And so I was talking to somebody recently: I don't care about the network, 'cause I'm in charge of the cloud. And I said, there's networks in the cloud. And they're like, I hadn't thought about it that way. And then I had one person who tried to convince me that there weren't networks in the cloud. The cloud didn't have any networks.

(28:05):
You didn't have to worry about networks, because there were no networks in the cloud. So you got a lot of people who didn't come up through the trenches like you and I, right? Yeah. And so, as a result, they don't know how it works. Right. So, like, I don't know if it's the current CEO of UPS or the previous CEO of UPS, but

(28:26):
he started off in college loading trucks at night, and then he did every single job at UPS before he became the CEO. You don't think he understood exactly how UPS worked? Oh, right. But you'll run into a lot of places where somebody came in and they

(28:47):
just had executive experience. In fact, I had one CEO tell me, well, you know what I do? I take the people who have great management potential, and I wanna put 'em someplace where they can learn and get experience, but it doesn't matter. So I put 'em in IT and make 'em a CIO or a CISO, because that doesn't matter.

(29:08):
It works, I think, before they get into logistics or sales or something else. So I just looked at that guy, and I was like, okay, your entire business depends... In fact, I think I said this to him, and I don't think he'd ever thought about this: your entire business depends on whether that IT system works. If it goes down, you don't have a business.

(29:30):
Right. There's no business in the world today that I can think of that, if the computational systems don't work, can still operate. You know, I come from a farm in Nebraska. You didn't need computers. Now, on our farm, which, you know, my cousin runs it now, our

(29:51):
family homesteaded it in 1871, right?
So it's 150-some years old.
Now everything's dependent upon some kind of computer system.
In fact, if your tractor breaks down, you can't fix it yourself.
You have to call the John Deere dealer who's 40 miles away, and they have to come

(30:12):
out and, you know, they have somebody who plugs it in and goes like this.
So we keep a couple of the old tractors and combines, just the analog stuff.
Yep.
And that is a trend that might be coming back.
I know people who are rebuilding old cars from the sixties and
seventies 'cause they don't want a car that's computer controlled.

(30:34):
Well, and the right to repair movement is strongest among farmers.
And I just love this, 'cause I spent a lot of time living in farm country,
and I wouldn't say I was a farmer.
I was probably that guy.
I don't know if you remember the old show, Green Acres, but I think
the music to that played as I drove by, 'cause the farmers
would be sort of nice to me.
But anybody who thinks that farmers don't understand IT...

(30:58):
Most of 'em are pretty sharp.
You run a barn where you've got temperatures that must be
within a certain tolerance, or you lose all your hogs, right?
You've got crops that have to go in.
Farmers are pretty damn smart.
Many of them are adopting technology in ways that, if we could adopt
technology and change processes as quickly in business as many farmers
have, we'd be in a lot better shape.

(31:20):
Oh, they were the first people to really adopt GPS for a business use.
So you would use GPS to drive your tractor and drive the rows, right?
Yeah.
You know, I was never allowed to plant because I couldn't drive a row straight
enough for my uncle to be happy.
So I got all the menial labor jobs, and I didn't get to sit in the tractor.

(31:43):
By the time I was in high school, we had air conditioned tractors,
which was like, wow, look at air conditioning in the tractor.
But you think about it: there's no business on earth that isn't a hundred
percent dependent upon IT. We just had a big outage at one of the airlines, right?
They had the three Ps of the airline world: pilots, passengers, planes.

(32:05):
That's really, fundamentally, what you need.
But the computer system wasn't working, so none of them could take off, and
those passengers were stranded.
Yeah, our dependence is incredible, and that's, I think, one of
the reasons why cybersecurity becomes so important, and it becomes
so complex and so difficult.

(32:26):
I wanna go back to this, because I wanna make sure we get
the concept across. You've talked about the protect surface as being one
of the key elements of zero trust.
We talked about culture.
What other components are essential to making it work?
Well, there's a five step methodology that I try to reinforce to people all the time.

(32:47):
And if you do this five step methodology, there's almost a hundred percent
success rate for people who do this.
For people who try to do it some other way...
I love it when people say,
well, I don't wanna do it that way.
I wanna do it my own way.
Fine, do it your own way.
But success, as they say, is not guaranteed.
So the five steps are: one, define your protect surface.
What am I trying to protect?
Right?

(33:07):
Two, map your transaction flows.
How does the system work together as a system?
You know, I have a friend who is a former Navy SEAL.
He and I have a presentation that we do together, and it's
about why cybersecurity needs better cartographers, right?

(33:28):
The map is the important thing, you know.
He always says, my favorite weapon in war was a map.
Because there's two things you can be in combat:
you can be wrong, and you can be lost.
And if you have a map, you'll be neither, right?
Because if you're wrong,
you look at the map and you figure out how to make it right.

(33:49):
If you're lost, you look at the map and figure out how to make it right, and the
map tells you everything: where the enemy is, where you need to come in, where you
need to go out, all the things you do.
So, mapping the transaction flows: that's why I came to Illumio.
They have a great transaction flow mapping solution.
It tells you where the protect surfaces are.
It tells you what's connected to them.
It tells you everything that's working right and what's not working.

(34:11):
And so now I'm like, you know, Eisenhower looking at the map of
D-Day and figuring out where the German positions are and how I
need to go in, because if you have a bad map, you lose the war.
Wars are lost on bad maps.
And so the third step is architecture.
Everybody wants to know, what product do I buy?
No, you decide.

(34:33):
You decide what products and technologies to implement after you understand what
you're protecting and how it works.
So in every zero trust environment, the architecture is custom built,
tailor made, bespoke, whatever word you want to use for it.
So I have to know what I'm protecting in order to know how to design the system.
I can't just have a reference architecture that's general and generic.

(34:55):
And then the fourth step is to define the policy, right?
How do I write policy to determine what is allowed?
And so we start with a deny-all policy.
In the old days, we played Whack-a-Mole, right?
You don't want to just try to stop the bad stuff.
You say, what stuff should I explicitly allow?
And so there's a policy methodology around that, which

(35:17):
is called the Kipling Method,
my personal homage to the writer Rudyard Kipling, who gave us the idea
of who, what, when, where, why, and how
in a poem in 1902: I keep six honest serving-men,
they taught me all I knew; their names are What and Why and When
and How and Where and Who. Right?
So who should have access via what application?

(35:38):
When should that access happen, right?
It shouldn't be 24 hours a day.
That's one of our big problems: we just turn on a rule and we never turn it off.
So turn off all your rules when people aren't working.
Where is that located?
That's the location of the protect surface.
Why are we doing that?
That's where we can put a lot of metadata into the system.
We do this because it's a PCI environment.

(35:59):
We do this because it's regulated by this thing or that thing.
We do this for all this stuff.
That's all stuff we can ingest in step five, which is monitor and maintain,
where we pull in all the telemetry into the system, we analyze it, and we
know how to make it better and better.
So we can create what's called an antifragile system.
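The Kipling Method policy John describes can be sketched as a default-deny rule evaluator. This is a minimal, hypothetical illustration; the field names, identities, and rule shape are assumptions for the sake of the example, not Illumio's or any vendor's actual policy schema:

```python
from dataclasses import dataclass
from datetime import time

# Hypothetical sketch of a Kipling Method rule: each field answers one of
# Kipling's six questions. Names here are illustrative assumptions.
@dataclass
class KiplingRule:
    who: str     # asserted identity of the user
    what: str    # the application access happens via
    when: tuple  # (start, end) window; the rule is off outside it
    where: str   # the protect surface the rule guards
    why: str     # metadata, e.g. "regulated financial data"
    how: str     # enforcement point, e.g. "layer 7 gateway"

def is_allowed(rules, who, what, where, at):
    """Default deny: access passes only if an explicit rule matches."""
    for r in rules:
        if (r.who, r.what, r.where) == (who, what, where) \
                and r.when[0] <= at <= r.when[1]:
            return True
    return False  # no matching rule means denied

rules = [
    KiplingRule(who="finance-analyst", what="erp-client",
                when=(time(8, 0), time(18, 0)),
                where="financial-database", why="regulated financial data",
                how="layer 7 gateway"),
]

print(is_allowed(rules, "finance-analyst", "erp-client",
                 "financial-database", time(9, 30)))   # True: explicit rule
print(is_allowed(rules, "finance-analyst", "erp-client",
                 "financial-database", time(23, 0)))   # False: outside window
print(is_allowed(rules, "it-manager", "erp-client",
                 "financial-database", time(9, 30)))   # False: no rule exists
```

The third call illustrates John's point about turning rules off when people aren't working: the same allowed identity is denied outside the stated window.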

(36:19):
So again, Taleb gave me the vocabulary, in his book Antifragile, for what I've
been trying to build: a system that gets better and better under stressors.
So when I have an attack, a stressor, you know, somebody's
doing something, I can mitigate it, adapt to it,
and become stronger and stronger.
And that's what

(36:40):
we're doing: we're building antifragile systems.
We're going beyond resilience, which is the big buzzword.
But as Taleb says, you know, resilience is static:
when there's a stressor, it doesn't fall apart, but it doesn't get better either.
But antifragile systems respond to a stressor and adapt
and get better and better.
Like your human body; this is the example he gives in his book, right?

(37:04):
If you go on vacation, I just came back from vacation, right,
on a cruise, and you can eat and drink all you want.
So what do you do?
You gain a few pounds.
To get rid of pounds, to lose weight, to get in better health,
you have to stress your body.
Reduction of calories, that's a stressor. Exercise,
that's a stressor.
But your body doesn't fall apart and break down.

(37:25):
It adapts and gets stronger.
And so that's what we can do in zero trust.
So if you follow those five steps, you will almost always be successful.
In fact, I haven't run into anybody who has done the five step methodology and
said it didn't work. I explained this to a three star general, and he said,
John, thank you for explaining this to me in a way that I could understand,

(37:47):
because I could never make heads or tails of what people were telling
me with all this other mumbo jumbo.
But making it simple, making it actionable, I can put a task force on
this right now, on the things that I need to protect. Because he understood
the concept of protect surface.
It's a military concept, the protect surface.
In the webinar I do with Clint Bruce, the famous retired Navy SEAL,

(38:09):
we talk about that as the high ground, the thing that
you're trying to hold and protect.
And in the battle of Little Round Top, at Gettysburg, Joshua Chamberlain,
they take the high ground, Little Round Top, and they hold it, and the
stressors are the attacks that they're getting from the Confederates, and
they end up doing a charge and defeating them, and they hold that whole ridge, and that

(38:33):
leads to the victory at Gettysburg.
Right.
I mean, by the time Pickett's Charge comes,
the Confederate army is so depleted that it gets defeated.
One of the things that becomes difficult with zero trust is proving
a negative, I guess. You know, your system's really working and

(38:55):
you're not having a successful attack; it's
a hard thing to sort of prove.
But, you know, I get
screen captures from people a lot, who send me: look at what
my zero trust environment just stopped.
This big attack was starting, and we just stopped it, because we have all this

(39:16):
evidence that they're trying to get in, but they weren't allowed to come in.
They didn't get in.
Right.
So, I remember there was a pen test done, and I gave a presentation
with the guy who did it, but I won't go into that. It was a number of
years ago, but it was like the first pen test of a zero trust environment.

(39:36):
And so the guy had defined, though I hadn't yet come up with the term protect surface,
the important system that he wanted to build a zero trust environment around.
So he had the pen tester come in, and the pen tester goes, hey, I can't get in.
Well, of course you can't get in.
Right.
It's a zero trust environment.
There's no rule that would allow you to get in.

(39:59):
There's just no easy way to get in.
And as a former pen tester, you almost always got in, because
there was a bazillion ways in, because everything was allowed.
The old mindset was: if we see the pen tester, we're gonna try to stop it.
We're looking for the pen test to come in.
That's what we're trying to find.
And then: oh, you didn't see the pen test?
Aha.
You're bad.
No, that's a failure of policy methodology, not a failure of the people.

(40:21):
Right.
So everybody's looking for a thing to blame.
Well, the policy methodology was the thing to blame.
So the pen test didn't work.
So he said, I need a domain credential to finish this pen test.
Sure.
Well, he couldn't do anything.
Why can't I do anything?
Well, I didn't assign a policy to your credential.
Your credential has no policies assigned to it.

(40:43):
Because typically, when we onboard somebody, we give
'em access to everything.
Right?
Snowden, Manning: they had access to everything on SIPRNet.
That's why they were able to do what they did.
And finally, the pen tester said, what are you trying to do?
Make me look bad?
And the CISO said, yes.
That's my job.
Yeah.
And so, just a couple of things in terms of taking this to

(41:06):
a level where I think it might have some resonance with people.
God, I just can't believe I used the word resonance.
I'm sorry.
I've been doing this too long.
That's a good word.
It's a good word.
Use it.
What I'm trying to get at is: what are the things
in the current security environment that you think zero trust has

(41:26):
real applicability to, and real defense against?
And a couple of things I'll bring up that just drive me crazy right now:
the whole idea of social engineering is driving hacking, and
there's the group Scattered Spider that's taken social
engineering to a totally new level and
really became an effective force in the hacking community.

(41:51):
Well, because typically the people that get social engineered have access to
things they shouldn't have access to.
And so that's what they're taking advantage of.
They're getting that lateral movement inside the internal network.
So one of the key technologies that we use, I work for Illumio, I came to
Illumio 'cause it has microsegmentation technology that segments networks
internally to build the protect surface.

(42:12):
That's segmentation.
Microsegmentation technology is what we use to define the micro-perimeter that
creates the protect surface, right?
So there shouldn't be a policy in place that allows, you know, Jim Love,
who's an IT manager, to have access to the financial database, right?
'Cause that's not your job.

(42:33):
And so, even if your credentials are compromised, and
the attacker tries to use those credentials to get access to, say,
the financial database, it's stopped, because that rule doesn't exist.
And then there's an alert that goes off and says, hey, Jim is
trying to access this.
And you go, hey Jim, why are you trying to access the financial database?
I'm not. Oh, it looks like your credentials have been compromised.

(42:55):
We stop it.
And then, even if they get into that database, there's no rule that allows
them to exfiltrate that stuff outside to a command and control server on the
public internet, which is an unknown IP.
So, you know, there's no rule that says, take all the data and allow
it to go outbound to something that we don't know what it is.

(43:17):
And so you've contained the blast radius,
even if somebody does get access to the protect surface.
Getting access to the command and control server outbound is
extraordinarily difficult in those environments, if they're done correctly.
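The contained blast radius John describes follows from default-deny segmentation: with no explicit rule, both lateral movement and outbound exfiltration fail, and each denied attempt becomes telemetry. A minimal sketch, with made-up workload names, ports, and rule set (not Illumio's actual API):

```python
# Default-deny microsegmentation sketch: traffic between workloads is dropped
# unless an explicit allow rule exists, and every denied attempt is recorded
# as telemetry for step five, monitor and maintain. All names are invented.
ALLOW = {
    ("web-frontend", "app-server", 8443),
    ("app-server", "financial-db", 5432),
}

def check_flow(src, dst, port, alerts):
    """Return ALLOW only for flows with an explicit rule; log every denial."""
    if (src, dst, port) in ALLOW:
        return "ALLOW"
    alerts.append(f"denied: {src} -> {dst}:{port}")
    return "DENY"

alerts = []
# Normal traffic matches an explicit rule.
print(check_flow("app-server", "financial-db", 5432, alerts))        # ALLOW
# Compromised credentials attempting lateral movement: no rule, so it fails.
print(check_flow("it-manager-laptop", "financial-db", 5432, alerts)) # DENY
# Exfiltration outbound to an unknown IP: also no rule.
print(check_flow("financial-db", "203.0.113.50", 443, alerts))       # DENY
print(alerts)  # two denied attempts, each one an alert to investigate
```

The alert list is the hook back into monitoring: a denied flow from "Jim's" machine is exactly the signal that prompts the "hey Jim, why are you trying to access the financial database?" conversation.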
And I notice you use microsegmentation as a way of describing it, where
many people might say segmentation.

(43:38):
What's the difference? The size of the segment?
I mean, microsegmentation is just a well known term, right?
So Forrester did a Wave on microsegmentation.
You should link that in your thing.
You can get that from the people who set this up.
But we were the leader in their Wave, yeah, the Forrester Wave.
And so it's a great technology, and

(44:00):
I've been a fan of it forever.
In fact, the second report that I ever wrote about Zero Trust was called Build
Security Into Your Network's DNA, about the Zero Trust network architecture.
New ways of segmenting networks must be created, because all modern
networks must be segmented by default.
And I was a big believer in segmentation, and it was hard in 2010.

(44:22):
November 2010. It's easy now, right?
And even the NSA, last year, in their guidance on the network and environment
pillar, it was really about segmentation.
And they go back again to the Target data breach.
Why did the Target data breach happen?
Because the network wasn't segmented.
You know, everybody says, well, the HVAC company had compromised credentials

(44:44):
and it came in through the HVAC system.
Well, you know, I'm a former QSA for PCI. I was one of the first
generation; I was in the second-ever PCI certification class.
There should never have been an HVAC computer on the
cardholder data environment.
That is a clear violation of PCI.

(45:04):
It was that architectural decision that caused the problem.
It wasn't a problem of the HVAC thing.
It was a problem of how the system was designed and the
policies that were allowed.
So really, if you think about it, we focus so much on products, but
really it's all about policies.
Products exist to enforce policies. What policy is it gonna enforce, based

(45:28):
upon what you are trying to protect?
So I need to know what I'm protecting, the protect surface, so that I can
define the policy, which will tell me what the product needs to be.
Yeah.
Just, we're about to implement AI throughout our entire enterprise,
probably what I would call the least secure system that's ever been
developed. How do we start to think about AI with a zero trust mentality?

(45:53):
Because God knows we're creating huge holes in our
security systems and our world.
How do we think about it differently, or how can we attack it using
the principles of zero trust?
Well, there's a couple things.
One is, it will be very important in step five, monitor and maintain.
So I wrote an article for the Financial Times called Why I Am Not Losing Sleep

(46:15):
Over AI, because we'll be able to use AI to analyze the data, to respond
to the attacks much more quickly.
And I think it gives us an advantage over the attacker, because the attacker, no
matter what you do in AI, they're still limited by the way that networks work.
You know, the TCP/IP protocol, right?
They're still limited by that.
Second thing, you need to think of the AI repository as a protect surface.

(46:39):
So my friend George Finney, the CISO of the UT System, has written a great
book called Rise of the Machines, and it's about zero trust in the AI age.
And I wrote the foreword to both his books on zero trust.
People should read that book,
'cause I think it's a really good book that explains it easily for people.
And then there's a whole lot of things we don't know.
In fact, I would say there's more things we don't know about

(47:02):
AI than we do know about AI.
And so, all the hype that we're seeing, we don't know where that's gonna go.
But we're using it internally to give more visibility.
We have what we call the AI security graph, so we can tell you
exactly what's going on inside your environment, what we're seeing,
and how to make things better.

(47:22):
We're partnered with NVIDIA, so you can buy an NVIDIA card
that's preloaded with Illumio.
So you can segment, do microsegmentation, and create all these policies right there
on the card, at layer two, as traffic comes in. It's primarily for OT systems, but

(47:43):
you could use it for anything, really.
And it'll be deployed in some of the big hyperscalers.
So, you know, with AI there's a lot of downside, but there's also a lot of
benefits, and we're so early in the journey that we don't know where it's gonna go.
There's so much hype. And what did I read?
The market cap is like 4.7 trillion, and the revenue across everything is

(48:06):
like 20 billion. Substantially less.
Yeah.
Right.
Will it change the world?
Yes.
Will there be a rise of robots who come out and kill us?
I don't know.
I don't know.
Maybe we should start building the time machine to go back and eliminate the people
who create Skynet. That's one of the things we could do. But chances are, as I've

(48:26):
always told people, the one thing you can depend on is that
we will have AI in our enterprises, so get over that.
Yeah.
You know, you can be like the guys who tried to stop cloud.
It's something you can do. Or encryption.
Do you remember the story of Phil Zimmermann, when he created PGP?

(48:47):
Mm-hmm.
Yeah.
The government tried to ban it, so he just put the algorithm on
a t-shirt and wore it around.
I have several Phil Zimmermann signed t-shirts in my collection.
He just said, you can't ban math.
Right.
Which is true in AI.
So he just put the algorithm on a t-shirt.
Now, I would suppose it's much harder to put the algorithm of OpenAI

(49:13):
or whatever it is, all the code, on a t-shirt, because it'd have to be really small. But still, yep.
Maybe somebody should do it. But it's still out there.
It's out there in the world.
The models are out there.
They're not going back.
Just a final thing from you: your advice to people.
We've covered part of that, but I'd like to close off
with a bit of that. Your advice to people who haven't gotten started, or feel like
they've hit a wall with zero trust.

(49:33):
What should they do?
First thing they need to know is there's nothing that they
need to do to get started.
Everybody thinks, okay, I have to fix this whole thing, which
is gonna take five years, and then I'll start my zero trust journey.
No, start wherever you are.
There's always something that you can do.
You know, I talk about three levels of protect surfaces.
We have learning protect surfaces, things that don't matter.

(49:56):
So you learn how to do it on systems where, if you screw up, it doesn't matter.
You have practice protect surfaces, things that maybe have a little
bit more sensitivity, but they're still not mission critical.
It's like getting to Carnegie Hall: practice, practice, practice.
And then you have the crown jewels, keys to the kingdom, high value assets,
whatever you wanna call them.
And so go through it in that order, so that you can learn, you can practice,

(50:21):
and then you can go to the bigs, right?
So that's the first thing.
You don't ever have to have anything in place as a prerequisite.
Secondly, don't get discouraged, right?
I mean, too many people
give up too early.
And, you know, I do a lot of work with military veterans.
We have an epidemic of veteran suicide in this country.

(50:43):
So I volunteer with some veterans organizations.
And one thing I learned from veterans is, when they're in the
military, they never give up, right?
They learn from failure,
they adapt, they overcome. Where they can't do
that is in the private sector.
When they transition into the private sector, it's really hard.

(51:03):
And that's where we get a lot of those suicides,
is that our world is so strange to them, right?
You know, once you've been through life and death, the games
that we play, the blame games and stuff we talk about,
must look just absolutely absurd.
And the things that you and I take for granted are sometimes terrifying for them.

(51:28):
Right.
I got a call from a Navy SEAL friend of mine that I was involved with.
And he said, man, I had a bad day.
I so wish I could go back to the teams.
He had to leave the SEAL teams
'cause he had a kid, and his wife was like, no.
You know, that's one of the primary reasons people leave,
'cause they love what they do.
I mean, they love it.
And he said, man, I just really wanna go back.

(51:50):
I just can't deal with this.
It was such a bad day.
Well, what happened that was so bad?
Well, I had a job interview.
It was terrifying.
He'd never been on a job interview before.
I mean, think about that, right?
Yeah.
And another guy said to me, being a Navy SEAL was the best job I ever had.
I said, really?
Why?
He said, 'cause I didn't have to make any decisions, you know?

(52:12):
And then I was talking to somebody who studies this, and he said, the
average person in one of these high level military units makes
50 decisions for themselves a day.
Right?
So.
They're told when to get up, when to go to chow.
You have very few choices on what you're going to eat, right?
You're told when you go to the dentist, when you go to the doctor,

(52:33):
when you go to train and work out. You're told everything.
You don't make very many decisions.
And this scientist, this neuroscientist, was telling me that the average
regular person in business, we make 500 decisions for ourselves every day.
Right?
And so, ten times more.
And it's that disconnect that sometimes gives people trouble

(52:57):
transitioning. 'Cause the SEAL, who had done things that would just
terrify the heck outta me, was scared of things like going
into a business meeting. He said this to me:
I would rather go into a gunfight than go into a business meeting.
He said, if there's a gunfight here, you get

(53:18):
behind me, I'll take care of you.
But if there's a business meeting, I wanna follow you into the room.
And I thought that was a really interesting perspective, because
the things that are just common to me are uncommon to him.
Well, bringing it back to our subject: let no one think that people who
fear making decisions are unintelligent.
As a matter of fact, there's good psychology,

(53:38):
good science, that says that the more intelligent you are, the more difficulty
you have in making decisions, because you can see all aspects of them.
The people who scare you are the people who make decisions right away, without
any facts or any interest at all.
Now, there is a place for that.
There's a time for that; you don't want people thinking about things all the time.

(53:59):
But I just don't want anybody to disparage people by saying that this
fear of the business environment makes them less intelligent,
'cause it doesn't. It's probably a good sign of intelligence. But ultimately,
if you're gonna be a leader, you have to be willing to make decisions.
Right.
So, did you ever see Band of Brothers?
Yes.
My friend Doug was a big fan and, sorry,

(54:20):
long story, but there's a lieutenant,
I can't remember what his name is.
Anyway, he is portrayed as the incompetent lieutenant, right?
In the battle, he breaks down. One of the guys says: the lieutenant, whatever
his name is, I can't remember his name,
wasn't a bad leader because he made bad decisions.
He was a bad leader because he made no decisions.
Making decisions takes courage.

(54:41):
So you have to have the courage to analyze all the things, know what is the best
decision to make at that time, and then have the courage to make that decision.
So it's a bad thing to not have the courage to make decisions.
Right.
It's not that you're unintelligent.
You may be unintelligent, you may be very intelligent, but that's not the issue.

(55:02):
And I think if you're not incentivized to allow that courage to come out,
then you will make no decisions at all.
And it's an issue in security, because so many things happen where people
have to make decisions. They say that armies are resilient because people can make
the right decisions at the front line, wherever they are.

(55:22):
And that's something we have to consider as we put things together.
And I think it takes us full circle back to that blame and shame game:
you know, people make mistakes, and if you teach them that we're gonna blame
you every time you make a mistake,
you're not gonna have people making the decisions and taking
the actions they need to.
I was talking to a military historian, and he was telling me, you know, in his mind,

(55:44):
one of the big differences in why the United States was so dominant over the Germans
was the idea of commander's intent, right?
So commander's intent is: here's the objective, get this done.
And the commander says to do it.
And so,
when, you know, on D-Day, if all the officers get killed in your platoon, the next guy

(56:07):
takes over, and the next guy, and that's the attitude: the next guy,
and we're gonna get the thing done. In the German army, according to him, and I'm
channeling him right now, if the officer got shot, you had to call back
and get permission to continue the attack.
Right?
And actually, throughout history, a lot of times when leaders were killed,

(56:28):
the armies couldn't fight.
And culturally, one of the things that made, and makes, the US great still
is they have this concept of commander's intent that's written down. And we
need to bring that into cybersecurity.
So the grand strategic actors, the CEOs or the generals or whoever, need
to define a commander's intent on cybersecurity and say: achieve this.

(56:54):
And then you can't get in trouble when you're doing that.
Even if the op goes bad, it goes south, there're gonna
be mistakes that happen.
That's just human nature.
But if you're doing it for the right reason, that's what we want to know.
Are you doing it for the right reason?
Because the outcome has variability,
but if you're doing it for the right reason, then the trajectory is correct.

(57:19):
Last question for you: you've been
with zero trust all this time.
What do you think is the future of it?
What do you think is the next thing in security that you're watching?
Well, zero trust was designed to be strategically resonant.
There's that word again: strategically resonant to the top leaders of an

(57:41):
organization, the grand strategic actors.
So the first people who really understood it were CEOs and high level people.
And then it was designed to be tactically implementable,
using commercially available, off-the-shelf technology, at whatever level
that technology was gonna be at.
So I knew the strategy wasn't gonna change.

(58:01):
It didn't need to change.
Strategies generally don't change.
Right.
You know, we still read Sun Tzu. But tactics? Yes, tactics
change.
And people confuse strategy and tactics.
I got to study with,
I got to do a project with, a guy who was a military
strategist in the first Gulf War.
And one of the reasons we won it so quickly is because

(58:22):
of his strategic thinking.
And he applied that to business.
And I got to be on a project for about two years with him, and he
taught me all this stuff that ultimately, you know, ten years later
or so, I applied to zero trust.
He used to say, he said this to me so many times: John, most
people confuse strategy and tactics.

(58:42):
They think they're being strategic, but they're being tactical.
If you're focused on things, you're being tactical.
If you're focused on ideas, you're being strategic.
So the idea of zero trust isn't gonna go away, I don't think.
The products will get better and better, the way we do it will be
faster and faster, and its success will get greater and greater.

(59:04):
It's not going away.
Yeah.
John, this has been a fascinating conversation.
I'm so glad to have met you.
Nice to meet you, Jim.
Thank you for inviting me on your program.
And that's my chat with John Kindervag.
Love to know what you thought.
Reach out to me.
You can send me a message at our website, technewsday.com or .ca.

(59:26):
Use the contact us form, or reach out to me on LinkedIn.
And if you're watching this on YouTube, I read all the comments.
I'm your host, Jim Love.
Thanks for listening.