Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Ad Spot (00:06):
This episode is
brought to you by Zscaler.
Secure your digital transformation with a leader in cloud security.
Discover how Zscaler can protect your organization at zscaler.com.
Chris Louie (00:20):
I believe organizations should embrace generative AI and all the productivity gains that you can potentially get with that.
Always with the asterisk, always with the caveat that it needs to be done in a safe and secure way.
Ed McNamara (00:35):
In the world of
technology, heroes are everywhere.
They're overcoming disruption, delivering sustainable outcomes, and fearlessly forging the future to solve what's next.
Join me, Ed McNamara, as we meet the people and businesses driving change in our constantly disruptive world.
This is Innovation Heroes, a podcast brought to you by SHI.
(01:01):
According to the most recent State of Cloud Native Security Report, 61 percent of organizations fear AI-powered attacks could compromise their sensitive data.
The other 39 percent were either too scared or paranoid to tell the truth.
This statistic underscores not only the need for robust security measures in our increasingly digital world, but the fear AI incites in business leaders.
(01:25):
Enter Zero Trust, a security model that says that everyone inside or outside a network should be trusted equally, which is to say, not at all.
It's a popular concept widely enacted because it addresses the evolving threats of our time.
But what about AI?
Is AI always a foe when it comes to security?
The latest trends in AI demonstrate it's not only revolutionizing Zero Trust, it's
(01:46):
also making our digital defenses smarter and more resilient to cyber attacks.
To explore this fascinating intersection of AI and Zero Trust, we have two expert guests with us today.
Chris Louie and Brian Deitch are popular podcasters known for their weekly show PEBCAK.
Together, they have enlightened thousands with their insights on security.
(02:06):
Now they have been kind enough to join our podcast and delve deep into the world of AI and security through their roles at Zscaler, a leading cloud security company and an SHI partner.
Chris and Brian, welcome to Innovation Heroes.
Brian Deitch (02:22):
Thanks for having us.
When I first saw your name, I was hoping it was Ed McMahon.
You're going to hand me a check, big ol' Publishers Clearing House check.
But, uh, I guess not.
Ed McNamara (02:31):
You are incorrect, sir.
Um, thanks for having us.
Absolutely.
You're both podcasters who talk about cybersecurity, and you work at Zscaler, where I assume you spend the bulk of your days talking with colleagues, partners and customers about cybersecurity.
So guess what the topic is today?
Um, starting off, how big a topic has AI plus security been
(02:52):
since the explosion of Gen AI?
And I'll kick it over to you guys; I'm the third wheel in this party.
So you guys decide who wants to take that first softball right off the bat.
Brian Deitch (03:05):
I guess I'll go first, and I would say, earlier he had said that 39 percent of people are too scared to ask.
I think 39 percent of the people probably don't even know, though, at the same time.
But I would tell ya, every single customer meeting that I go into, you know, security is top of mind.
Everyone's freaked out about generative AI, but then it's like a two-sided coin.
(03:28):
They either don't want their employees to use it, or they're trying to figure out, how do we implement it so we're one step ahead of the competition day in and day out?
So there's this weird area of, like, it's gross and yucky.
Don't touch it.
But at the same time, we need it.
We need it.
Give us some more.
Right?
And so when looking at zero trust, it's like, what's the
(03:48):
appropriate way of doing it?
And just blocking it isn't the way; that breaks collaboration.
And if anything, users are savvy.
They're going to look for a way to get around it.
So you have to figure out what's a way to safely enable the adoption of generative AI without introducing any risks, but make them more agile.
What do you think, Chris?
Chris Louie (04:08):
I think Brian's right, that a lot of organizations see it as a binary decision.
We either allow it or we block it.
But what they don't know is there are shades of gray in the middle.
It's like, how can we enable it, how can we safely enable it?
And there's lots of tools out there that can help organizations enable it and, uh, have it increase their productivity or improve their security.
(04:29):
And just as, uh, a topic even on our own podcast, it's almost a joke now.
We said, this is our ChatGPT story of the week, this is our AI story of the week.
There's just something every week that's new with the intersection of AI and security.
Ed McNamara (04:44):
When you see that people are using it correctly, like, as Brian and Chris you would say, boy, these guys are really getting it right.
Are there actually characteristics of organizations that tend to get it right?
And I'm not saying business verticals or industries or things like that.
Like, do you see actual structure or culture or characteristics where
(05:04):
it's like, yeah, these companies are actually using it the way that we think?
What's the kind of company that pops into your mind, since you have access to so many from your day to day?
Chris, maybe I'll start with you on that.
Chris Louie (05:17):
I think the profile of a customer that's using AI correctly is one that embraces it, but also recognizes the risk of it as well.
And ultimately, just like with everything in security, it's up to each organization to decide what's the correct level of risk tolerance, because, uh, gen AI can be used for good.
(05:39):
It could turn a good programmer into a great programmer.
It can turn a great programmer into a principal programmer.
But it could also be used for nefarious reasons as well: if I accidentally upload my company source code to ChatGPT, well, that's now part of the public domain.
And I don't want that out there.
So striking that balance, educating users how to properly use it, and then
(06:00):
putting the appropriate guardrails around it would be the profile of the ideal company on embracing AI.
Ed McNamara (06:07):
And Brian, you have chief technology evangelist as your title over at Zscaler.
From a leadership perspective, when you see, you know, your peers at the C-level there, how do you, uh, how do you see who's effective at kind of fostering, you know, what would be a great use of AI in their environment?
(06:29):
Brian Deitch (06:32):
I would say, I guess, let me think about this.
To appropriately take advantage of AI is to really understand the risk.
Right?
And we talk about, you know, maybe source code.
It's the IP, it's the things that are going out there, and trying to figure out, like, when it becomes part of the public domain, you're training the AI, right?
So you have to think about it through this lens of, like, if that's the worst-case
(06:53):
scenario, what's the best-case scenario?
Like, how do we adopt this platform?
And the way we look at this, it would be around, you know, data protection: really understanding what is critical to you and your organization and making sure that it doesn't go somewhere it doesn't need to go.
Now, the executives that are embracing this, they're doing it from, I
(07:13):
guess, a multifaceted view. You know, maybe it's a large bank and they're using it to, um, answer questions from, uh, like a chat perspective, right?
Someone's coming in and asking questions about a bill pay.
I have no idea.
That's one great use of it.
From the flip side, what about your employees that are out there doing it as well?
And so enabling them to do more, right,
(07:34):
is ultimately what we want to do.
We don't want to be an inhibitor to being able to do things, but we want to do it securely.
And so having these conversations, and you can't train everyone the same.
Not everyone's, you know, going to learn on the same curve, but instead put technologies in place that are going to allow them to work together.
Still leverage these things, but not, you know, not introduce
(07:56):
any risk to the organization.
So a great example would be, hey, I have a developer and we want them to go out there and try to solve a problem with Python or whatever.
That's fine.
But as they start to upload that, they copy their code and they send it over.
If there happens to be source code that's part of our organization, then we block it.
We'd be able to understand those prompts and be able to enable it.
Now, what that's going to do is, we think, tell that user, like, wait a second,
(08:20):
I can't be taking sensitive information.
Maybe I need to restructure my question: how do I work through this for loop, or something like that, and be able to embrace it.
I've also seen it being used widely for people that may or may not write non-sensitive emails. Not sensitive like it has bad information, but I'm just a little too harsh to my, to
(08:41):
my minions that I, that I govern.
Right.
And so it's just like, hey, and they'll anonymize it and say, uh, you know, make this a little bit softer, right, because I'm talking to somebody that, you know, we don't have a personality conflict with, to be able to embrace that.
So I think that there's a lot of good things that come from generative AI as well.
Ed McNamara (09:01):
Right.
So, um, I think the zero trust conversation has been happening longer than the gen AI conversation, at least in the public domain anyway.
Um, first I wanted to ask, where are businesses today in terms of their zero trust adoption path, and has the rise of AI had any impact, um, either,
(09:24):
you know, just accelerating that, or just any impact at all?
Brian Deitch (09:31):
I think the biggest problem with zero trust is, I guess, multifaceted. One, it's old enough to have child support and alimony at this point in time.
And then two, what is zero trust?
Right?
I think if we asked everyone in the room right now, well, me and Chris would probably have the exact same answer, but I'm not sure everybody else would.
And so one thing that's critical, uh, when talking with customers is trying
(09:51):
to outline exactly what does zero trust mean to you, and then back into it.
Right.
I would love to believe that Zscaler can boil the ocean and do all things, but the reality is it can't.
Right.
And so we want to be able to solve those problems and then introduce it.
AI and ML do an incredible amount of the heavy lift.
We use AI and ML to do data protection assessments.
(10:12):
And when you just look at it through that lens, Chris, I'll give you a second here to answer, but I'll just focus in on data protection.
Data protection has been incredibly difficult to do.
And if you talk to anybody, they're just like, it's all false positives, we don't want to do it.
Right.
And it's because we all did it the wrong way.
We started down the path of, like, what was the old-school way of doing it?
(10:34):
It was, uh, you know, regexes and dictionaries, and it generated all these false positives.
And then we're like, well, maybe we should do different things.
But if I sit in an exchange in which I see every single user, every single workload that's going through there, at every single destination, I start to know your data better than you do.
(10:55):
And by running AI and ML against that to create these models, then I can produce a different type of, uh, dictionary.
I'm sorry, not dictionary.
I can create DLP-based, um, classifications for you, so you don't have to do the heavy lift on the backend.
And then you're just setting policy.
So what took a year, maybe two years to do back in the day is now being
(11:18):
done in a fraction of that time.
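To make Brian's contrast concrete, here is a minimal, hypothetical sketch of dictionary-style DLP versus a learned classifier. The patterns, sample documents, and use of scikit-learn are invented for illustration; this is not how Zscaler's engine is built, just the shape of the idea.

```python
# Hypothetical illustration only -- not Zscaler's DLP engine.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Old-school DLP: a dictionary of patterns. Anything that matches is flagged,
# which is why "confidential" in a lunch invite becomes a false positive.
LEGACY_PATTERNS = [re.compile(p, re.I) for p in [r"confidential", r"\b\d{3}-\d{2}-\d{4}\b"]]

def legacy_dlp(text: str) -> bool:
    return any(p.search(text) for p in LEGACY_PATTERNS)

# ML-based classification: train on examples of what actually is sensitive for
# this organization (labels would come from observed, reviewed traffic).
train_docs = [
    "q3 board deck draft, confidential, do not distribute",    # sensitive
    "customer ssn 123-45-6789 attached for verification",      # sensitive
    "keeping lunch confidential so the surprise party works",  # benign
    "public blog post draft about our new office",             # benign
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_docs, train_labels)

sample = "keeping lunch confidential so we can surprise Dana"
print("legacy DLP flags it:", legacy_dlp(sample))                  # True (false positive)
print("model risk score:", model.predict_proba([sample])[0][1])    # probability it is sensitive
```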
Chris Louie (11:23):
I see Zero Trust definitely as a business driver.
It's really transformational compared to how we were doing secure remote access maybe just 10 years ago or so.
But it really does help companies scale based on their requirements.
Basically, for any of the requirements that they lay out, there's a Zero Trust solution already
(11:43):
and a model that would fit that. Having the pandemic, uh, fairly recently also shows that we can work from anywhere, and those users need a safe and secure way to do it.
And then just sprinkling on top of that with AI, that's the cherry on top of Zero Trust.
As Brian mentioned, if you have somebody in the middle that sees
(12:03):
every transaction, we see every user going to every destination.
Well, that's data.
And we can train on that data, and all of a sudden we can start seeing these patterns.
Well, if I have a person in the finance group and I see that they're the only ones that access finance applications, well, that's a logical rule that says people in the finance group can access finance applications.
(12:24):
And when I talk to customers, they tell me, yeah, we've had this data, we can't really do anything with it.
It's going to take us another few months to really analyze everything and create the rules.
But, you know, what if I told you there's a system that uses AI and ML that analyzes those traffic patterns?
And with, you know, just a few clicks, you can create these access policies to significantly reduce the amount of people that actually need
(12:46):
access to a certain application.
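As a rough illustration of what Chris describes, here is a toy sketch that mines observed user-to-application logs and proposes group-based allow rules. The log data and function names are invented; a real system would add confidence scoring, anomaly filtering, and human review before any policy is pushed.

```python
# Hypothetical sketch: derive least-privilege access rules from observed logs.
from collections import defaultdict

# Invented sample access logs: (user, group, application)
access_logs = [
    ("alice", "finance", "netsuite"),
    ("bob",   "finance", "netsuite"),
    ("alice", "finance", "expense-portal"),
    ("carol", "engineering", "gitlab"),
    ("carol", "engineering", "jenkins"),
]

def propose_rules(logs):
    """Group observed app usage by department and propose allow-rules.

    A real product would add statistical confidence, anomaly filtering,
    and human review; this just shows the shape of the idea."""
    apps_by_group = defaultdict(set)
    for _user, group, app in logs:
        apps_by_group[group].add(app)
    return {group: sorted(apps) for group, apps in apps_by_group.items()}

for group, apps in propose_rules(access_logs).items():
    print(f"ALLOW group '{group}' -> {apps}; default deny everything else")
```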
Ed McNamara (12:48):
So I saw that at, uh, Zenith Live, which is a major event for Zscaler. Your CEO, uh, Jay Chaudhry, kicked things off by talking about the competitive advantages of Zero Trust architecture and AI together.
Um, can you guys help us, you know, connect those dots or share what his company's vision is?
And, just, you know, what is the competitive
(13:11):
advantage that you guys have?
So here's the disclaimer, here's the ad opportunity in the podcast here: let's try not
Brian Deitch (13:20):
to be too, uh, salesy right
here, but when you appreciate it, please.
Yeah.
But when you look at Zscaler, we really do start at zero, right?
And by that, what I really mean is, like, nothing has access to anything, right?
There is no world in a Zscaler organization that has done this adoption
(13:40):
in which you have routable networks between your branch offices and your data center and your private cloud.
In a Zscaler world, we don't trust anything.
Every single user is its own little island; every single branch and every single little VLAN becomes its own little island.
Every single application that is in a data center, it might as well be Office 365, because you can't get there unless we know who you are.
(14:04):
You have authorization to do so.
I can posture you, but you will never again be in the same physical network as an application.
And if you think about that, users really are your biggest liability in life.
And if I can pull those users off the network and only give them access to the applications they need to do their job, that becomes exponentially easier.
And then what,
(14:25):
or, I'm sorry, becomes exponentially safer. And really the icing on the cake, as Chris has already talked to you regarding the finance person, right, is the AI behind the scenes that does that policy mapping for you.
Because as you onboard these things, we're gonna make sure that you reduce that attack surface, right?
If you have a user on VPN today, they can probably talk to 567,000
(14:51):
applications that are on that network.
They might only need to talk to maybe 50 or 60 of them.
I have no idea.
How do you even begin to do that in a legacy platform?
But in the Zscaler world, you start off where there is no access to anything.
We give access to, uh, user-specific Google groups to the applications that they need.
And the AI in the background helps marry those two, the policy
(15:14):
decision making, uh, to that.
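A toy sketch of what "start at zero" can look like as a policy check: everything is denied unless a user's group is explicitly mapped to the application and the device passes posture. The mappings and posture flag are invented stand-ins, not actual Zscaler policy objects.

```python
# Toy default-deny policy check -- invented data, not a real Zscaler policy engine.
from dataclasses import dataclass

# Explicit allow-list: group -> applications. Anything not listed is denied.
POLICY = {
    "finance": {"netsuite", "expense-portal"},
    "engineering": {"gitlab", "jenkins"},
}

@dataclass
class AccessRequest:
    user: str
    group: str
    app: str
    device_compliant: bool  # stand-in for device posture checks

def authorize(req: AccessRequest) -> bool:
    """Allow only if the user's group is explicitly mapped to the app
    and the device passes posture; everything else falls through to deny."""
    if not req.device_compliant:
        return False
    return req.app in POLICY.get(req.group, set())

print(authorize(AccessRequest("alice", "finance", "netsuite", True)))       # True
print(authorize(AccessRequest("alice", "finance", "gitlab", True)))         # False: not mapped
print(authorize(AccessRequest("carol", "engineering", "jenkins", False)))   # False: posture fails
```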
Chris Louie (15:17):
And I'm going to start with a summary of one of Bruce Schneier's quotes, where he says you can't protect, you can't defend; the best hope that you have is that you can detect and respond.
And I definitely, truly believe that that's real.
And that's where, in cybersecurity, we've come up with models called assume breach, and assume breach means assume the bad guys are already in your network.
(15:38):
Assume your users are going to click on bad links and download bad files.
Now that we've assumed that, how do we take an approach that reduces the blast radius, limits the amount of damage that can be done from a compromised user or a set of compromised credentials?
And that's where Zero Trust comes in.
You give the user exactly the access that they need and nothing,
(16:00):
nothing further than that.
And then, also speaking of defensive strategies, there's also active defense as well.
So you can have deception technology, or decoys, out there that you place, like fake mail servers, fake Active Directory servers.
And when the attacker comes in, it's very hard to discern what's real and what's not. The attacker gets one chance to get it right, to get on the actual Active Directory server.
(16:23):
But if they happen to land on one of the decoys, you can run an automation to contain that user that says, you landed on a decoy, I'm cutting off your access until I can figure out why you landed on that decoy.
So there's active defense as well.
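Here is a hypothetical sketch of the decoy-triggered containment Chris describes. The host names and the quarantine call are placeholders; in practice the containment step would go through your identity provider or access-policy APIs.

```python
# Hypothetical sketch of decoy-triggered containment -- invented names,
# not a real deception product's API.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("deception")

DECOY_HOSTS = {"mail-02.corp.example", "dc-backup.corp.example"}  # fake assets

def quarantine_user(username: str) -> None:
    # Placeholder: in practice this would call your identity provider or
    # access-policy API to revoke sessions and deny new connections.
    log.info("Quarantining %s: sessions revoked, access policies set to deny", username)

def on_connection_event(username: str, dest_host: str) -> None:
    """Nobody legitimate should ever touch a decoy, so any hit is high-fidelity."""
    if dest_host in DECOY_HOSTS:
        log.warning("%s touched decoy %s; starting containment", username, dest_host)
        quarantine_user(username)

on_connection_event("intern7", "dc-backup.corp.example")
```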
Brian Deitch (16:38):
Yeah, that kind of lends into the overall messaging of, like, what can you do with that traffic, allow or deny?
We've had that for decades.
We can steer the traffic.
You can isolate it.
You can deceive it.
Or you can coach the user through a different type of workflow as well.
So I think that's, like, on top of zero trust, being able to have those different
(17:01):
levers to pull and steer that traffic, whether it's deceiving or just, you know, isolating the user, becomes exponentially a better way of locking down the network.
And at the end of the day, it makes it easier for the users to do more without being
Ed McNamara (17:17):
told not to.
So in terms of, like, threat hunting then, um, a lot of the interesting keynotes and talk tracks, you know, spoke to that, um, at Zenith Live.
And one of them was about how threat hunting is evolving with AI.
Um, can you guys address that?
How is AI, you know, transforming threat hunting practices?
Chris Louie (17:39):
It's going to be a significant help to threat hunting.
There's a few trends, and I think our listeners can universally agree, at least in cybersecurity, there's things like SOC burnout: our security operations centers, they're understaffed, they're overworked.
There's things like alert fatigue.
They just get so many alerts.
There's no way to tell which ones are real, which ones are relevant, which
(18:00):
ones are the ones that we should really look at.
Um, you know, there are skill gaps and staff shortages as well.
I think the latest numbers are over 700,000 cybersecurity job openings in the U.S. that we just can't fill.
There's just not enough people to fill that.
And I think AI is going to help all of those points there.
Um, part of the reason that we have a skill gap is there's just simply
(18:24):
not enough people, and you can't hire your way out of the problem.
But you can, like I said earlier, you can make a good analyst a great analyst with AI.
You can make a great analyst a principal analyst with AI, using things like analyzing data, contextualizing information with the data fabric.
One of the recent acquisitions we've made, Avalor, does this exact thing.
(18:45):
It ingests all these data sources.
It helps you contextualize the information, and it will tell you, you know, did Chris log in from Japan?
Is that a good thing or is that a bad thing?
Well, it depends: if he was in San Francisco 15 minutes ago, that's a bad thing.
But if he has a business trip planned for Japan, well, then that's fine.
So you have the same data point, but you have to wrap some context around it.
(19:05):
And that's where the AI and the data fabric will help with that.
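The "login from Japan" example boils down to an impossible-travel check enriched with context. Below is a minimal sketch under that assumption, with invented coordinates and a stand-in "approved trips" set; it is not Avalor's actual logic.

```python
# Hypothetical sketch of the "login from Japan" example -- an impossible-travel
# check enriched with context. Not Avalor's actual logic, just the shape of it.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Login:
    user: str
    lat: float
    lon: float
    minutes_since_last: float

def distance_km(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance between two points, in kilometers.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def is_suspicious(prev: Login, curr: Login, approved_trips: set) -> bool:
    """Flag if the implied travel speed is physically impossible AND there is
    no context (e.g. an approved travel plan) that explains it."""
    km = distance_km(prev.lat, prev.lon, curr.lat, curr.lon)
    hours = max(curr.minutes_since_last, 1) / 60
    implied_speed = km / hours
    has_context = curr.user in approved_trips
    return implied_speed > 900 and not has_context  # faster than a jet

sf = Login("chris", 37.77, -122.42, 0)
tokyo = Login("chris", 35.68, 139.69, 15)
print(is_suspicious(sf, tokyo, approved_trips=set()))       # True: bad thing
print(is_suspicious(sf, tokyo, approved_trips={"chris"}))   # False: trip on file
```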
Ed McNamara (19:11):
Chris, I'd love to stick with that for a second, because fatigue, to me, seems like it would be a real thing.
I mean, there's volumes written, in every different medium, about the human body, the mind, being able to be under attack or stress for a certain amount of time, and then there's going to be a breakdown.
Like, you guys talk to customers all the time.
(19:31):
What's the human toll that constant vigilance is taking, and is it being addressed?
Is that a real thing?
Is that what you see out there when you're meeting with customers? And are leaders even considering that? Just speaking to our leaders out there.
Chris Louie (19:47):
I think SOC burnout and alert fatigue are definitely real.
When I talk to customers and their security teams, almost universally, like when I said our listeners can universally agree with that, that's real-world data that I've taken: they're underworked, I mean, sorry, they're overworked and they're understaffed, and just having those thousands, hundreds
(20:09):
of thousands of alerts coming in, not knowing what's relevant, and the fear, the fear that you're going to miss something, that you'll be the next large breach because you saw this alert, you didn't think it was relevant, but later on that alert led to some catastrophic breach.
There's always going to be that fear of that.
Brian Deitch (20:29):
Yeah, I would say, talking with a lot of CISOs, one of their biggest problems is, like, they'll come to me and say, Brian, I have 100 CVEs that I need to figure out how to address.
The reality is only 5 to 10 percent of those CVEs will ever be breached.
I don't have enough human power to go out there and patch all 100.
(20:51):
I need a tool that's going to go out there and tell me which 5 to 10 percent of those CVEs are going to be breached most likely, right?
And that's where Avalor comes in, to be able to prioritize those CVEs.
And then that's also where the Zero Trust platform comes in.
Because if I could, in theory, move, let's say, half of those CVEs back
(21:11):
behind the Zero Trust Exchange, where it's not reachable, it's not breachable, you don't really have to worry about that.
So you take those off the table.
Then you're left with 50.
Then how do you prioritize that?
And that's where Avalor comes in and says, out of this remaining attack surface that you have, this is where you should focus your efforts.
So if you are the SOC person, right,
(21:33):
and you're listening to this, you know that you're not going to lose your job.
Instead, you're going to use this, as the AI is going to complement your job and help you prioritize what you should be looking at in the long run.
And the CISOs, right, I always feel like they're always wondering if they're going to wake up and lose their job, right?
It's just like, it's one of those crazy things.
Like, you're just one breach away from having a terrible day in the office.
(21:54):
And so this is just a tool that helps them allocate time, resources, and energy.
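As a rough sketch of the triage Brian outlines, the example below drops findings on assets that are not reachable (because they sit behind the exchange) and ranks the rest by exploit likelihood weighted by asset criticality. The data and scoring are invented, not Avalor's model.

```python
# Hypothetical sketch of CVE triage: remove unreachable findings, rank the rest.
# Invented data and scoring -- not Avalor's actual model.
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    asset: str
    internet_reachable: bool   # False if only reachable through the exchange
    exploit_likelihood: float  # e.g. an EPSS-style probability, 0..1
    asset_criticality: int     # 1 (low) .. 5 (crown jewels)

findings = [
    Finding("CVE-2024-0001", "legacy-erp",   False, 0.62, 5),
    Finding("CVE-2024-0002", "public-api",   True,  0.48, 4),
    Finding("CVE-2024-0003", "print-server", True,  0.03, 1),
    Finding("CVE-2024-0004", "hr-portal",    False, 0.10, 3),
]

# Step 1: take unreachable assets off the table.
reachable = [f for f in findings if f.internet_reachable]

# Step 2: rank the remainder by likelihood weighted by how much the asset matters.
ranked = sorted(reachable, key=lambda f: f.exploit_likelihood * f.asset_criticality, reverse=True)

for f in ranked:
    print(f"{f.cve} on {f.asset}: priority score {f.exploit_likelihood * f.asset_criticality:.2f}")
```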
Ad Spot (22:02):
Are you ready to elevate
your cybersecurity strategy?
Zscaler is here to help.
As the leader in cloud security, Zscaler offers advanced solutions to protect your organization in today's digital landscape.
Zscaler's cutting-edge platform ensures secure access to the Internet and SaaS applications, safeguarding your data and infrastructure from evolving threats.
Whether you need to secure remote workforces, streamline access to
(22:25):
cloud applications, or enhance your network security, Zscaler's comprehensive suite of services is designed to keep you ahead of the curve.
Why trust Zscaler?
With a track record of innovation and leadership in cloud security, Zscaler leverages its expertise to deliver unparalleled protection.
Our zero trust architecture and scalable solutions provide robust security without compromising performance or user experience.
(22:49):
Stay ahead of cyber threats and ensure your digital transformation is secure with Zscaler.
Visit zscaler.com to learn more.
Ed McNamara (23:02):
We've seen the bad side, obviously, we just touched on that, but, um, maybe now's a great time to share a couple of examples in terms of what are some of the interesting use cases you've seen out there in the AI and security world.
Or are there any instances where you're like, well, I hadn't considered that, but somebody was using it, was really applying, uh, you know, the practices that you're
(23:25):
helping with, in a way that was impressive to you both?
And Brian, I guess I'll start with you.
I mean, I'm sure you're talking to customers all day long.
Uh, do you have one or two that stick out in your mind where you're like, wow, that's an interesting one there?
Brian Deitch (23:38):
Yeah.
So the ones that I think are doing it right and doing things kind of the cool way would be: there are generative tools out there that are free, but there are also the paid ones, right?
And so they are making significant investments to do that.
And one of the biggest reasons why they're purchasing some of these generative AI tools is that they're not training the larger model, right?
(23:59):
It's really focused just on that.
So they're instructing their users like, hey, if we're going to be doing this thing, you're going to go out and, we'll call it ChatGPT, everyone knows about ChatGPT, so they'd be on a paid version of that.
It's their own tenant.
They have control over their own data.
Now, if a user goes out to use some other type of generative AI, maybe it's DALL-E, I have no idea, rather than just block that user, they're still giving them the ability to do it.
(24:21):
They're putting them into browser isolation.
So they have the ability to interact with that generative AI, but they can only type to it.
They can only ask it certain questions.
They don't have the ability to even copy and paste into it, right?
It becomes a complete abstraction.
And what they found is that by doing so, the utility of these free applications goes down and the utility
(24:42):
of what they're paying for goes up.
And so that's really what they're trying to do: drive the adoption of what they're actually paying for, instead of just having some, you know, a weirdo with a credit card that's, you know, paying, quote unquote, for a version of something that's out there that is not under the umbrella and the protection of their organization.
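A hypothetical sketch of the routing decision Brian describes: the sanctioned, company-tenant AI tool is allowed with DLP inspection, other known AI tools are pushed into browser isolation with clipboard and uploads disabled, and everything else is blocked. Host lists and policy fields are invented for illustration.

```python
# Hypothetical GenAI access policy -- invented hosts and fields,
# not actual Zscaler configuration.
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    ISOLATE = "isolate"   # render in a remote browser, read/type only
    BLOCK = "block"

SANCTIONED_AI = {"chat.openai.com"}      # the paid, company-tenant tool
KNOWN_AI_TOOLS = {"chat.openai.com", "gemini.google.com", "labs.openai.com"}

def genai_policy(host: str) -> dict:
    if host in SANCTIONED_AI:
        return {"action": Action.ALLOW, "dlp_inspection": True}
    if host in KNOWN_AI_TOOLS:
        # Unsanctioned AI: still usable, but isolated with no paste or upload.
        return {"action": Action.ISOLATE, "clipboard": False, "file_upload": False}
    return {"action": Action.BLOCK}

print(genai_policy("chat.openai.com"))
print(genai_policy("gemini.google.com"))
```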
Chris Louie (25:00):
And I will open with something very provocative that will probably scare users, and I will say: Google is going to start listening to your phone calls.
So I'm going to take a pause there, and I'm going to wrap some context around that.
One of the things that Google said at their conference was, on Google Pixel phones, Google will start listening to your phone calls
(25:23):
to detect fraud.
And I think that's an interesting use case for AI, because if you read the news, you might even personally know some family members that have gotten, or you yourself might've gotten, these phone calls. They call you and say, I'm from the bank.
There's something wrong with your bank account.
Your direct deposit didn't show up this month.
Um, I need you to switch the account number to this temporary
(25:43):
account to protect your money. Or the other typical calls: I'm from the IRS, you have to send me an iTunes gift card or else you're going to go to jail.
And we've all gotten those calls before, and Google is going to start listening to those calls, analyzing them, using AI, using LLMs, to detect when fraud happens.
(26:06):
detect that pretty quickly.
Like my CEO is never going to call meand say, send me 5, 000 in gift cards
for a gift for this, this customer.
That's likely never going to happen.
But when you think about my parents,if you think about my grandparents and
people their age, they get this calland says, your grandson's in, in jail.
And if you don't send me this money, he'sgoing to go to jail for a very long time.
(26:28):
And that, uh, That scares them.
And you can use a gen AI for nefariousreasons and, and voice clone that
person's grandson and say, Heygrandma, I'm in jail, send me money.
So Google will start listeningto voice calls and detect fraud.
When, when that happens, I thinkthat's, that's a good thing.
If, as long as you do it safely, as longas you do it on the device, as long as
Google doesn't data mine or steal yourdata that way, I think that's a good
(26:51):
application of AI to protect the mostvulnerable group of people out there.
Brian Deitch (26:57):
I would be more comfortable if I was paying for that feature, though.
If you're not paying for it, you're the product.
I do think they're going to data mine it.
So no Pixel phone for Brian Deitch.
Maybe for grandma, though.
Ed McNamara (27:10):
Yeah.
The, uh, the deepfakes scare me.
My dad's 85, and he called me once and was asking me about this call that he got from the bank, and about three minutes in, I'm like, wait a minute.
Like, do you even have an account at that bank?
And he's like, well, no.
And I'm like, okay, then I think we're good, you know?
So, like, I had to start way back at the beginning,
(27:31):
you know, I was already getting too far down the rabbit hole.
I'm like, yeah, that is a vulnerable group for sure.
And, uh, you know, obviously they're out of the workforce, but on an individual level, that's a whole nother podcast, which I'm sure you guys have already done.
So in terms of, uh, advice for our listeners, um, for businesses looking to integrate AI
(27:52):
into security strategies, you know, what challenges are they facing?
And what advice do you have for them?
And Chris, you're a customer success architect, so maybe I'll start with you there.
Like, what are you advising customers on right now?
Chris Louie (28:10):
Um, for the most part, our advice for customers, and speaking personally: I believe organizations should embrace generative AI and LLMs and all the productivity gains that you can potentially get with that, always with the asterisk, always with the caveat that it needs to be done in a safe and secure way.
And Brian has touched on a couple of the ways that we advise our
(28:31):
customers on how to implement that correctly: using browser isolation, not allowing copy-paste, not allowing file uploads.
So you can have things like a kilobyte size limit for a transaction.
Because if I type into ChatGPT, what are some good Italian restaurants around my workplace, that's a very small transaction, but if I upload the entire source code
(28:52):
for my application, that's a very large amount of data that I'm going to be transferring.
So I can limit the number of bytes that I can transfer up to ChatGPT, um, at a pretty basic level.
Um, there's other things like using a data protection strategy, blocking keywords: block the word confidential, block the word mergers, mergers and acquisitions, source code, watermarks, and the like.
(29:16):
Um, I remember there was a story awhile back of an executive uploading
Next year, next fiscal year businessplan to chat GPT because he was
too lazy to make a PowerPoint.
So chat GPT make me a PowerPoint ofall this confidential information.
Well, guess what?
You've just exposed thatdata out out to the public.
So there are ways to put the guardrailsaround it to ensure that nothing like that
(29:40):
potentially leaks out, but it has to bedone carefully and has to be considered.
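A toy sketch of the guardrails Chris mentions: a per-transaction byte limit plus a keyword blocklist in front of a generative AI tool. The threshold and terms are invented; real DLP policies would layer on classifiers, exact data matching, and watermark detection.

```python
# Toy sketch of upload guardrails: a per-prompt size cap plus a keyword blocklist.
# Thresholds and terms are invented for illustration, not a real DLP policy.
MAX_PROMPT_BYTES = 4 * 1024  # allow short questions, block bulk pastes
BLOCKED_TERMS = {"confidential", "merger", "acquisition", "source code"}

def check_prompt(prompt: str):
    """Return (allowed, reason) for a prompt headed to a GenAI tool."""
    if len(prompt.encode("utf-8")) > MAX_PROMPT_BYTES:
        return False, "prompt exceeds per-transaction byte limit"
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return False, f"blocked keyword: {term}"
    return True, "ok"

print(check_prompt("What are some good Italian restaurants near the office?"))
print(check_prompt("Summarize this confidential FY25 business plan: ..."))
print(check_prompt("x" * 10_000))  # simulated bulk source-code paste
```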
Brian, what's your advice?
Brian Deitch (29:44):
I would say it's like the, uh, the Bulls in the nineties: whether you like it or not, it's going to happen, right?
They're just going to win.
Um, and so from a business perspective, don't be the naysayer, right?
It's gonna happen.
It's already happening, right?
And it's really up to you as leaders to meet up with the business development side of the house, um, to find out what they're doing and then to figure out, how do we,
(30:10):
how do we educate and how do we protect?
Right.
And Chris has already talked about it, I've already talked about it, but I think having a really strong relationship between the business, the development side of the house, and not being there just to always say no.
No, we're not.
No, we're not.
Because if you say, no, we're not going to do anything, and you block it at your data center,
(30:30):
all they're going to do is just go up to AWS and start making calls out to ChatGPT from there, right?
Like, they're gonna find a way out one way or the other.
So collaborate with them, work with them, figure out the best strategy with it.
And at the end of the day, like, you know, whatever organization you work at, you're all kind of playing for the same team.
And unfortunately, like, security has always been kind of like, uh,
(30:53):
a little bit of a black eye, right?
Where it's the, uh, the evil step-parent that always says no to the fun vacation or something like that.
It should be more of, you know, a closer-knit family.
We should be doing things together.
And I do believe that, uh, Zscaler is an enabler of the business, right?
We're not here to be a hurdle or a speed bump.
(31:13):
We want to be able to move to the market fast, be able to do things.
I mean, heck, if you even look at our ability to classify, you know, generative AI and LLMs that are out there, that way you can figure out what is sanctioned, what is not.
I think it's, uh, unprecedented, to be honest with you.
Ed McNamara (31:30):
And since the Bulls of the nineties were a far better defensive team than they ever get credit for, I really like what you did there, Brian, keeping us on theme.
Uh, just wanted to say thank you both so much for your time today. For listeners interested in learning more, or who would like to reach out to you directly, what is the best place to find you?
Um, yeah.
(31:52):
What's the best place to find you guys?
Brian Deitch (31:54):
LinkedIn probably, right?
Chris.
Chris Louie (31:55):
Yeah.
You can find us on LinkedIn.
It's just CH Louie, C-H-L-O-U-I-E, on LinkedIn.
You'll be able to find me.
Dad joke enthusiast.
Brian Deitch (32:05):
And then you can just find me, Brian Deitch.
I actually think this sounded like a great idea originally, but now that I'm saying it publicly, it sounds stupid, but you can also find me as The Cloud God on there, but whatever, like, you know.
Judge me if you want, you're not my real dad.
Ed McNamara (32:22):
I saw that on LinkedIn.
You also have a whole Avengers thing going on.
That's actually, like, a really good LinkedIn read.
So if anybody is out there and wants to see how a LinkedIn profile should be done, in my opinion, you know, Brian Deitch, absolutely, you've got it going on there.
So kudos to you for that.
Um, and obviously you guys have a podcast called PEBCAK, where
(32:45):
you get to speak to one another without me getting in the way.
So I would absolutely recommend checking that out.
Uh, and I believe that's weekly, if I'm not mistaken.
Brian Deitch (32:53):
Yeah.
Yeah.
Uh, once a week on Mondays.
Yeah.
We're coming up on, I think, our millionth stream.
So I think that's going to be very exciting.
Wow.
That's awesome.
That's awesome.
Ed McNamara (33:04):
Today's conversation
has provided deep insights into the intricate world of AI and security.
As we've learned from both Brian and Chris, the fusion of AI with cybersecurity presents both unprecedented opportunities and formidable challenges.
For business leaders, staying ahead of these trends is not just about protection, but about enabling innovation and growth.
We hope this episode has sparked some thought-provoking considerations on how
(33:26):
you and your organization can leverage AI to bolster your security frameworks.
Uh, Chris and Brian, uh, thank you again, and thank you to the audience for joining us on SHI's Innovation Heroes podcast.
Until next time, stay secure and keep innovating.
Thanks, guys.
Ad Spot (33:50):
This episode is brought to you by Zscaler. Secure your digital transformation with a leader in cloud security.
Discover how Zscaler can protect your organization at zscaler.com.