Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
G Mark Hardy (00:00):
Hey, does data loss
prevention seem like it's too big
(00:02):
of a task to handle, or does the idea of classifying everything in your
enterprise seem beyond your scope?
I'm going to tell you a little bit about the history of DLP,
what you can do today, and how you can tackle that big monster.
Stay tuned.
You're going to hear it right after this message.
Intro Music
(00:29):
Hello, and welcome to
another episode of CISO Tradecraft,
the podcast that provides you with the information, knowledge, and wisdom to be
a more effective cybersecurity leader.
My name is G. Mark Hardy, and today I'm going to be talking to you about
data loss prevention, or DLP for short.
Now, let's take a little trip back in time: how did we get to
(00:50):
where we are now with respect to DLP?
And in fact, what are we talking about when we talk about data loss prevention?
Essentially, we're looking at technologies that allow us to tag
or flag certain information that we want to keep on one side of a
barrier from crossing over to another.
In the early days, I remember back in the 1990s, we had something
(01:11):
that we nicknamed a dirty word list.
And this is something we had in the military when we had classified
networks and unclassified networks.
Now, there were opportunities to move information back and forth, but
of course, you wanted to make sure
it reflected the star property from the Bell-LaPadula model, for those of
us who took a CISSP many years ago, where it's okay to move information up, but you
(01:34):
can't move classified information down.
And so the idea was that if you knew of certain terms or project
names or anything else that would indicate that something was sensitive
or inappropriate to be put into the unclassified world, your dirty word
list would then pick up on a project name or the word secret or confidential
or something like that and flag it, so you could go ahead and say, Hey, maybe
(01:57):
this really shouldn't be going out.
So that was probably the earliest instantiation that I saw.
And that was almost 30 years ago.
What happened is, you had one network here that was
operating at one level, system high,
another network operating at a lower level, and this sat in the middle because there
was only one way to get from here to there.
Pretty simplistic model under the circumstances.
(02:19):
That's all we had, simplistic networks, and life worked pretty well.
Now, a little bit later as we go ahead, we find out that as risks increase, we have
not only accidental exposure or insider threats, but even external attacks.
And we have a lot more external attacks today than we ever did before.
But we moved on, let's say, to the early DLP, what we might call
(02:42):
simple content filtering, and this is probably from around the early 2000s
and included endpoint protection.
So then what was happening is, we're getting regulatory requirements,
requirements like HIPAA, the Gramm-Leach-Bliley Act, and Sarbanes-Oxley.
And this is pushing companies to go ahead and protect and restrict their
sensitive data from moving around.
(03:02):
Now, these initial DLP solutions were focused pretty much on content filtering
and endpoint-based controls, similar to what I had explained before, but now
with a little bit more sophistication.
Typically, what you'd find then is, instead of just
looking for a word, you could write a regex, a regular expression.
And regex matching allows one to go ahead and look for things such as credit
(03:23):
card numbers, three digits, a hyphen.
Or, I'm sorry, social security numbers: three digits, a hyphen, two digits, a
hyphen, four more digits. Or credit card numbers, which would be four groups of
four, 16 digits. Maybe email addresses, anything that might be considered to be
sensitive: PII, or PCI if we're looking at credit card data, and things such as that.
(03:45):
And so those are the early usage, where you'd look to match something in an
email or transferred file, because you're looking for words or phrases
or something that would suggest that it's confidential or proprietary.
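Just to make that concrete, here's a minimal sketch in Python of that kind of pattern matching. It's purely illustrative; the patterns and the sample text are made-up examples, and a real DLP engine would validate matches far more carefully.

```python
import re

# Hypothetical example patterns an early DLP filter might use.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # 123-45-6789
    "credit_card": re.compile(r"\b(?:\d{4}[- ]){3}\d{4}\b"),    # 4111-1111-1111-1111
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "dirty_words": re.compile(r"\b(secret|confidential|proprietary)\b", re.IGNORECASE),
}

def scan(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in the message."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group()))
    return hits

msg = "Last week we held a confidential meeting. My SSN is 123-45-6789."
print(scan(msg))  # flags both the dirty word and the SSN, with no sense of context
```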
The difficulty, though, is you're going to end up with a high false positive rate.
Why?
Because it has no comprehension whatsoever as to the context.
(04:06):
So if I have something where I was saying things such as, instead of
classifying a document as confidential, I would say, last week we held
a confidential meeting with our client, and it went really well,
that would get flagged, a false positive, even
though you didn't disclose anything that you needed to protect.
And so on the keyword matching, it came up a little bit short,
(04:26):
but it was better than nothing.
And on the endpoints, what would happen is, you're looking at data movement.
So what happens when I stick a USB device in there?
Someone's going to exfil data.
I can identify that and block USBs from working.
Of course, back then we had the old PS/2 mice and keyboards,
which is a different port.
And today, if you have an external mouse or keyboard, it is pretty much
(04:48):
guaranteed to be a USB connection.
If you block USB, then you can't operate the keyboard, you can't operate the mouse.
That becomes a little bit of a problem.
And you had tools like DeviceLock; you could disable that.
But the problem with that might be that you disable functionality,
because if you don't discriminate between a USB data device and a keyboard or
(05:09):
a mouse, you disable both. And then a little bit later, of course, you have, for those of us who are
familiar with it, the concept of a Rubber Ducky, which was a USB device that declared
itself to be a keyboard and then would allow you to queue up a keystroke injection.
That's a little bit later, but let's not even worry about that.
But again, high false positives were a problem due to lack of context.
It didn't scale very well when you increase the data volume, because now
(05:32):
if you're trying to move things at a much faster speed, your processor
might not be able to keep up.
And a lot of times you couldn't support complex file formats, or
even things like structured data.
Or what it used to do is look for your .doc files and review those,
or you say you can't send a .zip.
So what do you do?
You rename the file extension.
(05:52):
Instead of zip, it's zap.
And then it goes, oh,that's not interesting.
Off you go.
And these are almost trivial to go ahead and get around.
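That renaming trick, by the way, is exactly why later tools stopped trusting extensions. Here's a minimal sketch, just as an illustration, of checking a file's magic bytes instead of its name; the signature list is a tiny, made-up sample, not a complete catalog.

```python
# Identify a file by its magic bytes rather than its (easily renamed) extension.
MAGIC_SIGNATURES = {
    b"PK\x03\x04": "zip/office document",    # .zip, .docx, .xlsx all start this way
    b"%PDF": "pdf",
    b"\xd0\xcf\x11\xe0": "legacy office (.doc/.xls)",
}

def sniff_type(path: str) -> str:
    """Return a best-guess file type from the first few bytes, ignoring the extension."""
    with open(path, "rb") as f:
        header = f.read(8)
    for magic, label in MAGIC_SIGNATURES.items():
        if header.startswith(magic):
            return label
    return "unknown"

# Renaming report.zip to report.zap doesn't change its header:
# sniff_type("report.zap") would still come back as "zip/office document".
```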
Now, for the next phase, let's take it in roughly five-year increments, somewhere around 2005 to 2010.
We moved on to advanced content inspection and policy enforcement.
(06:13):
So now what's happening is, we're in the context of data breaches.
Remember the TJX breach that took place back in 2007. There's more
regulations coming, and so as a result we have to up our game to be able to defend
against the loss of sensitive information.
So some of those technical approaches would include things such as
fingerprinting, where you could create a unique hash of a sensitive document
(06:35):
or some structured data set, et cetera, so that any attempt to move that
block of data, you get a hash match.
You have a signature match and you say, Nope, can't do it.
Not allowing it.
And so now I could say, I have this database.
It has all my client data.
Here are all the files that pertain to customer X.
I record those fingerprints, and any time there's an attachment,
(06:57):
it's pretty trivial and pretty quick to produce a hash of a
document, no matter how big it is.
It goes quickly.
And you say, yep, that's a direct match.
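Here's a minimal sketch of that fingerprinting idea, assuming nothing more than a folder of known-sensitive files and SHA-256 hashes; the folder name is a placeholder, and real products also do partial or rolling hashes so a small edit doesn't defeat the match.

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """SHA-256 of a file, read in chunks so file size doesn't matter."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Build the fingerprint database from files we already know are sensitive.
sensitive_dir = Path("customer_x_files")  # hypothetical folder
known_hashes = {fingerprint(p) for p in sensitive_dir.glob("**/*") if p.is_file()}

def is_sensitive(attachment: Path) -> bool:
    """Exact-match check: does this attachment hash to a known sensitive file?"""
    return fingerprint(attachment) in known_hashes
```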
Now, of course, you're already thinking about what's going on here.
We have to say, how do I know that this is sensitive?
How do I tell somebody that you need to add this file to the database?
I have to have the concept of the classification engine.
(07:17):
I want to be able to categorize data, and it could be public
data, which is the lowest level.
It could be internal, it could be confidential, or it
could be highly confidential.
That sounds a little bit like Purview's defaults;
we're going to get to that in a little bit.
And so now what happens is, you can add tools like network-based
data loss prevention.
It's going to monitor the network traffic.
(07:37):
Now, this is pre-Snowden, so an awful lot of stuff was going over HTTP.
Or even file transfer protocol, FTP, or SMTP, mail going back and forth.
And this stuff was all in the clear.
And so as a result, it was pretty trivial to go ahead and grab these
things, inspect them, and sit in the middle, and tools like Websense
would go ahead and pioneer this network-based filtering for sensitive content.
(08:01):
Now, what about a scanned document?
There's a solution for that, and that was being able to look at images and say, we
could build an OCR engine, and that OCR, optical character recognition, engine could
then go ahead and translate that into ASCII text, obviously a lot less space
than it would require to store that image.
But now I can run operations on it, and I can go back and use my matching algorithm
(08:25):
from before to say three digits, a hyphen, two digits, a hyphen, four digits.
Looks like there might be a social security number in that scanned document.
We probably want to do something about that.
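As a rough sketch of that OCR-plus-regex flow, here's what it might look like assuming the pytesseract and Pillow packages are installed along with the Tesseract engine itself; the file name is just a placeholder.

```python
import re

from PIL import Image          # Pillow
import pytesseract             # Python wrapper around the Tesseract OCR engine

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scan_image_for_ssn(image_path: str) -> list[str]:
    """OCR a scanned page into text, then run the same regex we use on plain text."""
    text = pytesseract.image_to_string(Image.open(image_path))
    return SSN_PATTERN.findall(text)

hits = scan_image_for_ssn("scanned_form.png")   # hypothetical scanned document
if hits:
    print(f"Possible SSNs found in scan: {hits}")
```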
And now what we can do is, we can start to centralize our policy management.
We can define rules for different types of data.
And with a tool that works enterprise-wide, we can enforce things like block
(08:46):
all PII from being sent over email.
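To make that kind of centralized rule concrete, here's a minimal sketch of what a policy like that could look like in code; the rule names, channels, and detector functions are hypothetical stand-ins, not any particular product's policy language.

```python
import re
from dataclasses import dataclass
from typing import Callable

@dataclass
class DlpRule:
    name: str
    channels: set[str]                  # where the rule applies, e.g. {"email"}
    detector: Callable[[str], bool]     # returns True if the content matches
    action: str                         # "block" or "alert"

def contains_pii(text: str) -> bool:
    # Placeholder detector; in practice this would be the regex/ML engine above.
    return bool(re.search(r"\b\d{3}-\d{2}-\d{4}\b", text))

RULES = [
    DlpRule("Block PII over email", {"email"}, contains_pii, "block"),
]

def evaluate(channel: str, content: str) -> str:
    """Return the action of the first matching rule, or allow by default."""
    for rule in RULES:
        if channel in rule.channels and rule.detector(content):
            return rule.action
    return "allow"

print(evaluate("email", "Customer SSN: 123-45-6789"))   # -> "block"
```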
Alrighty, again, that works great when your information
and your data is structured.
But in unstructured data, it's a little bit harder.
Again, if you have to update your classification rules, there's
a lot of maintenance involved.
If you miss something, then all that stuff keeps going by.
And of course, once it's past your filters, it's lost.
(09:08):
And with a lot of frequent false-positive blocking, the users weren't
all that thrilled about that.
Now, let's go to another phase where we would have a unified DLP,
let's say the first half of
the teens, the 2010s or so.
Now we're going to integrate with the same endpoint agents.
And what we have is cloud adoption
and mobile workforces, this change in our behavior, instead of just client
(09:32):
server, here's my endpoint, here's my server, and then maybe server to server.
Now, what happens?
I've got mobile workforces.
They're all over the place.
They're on laptops.
They're coming in from a lot of places.
I have cloud adoption, where I'm moving things up into Amazon and
Microsoft and Google and all the other cloud services that are out there.
And that's a problem, because now I'm not running like a
(09:54):
perimeter type of a defense.
I don't have a perimeter anymore.
So where do I go ahead and put these DLP things?
It turns out that some vendors started integrating endpoint and
network and cloud DLP solutions.
So you had a single pane of glass.
You could monitor endpoints, monitor your networks, monitor cloud environments.
And then you could take a look at these unified DLP platforms and determine
(10:15):
to say, it looks like we've got some information that might be moving from
the cloud to an endpoint, or maybe going ahead and moving across networks.
And we could go ahead and identify it.
What we came up with is a concept for behavioral analytics.
And UEBA, if you remember that acronym.
User and Entity Behavior Analytics, UEBA.
(10:38):
You could allow DLP to detect anomalies based on user activity.
For example, if a person is normally doing a certain amount of transaction volume
and all of a sudden there's a huge peak, like they're exporting a ton of files or
diagrams or internal things, you could flag that as a potential insider threat.
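Here's a toy sketch of that kind of volume-based flag, assuming nothing more than a list of a user's recent daily transfer counts; real UEBA products model far more signals than a single z-score, so treat this as illustration only.

```python
from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flag today's transfer volume if it sits more than `threshold`
    standard deviations above the user's recent baseline."""
    if len(history) < 2:
        return False                     # not enough baseline to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu                # flat baseline: any increase stands out
    return (today - mu) / sigma > threshold

# A user who normally moves ~40-60 files a day suddenly exports 900.
baseline = [42, 55, 48, 51, 60, 47, 53]
print(is_anomalous(baseline, 900))       # -> True, worth a look as a potential insider threat
```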
Also, you could start to look at data sensitivity, not based on a pure list of
(10:59):
classifications and dirty words, if you remember back in the old days,
but now look at the who, the what, the when, the where, the
how the data were accessed.
Provide some context, so that if you said, I've got sensitive data transfer
with internal systems, that should be okay because it's not going out.
Even though it's going across the network, it's going from one endpoint to another.
(11:20):
The destination is still internal.
And the route is still internal, as compared to something
that's going external, which we say, that violates this rule.
We ought to block it.
Now we could go ahead with the SIEMs and integrate this DLP data so that we could
correlate it and do some threat analysis.
So things like Splunk and ArcSight, they would ingest the DLP alerts.
(11:41):
And now you'd have deeper insights as to where potential problems might be.
Also, another innovation that really started to come of age then is tokenizing
data upon detection of sensitive information.
So you might say, I do need to move it from A to B.
And A is my Los Angeles office, B is my New York office.
They're both internal, but I've got to go across the public internet.
(12:02):
So how about we go ahead and tokenize it, so we agree on something: if
the client name is sensitive or some project is sensitive, we
just substitute something for it.
And then we move it across, and then we re-substitute for the
second user what was really there.
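A toy sketch of that substitution idea is below; the token format and the shared lookup table are made up for illustration, and a real system would protect that mapping far more carefully than an in-memory dictionary.

```python
import secrets

# Shared mapping both offices agree on; in practice this lives in a protected vault.
token_map: dict[str, str] = {}

def tokenize(text: str, sensitive_terms: list[str]) -> str:
    """Replace each sensitive term with an opaque token before transit."""
    for term in sensitive_terms:
        token = token_map.setdefault(term, f"TKN-{secrets.token_hex(4)}")
        text = text.replace(term, token)
    return text

def detokenize(text: str) -> str:
    """Swap the tokens back for the real values on the receiving side."""
    for term, token in token_map.items():
        text = text.replace(token, term)
    return text

msg = "Project Falcon pricing for Acme Corp attached."
wire = tokenize(msg, ["Project Falcon", "Acme Corp"])   # what crosses the internet
print(wire)
print(detokenize(wire))                                  # restored on the far side
```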
Now you could go full bore and encrypt everything, which would be better
in my opinion, but then you have to manage keys and things such as that.
(12:24):
If you do it right, it works pretty well.
It's now getting more complex, and with this breadth of coverage, these
multiple different platforms, not only are you going to have to worry about
performance issues, how am I going to keep up with this volume, but you
might have a whole lot of alerts.
And as a result, you're going to create alert fatigue.
It eventually becomes like the boy who cried wolf.
You stop paying attention to it.
(12:46):
The next phase, if you will, of DLP got into cloud and software as
a service, using tools like a CASB, a cloud access security broker.
And now this is a companion
to traditional DLP. With software as a service and its increased use
back then, you say, Hey, we're going to go ahead and push everything up
into the cloud, tools like what's now called Microsoft 365, though it was
(13:07):
O365 coming of age back then.
And Hey, I'm moving stuff up in the cloud.
It's moving out there.
It's going to be in my OneDrive, et cetera.
You have to go ahead and account for that.
Now with CASB, what I could do is take this cloud access security
broker tool, inspect data in my software as a service platforms.
Now, if I'm using something like O365, now Microsoft 365, or Salesforce,
(13:30):
I go, all right, that should be there, that should not be there.
And Microsoft Defender started coming in line to integrate DLP with your cloud app
security, so that we could go ahead and potentially flag where problems might be.
In addition, vendors came up with plugins, APIs, so you could have
a direct API connection to a cloud platform, and you could do
real-time data inspection and blocking.
(13:52):
Tools like Netskope, for example, would work with Dropbox and
G Suite. And even inline proxy DLP, where I could leverage
proxy-based inspection to monitor and block data flows
to and from cloud services.
So basically sit there in the middle.
Zscaler had proxy-based DLP for secure web gateways.
You could have identity-aware DLP.
(14:12):
You could integrate with identity platforms that allow for kind of
dynamic policy adjustments based on the user roles and permissions.
Forcepoint
had a tool that would allow you to adjust your policies based
on Active Directory groups.
About that time, I was working on a startup, and I remember that they had
a Blue Coat, and that firewall was set up to go ahead and essentially
(14:35):
provide a man in the middle, where it would serve up security
certificates that it rolled itself.
So what happened is you said, Hey, what's my challenge?
Everybody's encrypting everything.
How do I view the encrypted traffic?
If I control the certificate that your machine trusts, then what can
happen is, when you ask for HTTPS and you do the TLS protocol, that protocol
(14:56):
exchange is not with the ultimate server.
What happens is your device, your man in the middle, so to speak, that the Blue
Coat was doing, it would say, Hey, I'll accept that call, and I will go ahead and
do the four-way handshake with your TLS.
Now, at that point, I have a secure connection, but that secure connection
is to my trusted man in the middle, which would then decrypt it, allow me
(15:19):
for inspection, check for DLP, then re-encrypt it, because it's going to
go talk to the ultimate destination.
So what you have in the middle is an instantiation of unencrypted data, which
allows you to then go ahead and inspect everything, even though it's HTTPS.
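If you want to play with that intercept-and-inspect pattern yourself, here's a rough sketch of an addon for the open-source mitmproxy tool, assuming its trusted CA is installed on the client; it's an illustration of the concept, not how Blue Coat or any commercial gateway is actually built.

```python
# dlp_addon.py -- run with: mitmproxy -s dlp_addon.py
# Sketch of DLP inspection inside a TLS-intercepting proxy (mitmproxy).
import re

from mitmproxy import http

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

class DlpInspector:
    def request(self, flow: http.HTTPFlow) -> None:
        # The proxy has already terminated TLS, so the body is plaintext here.
        body = flow.request.get_text() or ""
        if SSN_PATTERN.search(body):
            # Block the outbound request instead of forwarding and re-encrypting it.
            flow.response = http.Response.make(
                403, b"Blocked by DLP policy", {"Content-Type": "text/plain"}
            )

addons = [DlpInspector()]
```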
And, with Ed Snowden, everybody said, we ought to encrypt everything.
The amount of HTTP traffic and all the stuff in the clear really went down as
(15:41):
everybody got religion on encryption.
That could create some performance bottlenecks, because you've got to
do this real-time cloud inspection.
People don't want to go ahead and wait for things.
You've got this on-prem and cloud mix.
How do I coordinate these things and know that, this cloud is
mine, but that cloud's not mine.
And this is okay, but that's not okay.
And then the encryption, tokenization, things such as that, it gets rather difficult.
(16:04):
Finally, if you will, our current phase: we're now getting into
AI and machine learning DLP.
Now we can adapt and automate our protection, because with this massive
amount of unstructured data out there and potentially increased insider threat,
we've got to have a more intelligent,
automated protection.
What can we do about that?
One is, use machine learning.
And machine learning could actually be used for data classification.
(16:26):
So my AI-driven classifiers can improve my accuracy, reduce my false positives,
and allow me to go ahead and have a higher fidelity for my process.
Microsoft Purview DLP would use machine learning to distinguish
between sensitive and non-sensitive data, because it can ingest a lot of
the information and infer a context.
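As a rough illustration of ML-based classification in general, rather than any vendor's engine, here's a tiny sketch assuming scikit-learn is available; the training examples are invented, and a real classifier would need far more labeled data.

```python
# Toy text classifier: learns "sensitive" vs "public" from labeled examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training data; a real deployment needs thousands of labeled documents.
texts = [
    "Master services agreement, confidential pricing schedule attached",
    "Customer SSN and bank account details for onboarding",
    "Company picnic is next Friday, bring your family",
    "Public press release: new office opening downtown",
]
labels = ["sensitive", "sensitive", "public", "public"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["Draft contract terms and client pricing, do not distribute"]))
# Likely -> ['sensitive'], based on wording rather than an explicit label
```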
Natural Language Processing, or NLP, allows context-based understanding
(16:50):
of human language in documents and communications, so it could figure it out.
So tools like Forcepoint would look at that and say, this looks like
confidential contract language.
It should probably be protected, even though it wasn't correctly marked.
And then have that policy tune itself, allowing the machine
learning to adjust automatically based on user behavior and content.
(17:11):
And therefore, with tools like Proofpoint and Forcepoint, you can allow that in real
time, so you adapt to your user's behavior and you don't continue the
cycle of false positives. Once it's been told or trained that,
hey, this part's okay, off it goes.
And then finally, zero trust integration.
As we make DLP a key component of zero trust strategies, we can tie data
(17:33):
protection to identity, location, device health, all the things to ensure that if
we're going to encrypt and authenticate everything, we should be okay.
Zscaler and Netskope will work pretty well on that.
But that's a lot of complexity trying to tune these machine learning-based models,
and a lot of high compute requirements
if you're going to do real-time data classification.
And of course, think about it, there's a privacy concern.
(17:56):
If you're going to be doing deep inspection of communications, something
that dives deep into all of your stored documents, how do you feel about that?
Is that a privacy issue?
Maybe.
So today we're seeing a shift from standalone DLP to essentially
integrated data protection.
We use Secure Access Service Edge, or SASE, platforms.
(18:16):
Potentially we'll see more homomorphic encryption, meaning I can work on the
encrypted data without decrypting it.
Now, it's an interesting idea, and it's been proposed for a long time, but there's a
couple of functional instances out there.
Being a crypto guy, I might learn a little more about that, because
I haven't studied it all that much, to be able to say, hey, I can produce a
(18:36):
result operating on two encrypted inputs and create an encrypted output.
It's as if I were working on
the clear text, but I still get the right answer.
Interesting concept, but I can see how mathematically you can do it.
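Just to give a flavor of that math, here's a toy sketch using textbook RSA, which happens to be multiplicatively homomorphic; it's deliberately tiny and completely insecure, purely to show an operation on two ciphertexts decrypting to the right answer, not a real homomorphic encryption scheme.

```python
# Toy, insecure textbook RSA to show the homomorphic idea:
# multiplying two ciphertexts decrypts to the product of the plaintexts.
p, q = 61, 53
n = p * q                            # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))    # modular inverse (Python 3.8+)

encrypt = lambda m: pow(m, e, n)
decrypt = lambda c: pow(c, d, n)

a, b = 7, 6
c_product = (encrypt(a) * encrypt(b)) % n   # work only on encrypted values
assert decrypt(c_product) == a * b          # yet the result is the true product, 42
print(decrypt(c_product))
```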
Maybe we'll save that for another episode.
Agentless DLP models.
That way you don't have any footprint on your endpoint.
And you can go ahead and improve your performance, because it's going to go ahead
(18:58):
and just look at stuff as it goes by.
And focus on things like data sovereignty, cross-border data controls.
If we have GDPR and other regulatory requirements that put some
pressure on us, we're going tofind out that's a bit of an issue.
I'm working with a client this week on how to go ahead and do data
classification and use Microsoft Purview.
(19:19):
Now I'm just going to do a quick overview, because I'm not going to dive deep into
it, but I just want to use this as an example of things you can do currently.
And Microsoft Purview, which is plugged into the 365 environment, will give you
visibility to all of your data sets, let you manage it across your environment,
help you secure your data throughout its lifecycle, from when it's created,
transmitted, modified, archived, and eventually destroyed, and help with
(19:43):
risk and regulatory requirements.
So of course, one of the first things is, if you're going to look at doing
DLP, where is my data and what is it?
And how do I access it?
And so if you have everything
in a known location, let's say OneDrive and/or SharePoint.
But you could also have Box and Dropbox and other services and things like that.
(20:06):
You really need to build a data map.
And using a tool like Purview, they say, we can help you do that.
Take this metadata across all these other environments, whether you're hybrid,
on-prem, multi-cloud, software as a service, and be able to say, hey, we
can see where all this is, and we can ingest all that and create a data map.
(20:26):
Okay, then what you want to be able to do is make your data discoverable
so you can maximize its value.
To do that, you create a data catalog.
And now the data owners or stewards of data can curate their data assets
and say, Hey, here's what it is.
So if I'm a consumer, I can search for things and say,
that's where I need to find it.
Again, not a big deal in a small, simple organization.
(20:46):
When you get large and complex, it could be difficult.
Then we want to have policies for accessing the data, moving
it, perhaps even sharing it.
And again, these are data policies that you specify.
So you could go ahead and say, I want to create and apply different
policies based on who's accessing it.
If it's DevOps and you're looking at code, that's probably okay.
(21:06):
The data owner, that's probably okay.
Somebody over in accounting or somebody that's out on the assembly floor probably
should not be looking at source code.
And so be able to go ahead and limit that.
And so to do that, we're going to want to secure our data, and we're going to
secure it throughout the life cycle, whether it's on an app, in a cloud, in
a device, wherever it happens to be.
(21:28):
Key to this, and this is the hard part that I'm facing right now, is
discovering and labeling everything.
Because I could start labeling all new information tomorrow,
but what about terabytes of data
that are already there?
Do I have to go back there and pick through all these files that I may
never, ever care to look at again?
(21:49):
And so it seemed like a tremendous waste of time.
So the idea is having tools, maybe a software development
kit, where you can place things on there to say, Hey, look at all my old
client files, and here are the types of things that would be sensitive.
Pricing,
or how we go ahead and have a proprietary method for doing the engagement.
But standard boilerplate stuff that would be in our legal terms and
(22:11):
conditions may not be so sensitive.
So then go ahead and say, let AI, let your automation, let it go
scrounge all through your data.
So you don't have the tedium of doing that.
Then we want to deploy our DLP policies to restrict our data leakage.
And now data loss prevention is the capability of Purview that lets
you know when things are going out.
(22:31):
Now I'm using that right now, and I've got it in a notification mode
rather than an actual interdict mode.
And I get a lot of false positives.
I gotta tell you that most of them are with our chief financial officer: warning,
credit card number going across; warning, bank account information going across.
And I look at the source item and I'm like, Yep, that's the CFO.
Now, yes, I'm going to filter that out at some point, but I want
(22:52):
to see everything that it does.
So I get a little bit of noise.
But as a CISO, or the person you designate, you want to understand what this
thing is capable and not capable of.
It's a little bit easier to take too much information and trim it down
with filters than it is to start with not enough and say, how do I
get that stuff that I'm missing?
And so now there are also tools available for insider risk management,
(23:13):
being able to use machine learning to look for potential insider risks.
So you could say, Hey, this just doesn't look like what a
normal person ought to be doing.
A big challenge that most of us face is compliance, and compliance
represents a tax, if you will, a restriction, a constraint on your
operations, that says you have to manage things in a certain way.
(23:36):
Why?
Because the regulation says we don't want to disclose personally identifiable
information, protected health information, payment card information, all these other
types of three-letter acronyms beginning with P and ending with I, that say,
yeah, this has to be protected, because not only do you run the risk of the
embarrassment of disclosing something, but you may have financial consequences
(23:59):
for violating that, as well as the reputational damage that could occur when
somebody says that you just had a major data breach and things such as that.
So by being able to build in classification and governance at scale and
recognize patterns, these are the types of machine learning capabilities
that you're looking for with your tool.
And then ultimately, as I said, you have to keep core business records.
(24:23):
When do I dispose of data?
Information is not always an asset.
It can become a liability.
And in a situation where you've kept data far too long, and then something comes
along where somebody's doing discovery in a court case, and they ask for everything,
not only do you have to produce that, because it's a court order,
but they're going to charge you, probably, to say, here, we spent hundreds
(24:43):
of hours reviewing all the data you sent us, and we charged by the hour.
Thank you very much.
Also, there might be things that took place in the past.
That could create a potential liability today, because, let's
face it, when somebody goes ahead and goes to court, they don't try
to put your best foot forward.
They're going to take everything, as much as they can, I would
argue, potentially out of context, to try to use it against you.
(25:04):
But if you had an absolutely strict retention policy that said
interns, when they leave, after six months all the data is gone;
employees, after two years, all the data is gone;
executives, three years, all the data is gone,
except if you have things like a FINRA requirement or some other
regulatory requirement for it.
Now you enforce that, and you do it over and over again.
Now you get served with a subpoena.
(25:25):
They said, hey, you had an intern that worked with you four years ago, and this
person is now facing some harassment suit, and we want everything that person
ever said four years ago, to try to either build a case or defend a case.
At which point, you just hold up a copy of your data retention policy.
You say, I've got logs, but since such and such a date, when we approved this policy,
we've been deleting things on time, on schedule.
(25:47):
There is no business purpose to keep this person's stuff after six months.
It's now been three or four years.
I cannot give you anything.
And that'll hold up in court.
They say, Judge, they're not giving us anything.
Why aren't you giving them anything?
Our policy, which we've enforced for years, says we delete everything,
and we've been enforcing it.
And if you have those logs, they're like, yeah, we're good.
Also, make sure that if you do have logs, that they're unalterable, that
(26:09):
you can go ahead and prove that.
Who knows, maybe stick them on a blockchain; again, an episode for another day.
And so now if you have e-discovery, that's a tool that allows you to go ahead
and preserve and collect and analyze and review and export content that's
based on a particular investigation.
So now, instead of having to go through and manually pick through every
single email, every single file, every correspondence, let the AI tools do it.
(26:34):
Let some intelligent machine learning do some deep indexing of email threads,
and near-duplicate detection.
Because one of the things I found is that when you do discovery, if I send
an email and then you send a reply, and then I send an email and a reply,
and we go back and forth 10 times, that's 10 different emails they want.
Now you're supposed to print them all out, and even have them 8.5 by 11,
depending on a certain size font.
They might take them electronically, but the whole idea is that
(26:56):
you want to go ahead and say, look, this is all one thread.
The 10th message contains 9, 8, 7, 6, 5, 4, 3, 2, 1.
It is unaltered.
I can prove it was unaltered.
I'm just going to give you the one.
If you want to start pulling strings, after a while they realize, yep,
they got their act together, and the court's going to leave you alone.
And then forensic investigations, being able to go ahead and
have audit-ready reporting, insights, and things such as that.
(27:19):
This is going to require some specific information, but now
I can look at user activity.
When all of a sudden I have a peak in data access that's high
bandwidth, what's going on?
What's happening, and things that might cause an audit to occur.
And now I can preserve audit logs.
And, for example, Microsoft Purview now lets you keep them for up to 10 years.
That's compared to your normal data, which is going to be about 30 days,
(27:40):
unless you're on an academic license, where I think it's about seven.
And then let's look for violations, potentially.
If somebody is violating regulatory compliance or business conduct or
sending inappropriate communications, don't wait for the lawsuit.
Stop it and quash it immediately.
And then go ahead and ensure that you're remaining compliant, so that if you're
(28:01):
dealing with things like the SEC or FINRA, and you have somebody who's making promises
that they can't deliver in terms of investment results, you can spot that
very quickly and shut it down, and not just hide the evidence, but maybe have a
follow-up to say, hey, this is compliance.
Yesterday, we received an email from one of our employees who said, I can
get you 10 percent on a risk-free investment. A, that is not authorized.
(28:22):
B, we do not authorize that communication.
Please disregard it, and that person is no longer working here.
But again, build yourself an audit record so you can go ahead and protect yourself.
And then we want to make sure, ultimately, that you're tracking
your compliance effectiveness.
How well am I doing?
Can I continuously look at my compliance efforts?
Have I reduced risk?
Because ultimately that's what I'm trying to do for our organization,
(28:45):
our executives: reduce the uncertainty of bad things happening,
so that I can go ahead and meet the requirements that are out there.
So these are the types of things I can do with DLP, and I gain a much
better capability for my enterprise.
So in summary, if we think about data loss prevention, it's a way of
ensuring that what goes out, around, through, or past some barrier, some
(29:08):
perimeter, some defined point, meets a predefined set of requirements.
Of course, you have to predefine those sets of requirements.
And some tools like Purview give you a head start.
They will give you four different classifications.
You can start putting your information in those buckets:
put information in there, let it automatically scan, let it build up that case for
you, and then proceed going forward.
(29:29):
It looks daunting.
It really is.
And so for a long time, I was like, do I really want to open that can of worms?
But as you move forward, you realize that as a CISO, as a security leader, you're
going to have to open some cans of worms.
Why?
Because they're going to start to rot and smell and cause
danger for your organization if you don't tackle the project.
So if you've been holding off on DLP, if you've been holding off on data
(29:50):
classification, if somehow you felt this isn't for me, or this is too big,
or I don't have the staff, or I don't have the bandwidth, today's your day,
because we've got AI, we've got all these capabilities with machine learning.
And all those tasks that looked nearly impossible to accomplish
as human beings can be done pretty simply with these tools.
So hopefully you found this episode useful.
(30:10):
If so, share your insights with others.
Let them know that you're getting them here at CISO Tradecraft.
Follow us on LinkedIn if you're not already following us.
We have more than podcasts.
Make sure you watch us on YouTube if you're not subscribed there.
And let other people know the source of your CISO tradecraft so you can help
them in their career paths as well.
Thank you very much for listening or watching.
(30:31):
This is your host, G. Mark Hardy.
Until next time, stay safe out there.