Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:03):
Not all generative AI is created equal. In fact, if data security or privacy-related concerns are holding your organization back, today I'll show you how the combination of Microsoft 365 Copilot and the data security controls in Microsoft Purview provides an enterprise-ready platform for GenAI in your organization. This way, GenAI is seamlessly integrated into your workflow
(00:24):
across familiar apps and experiences, all backed by unmatched data security and visibility to minimize data risk and prevent data loss. First, let's level set on a few Copilot security and privacy basics. Whether you're using the free Copilot Chat that's included with Microsoft 365 or have a Microsoft 365 Copilot license, both honor your existing access permissions
(00:46):
to your work information in SharePoint and OneDrive, your Teams meetings, and your email, meaning generated AI responses can only be based on information that you have access to. Importantly, after you submit a prompt, Copilot will retrieve relevant indexed data to generate a response. That data stays within your Microsoft 365 service trust boundary
(01:08):
and doesn't move out of it. Even when the data is presented to the large language models to generate a response, information is kept separate from the model and is not used to train it.
This is in contrast to consumer apps, especially the free ones, which are often designed to collect training data. As users upload files into them or paste content into their prompts, including sensitive data, that data is duplicated
(01:30):
and stored in a location outside of your Microsoft 365 service trust boundary, removing any file access controls or classifications you've applied in the process and placing your data at greater risk. And beyond being stored there for indexing or reasoning, it can be used to retrain the underlying model. Next, adding to the foundational protections of Microsoft 365 Copilot,
(01:51):
Microsoft Purview has activity logging built in and helps you to discover and protect sensitive data, where you get visibility into current and potential risks, such as the use of unprotected sensitive data in Copilot interactions; classify and secure data, where information protection helps you to automatically classify and apply sensitivity labels to data, ensuring it remains protected even when it's used
(02:15):
with Copilot; and detect and mitigate insider risks, where you can be alerted to employee activities with Copilot that pose a risk to your data; and much more. Over the next few minutes, I'll focus on Purview capabilities to get ahead of and prevent data loss and insider risks. We'll start in Data Security Posture Management for AI, or DSPM for AI for short.
(02:36):
DSPM for AI is the one place to get a rich and prioritized bird's-eye view of how Copilot is being used inside your organization and discover corresponding risks, along with recommendations to improve your data security posture that you can implement right from the solution. Importantly, this is where you'll find detailed dashboards for Microsoft 365 Copilot usage, including agents.
(02:58):
Then in Activity Explorer, we make it easy to see recent AI interaction activities that include sensitive information types, like credit cards, ID numbers, or bank accounts. And you can drill into each activity to see details, as well as the prompt and response text generated.
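To give a feel for how a sensitive information type is detected, here's a simplified standalone Python sketch for credit card numbers: a digit-run pattern plus the standard Luhn checksum. It's not Purview's actual classifier, which also layers on keywords, proximity, and confidence levels.

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum, used to separate real card numbers from random digits."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:       # double every second digit from the right
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

def find_credit_cards(text: str) -> list:
    """Flag 13-19 digit runs (spaces/dashes allowed) that pass the Luhn check."""
    hits = []
    for candidate in re.findall(r"\b(?:\d[ -]?){13,19}\b", text):
        digits = re.sub(r"\D", "", candidate)
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            hits.append(digits)
    return hits
```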
One tip here: if you are seeing a lot of sensitive information exposed,
(03:18):
it points to an information oversharing issue, where people have access to more information than necessary to do their job. If you find yourself in this situation, I recommend you also check out our recent show on the topic at aka.ms/OversharingMechanics, where I dive into the specific things you should do to assess your Microsoft 365 environment for potential oversharing risks
(03:40):
to ensure the right people can access the right information when using Copilot. Ultimately, DSPM for AI gives you the visibility you need to establish a data security baseline for Copilot usage in your organization, and helps you put preventative measures in place right away. In fact, without leaving DSPM for AI, on its recommendations page you'll find the policies we advise everyone to use
(04:02):
to improve data security, such as this one for detecting potentially risky interactions using insider risk management, and other recommendations, like this one to detect potentially unethical behavior using communication compliance policies, and more. From there, you can dive into Microsoft Purview's best-in-class solutions for more granular insights,
(04:23):
and to configure specific policies and protections. I'll start with information protection. You can manage data security controls with Microsoft 365 Copilot in scope using the information protection policies and the sensitivity labels that you have in use today. In fact, by default, any Copilot response using content with sensitivity labels
(04:44):
will automatically inherit the highest-priority label from the referenced content.
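In pseudocode terms, that inheritance rule amounts to the sketch below, where, as in Purview, a higher priority number means a more restrictive label; the label names and record shapes are just examples.

```python
def inherit_label(referenced_labels: list):
    """A generated response takes on the highest-priority sensitivity label
    among the content it referenced; returns None if nothing was labeled."""
    if not referenced_labels:
        return None
    return max(referenced_labels, key=lambda label: label["priority"])

# Example: a response grounded in one General and one Highly Confidential file
labels = [{"name": "General", "priority": 1},
          {"name": "Highly Confidential", "priority": 4}]
print(inherit_label(labels)["name"])  # -> Highly Confidential
```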
And using data loss prevention policies, you can prevent Copilot from processing any content that has a specific sensitivity label applied. This way, even if users have access to those files, Copilot will effectively ignore this content as it retrieves relevant information from Microsoft Graph
(05:06):
used to generate responses.
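The effect of that DLP policy can be sketched the same way. The label name and item shape below are placeholders, but the logic mirrors what was just described: excluded content is trimmed from retrieval even when the user could open the file themselves.

```python
EXCLUDED_LABELS = {"Highly Confidential"}  # labels your DLP policy excludes from Copilot

def dlp_trim(retrieved_items: list) -> list:
    """Drop any retrieved item carrying an excluded sensitivity label, so it
    never reaches the model, regardless of the user's own access rights."""
    return [item for item in retrieved_items
            if item.get("label") not in EXCLUDED_LABELS]
```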
Insider risk management helps you to catch data risk based on trending activities of people on your network, using established user risk indicators and thresholds, and then uses policies to prevent accidental or intentional data misuse as they interact with Copilot. You can easily create policies based on quick policy templates,
(05:28):
like this one looking for high-risk data leak patterns from insiders. By default, this quick policy will scope all users and groups, with a defined triggering event of data exfiltration, along with activity indicators, including external sharing, bulk downloads, label downgrades, and label removal, in addition to other activities
(05:48):
that indicate a high risk of data theft. And it doesn't stop there. As individuals perform more risky activities, those can add up to elevate that user's risk level.
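Conceptually, that accumulation behaves like the sketch below. The indicator weights, the 30-day window, and the score thresholds are invented for illustration; only the minor/moderate/elevated levels reflect Adaptive Protection's actual terminology.

```python
from datetime import datetime, timedelta

# Hypothetical weights for the indicator types mentioned above
INDICATOR_WEIGHTS = {
    "external_sharing": 10,
    "bulk_download": 25,
    "label_downgrade": 20,
    "label_removal": 20,
}

def risk_level(events: list, window_days: int = 30) -> str:
    """Recent risky activities add up; crossing a threshold moves the user
    from minor to moderate to elevated risk. `events` holds (time, kind) pairs."""
    cutoff = datetime.now() - timedelta(days=window_days)
    score = sum(INDICATOR_WEIGHTS.get(kind, 0)
                for when, kind in events if when >= cutoff)
    if score >= 60:
        return "elevated"
    if score >= 30:
        return "moderate"
    return "minor"
```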
Here, instead of manually adjusting data security policies, you can use Adaptive Protection controls to limit Copilot use depending on a user's dynamic risk level, for example, when a user exceeds
(06:10):
your defined risk condition thresholds to reach an elevated risk level, as you can see here. Using Conditional Access policies in Microsoft Entra, in this case based on authentication context, as well as the condition for insider risk that you set in Microsoft Purview, you can choose to block access when users attempt to open sites with a specific sensitivity label.
(06:32):
That way, even if a user is granted access to a SharePoint site resource by an owner, their access will be blocked by the Conditional Access policy you set. Again, this is important because Copilot honors the user's existing permissions to work with information. This way, Copilot will not return information that they do not have access to.
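Put together, the end-to-end access decision reduces to something like this sketch, where the protected label set stands in for whatever sensitivity label your Conditional Access policy actually targets.

```python
PROTECTED_LABELS = {"Highly Confidential"}  # labels gated by the Conditional Access policy

def allow_site_access(user_risk_level: str, site_label: str) -> bool:
    """A user at elevated insider risk is blocked from sites carrying a
    protected label, even if a site owner granted them membership."""
    return not (user_risk_level == "elevated" and site_label in PROTECTED_LABELS)

assert allow_site_access("minor", "Highly Confidential") is True
assert allow_site_access("elevated", "Highly Confidential") is False
```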
Next, Communication Compliance
(06:53):
is a related insider risk solution that can act on potentially inappropriate Copilot interactions. In fact, there are specific policy options for Microsoft 365 Copilot interactions in communication compliance, where you can flag jailbreak or prompt injection attempts using Prompt Shields classifiers. Communication compliance can be set to alert reviewers of that activity
(07:14):
so they can easily discover policy matches and take corresponding actions. For example, if a person tries to use Copilot in an inappropriate way, like trying to get it to work around its instructions to generate content that Copilot shouldn't, it will report on that activity, and you'll also be able to see the response informing the user that their activity was blocked.
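Prompt Shields is also available as a standalone classifier through the Azure AI Content Safety REST API, so you can test prompts for jailbreak or injection attempts outside of communication compliance too. The sketch below reflects my reading of that API's request shape; verify the endpoint, api-version, and response fields against the current documentation before relying on it.

```python
import requests

def prompt_attack_detected(endpoint: str, key: str, user_prompt: str) -> bool:
    """Ask Azure AI Content Safety's Prompt Shields whether a prompt looks
    like a jailbreak/injection attempt. `endpoint` and `key` come from your
    Content Safety resource; returns True when an attack is detected."""
    response = requests.post(
        f"{endpoint}/contentsafety/text:shieldPrompt",
        params={"api-version": "2024-09-01"},
        headers={"Ocp-Apim-Subscription-Key": key,
                 "Content-Type": "application/json"},
        json={"userPrompt": user_prompt, "documents": []},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["userPromptAnalysis"]["attackDetected"]
```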
Once you have the controls you want in place,
(07:35):
it's a good idea to keep going back to DSPM for AI, so you can see where Copilot usage is matching your data security policies. Sensitive interactions per AI app shows you interactions based on sensitive information types. Top unethical AI interactions surfaces insights based on the communication compliance controls you've defined. Top sensitivity labels referenced in Microsoft 365 Copilot
(07:59):
reports on the labels you've created and applied to referenced content. And you can see Copilot interactions mapped to insider risk severity levels. Then digging into these reports shows you a filtered view of activities in Activity Explorer, with time-based trends and details for each.
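Under the hood, a trend like that is just an aggregation over the logged interactions. As a trivial sketch, assuming each activity record carries a timestamp and any matched sensitive information types:

```python
from collections import Counter

def daily_sensitive_trend(activities: list) -> Counter:
    """Count, per day, the logged Copilot interactions that matched at least
    one sensitive information type; the record shape here is hypothetical."""
    return Counter(activity["timestamp"].date()
                   for activity in activities
                   if activity.get("sensitive_info_types"))
```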
Additionally, because all Copilot interactions are logged, like other Microsoft 365 activities
(08:19):
in email, Microsoft Teams, SharePoint, and OneDrive, you can now use the new data security investigation solution. This uses AI to quickly reason over thousands of items, including Copilot Chat interactions, to help you investigate the potential cause of risk for known data leaks and similar incidents.
So that's how Microsoft 365 Copilot,
along with Microsoft Purview,
(08:40):
provides comprehensive controls to help protect your data, minimize risk, and quickly identify Copilot interactions that could lead to compromise so you can take corrective actions. No other AI solution has this level of protection and control. To learn more, check out aka.ms/M365CopilotwithPurview.
(09:00):
Keep watching Microsoft Mechanics for the latest updates, and thanks for watching.