
May 6, 2025 9 mins

Stay in control as GenAI adoption accelerates across your organization using Data Security Posture Management for AI in Microsoft Purview. With built-in visibility into how AI apps and agents interact with sensitive data—whether inside Microsoft 365 or across unmanaged consumer tools—you can detect risks early, take decisive action, and enforce the right protections without slowing innovation.

Monitor usage trends, investigate prompts and responses, and respond to potential data oversharing or policy violations in real time. From compliance-ready audit logs to adaptive data protection, you’ll have the insights and tools to keep data secure as AI becomes a part of everyday work.

Shilpa Ranganathan, Microsoft Purview Principal Group PM, shares how to balance GenAI innovation with enterprise-grade data governance and security.

► QUICK LINKS:
00:00 - GenAI app security, governance, & compliance
01:30 - Take Action with DSPM for AI
02:08 - Activity logging
02:32 - Control beyond Microsoft services
03:09 - Use DSPM for AI to monitor data risk
05:06 - ChatGPT Enterprise
05:36 - Set AI Agent guardrails using DSPM for AI
06:44 - Data oversharing


Episode Transcript

(00:02):
Do you have a good handle on the data security risks introduced by the growing number of GenAI apps inside your organization? Today, 78% of users are bringing their own AI tools, often consumer grade, to use as they work and bypassing the data security protections you've set. And now, combined with the increased use of agents, it can be hard to know what data is being used

(00:24):
in AI interactions to keep valuable data from leaking outside of your organization. In the next few minutes, I'll show you how enterprise-grade data security, governance, and compliance can go hand in hand with GenAI adoption inside your organization with Data Security Posture Management for AI in Microsoft Purview. This single solution not only gives you automatic visibility

(00:47):
into Microsoft Copilot and custom apps and agents in use inside your organization, but extends visibility into AI interactions happening across different non-Microsoft AI services that may be in use. Risk analytics then help you see at a glance what's happening with your data with a breakdown of the top unethical AI interactions,

(01:08):
sensitive data interactions per AI app, along with how employees are interacting with apps based on their risk profile, either high, medium, or low. And specifically for agents, we also provide dedicated reports to expose the data risks posed by agents in Microsoft 365 Copilot and maker-created agents from Copilot Studio.

(01:30):
And visibility is just one half of what we give you. You can also take action. Here, DSPM for AI provides you proactive recommendations to help you take immediate action to enhance your data security and compliance posture right from the service using built-in and pre-configured Microsoft Purview policies. And with all AI interactions audited,

(01:50):
not only do you get the visibility I just showed, but the data is automatically captured for data lifecycle management, eDiscovery, and Communication Compliance investigations. In fact, clicking on this one recommendation for compliance controls can help you set up policies in all these areas. Now, if you're wondering how activity signals from AI apps and agents flow into DSPM for AI

(02:13):
in the first place, the good news is, for the AI apps and agents you build with either Microsoft Copilot services or with Azure AI, even if you haven't configured a single policy in Microsoft Purview, activity logging is enabled by default, and built-in reports are generated for you out of the gate. As I showed, visibility and control

(02:34):
extend beyond Microsoft services as soon as you take proactive action. Directly from DSPM for AI, the fortify data security recommendation, for example, when activated, under the covers leverages Microsoft Purview's built-in classifiers to detect sensitive data and to log interactions from local app traffic over the network,

(02:55):
as well as at the device level, to protect file system interactions on Microsoft Purview-onboarded PCs and Macs, and even web-based apps running in Microsoft Edge, to help prevent risky users from leaking sensitive data.
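To make the detection step concrete: Purview's built-in classifiers include pattern-based sensitive information types, which conceptually combine a pattern match with a validation check to cut false positives. The sketch below illustrates that idea with a card-number pattern plus a Luhn checksum; it is a minimal, hypothetical illustration of the technique, not Purview's actual classifier code, and the "Project Confidential Keywords" type is invented for the example.

```python
import re

def luhn_valid(digits: str) -> bool:
    """Checksum used to reduce false positives on card-like digit runs."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def classify_prompt(prompt: str) -> list[str]:
    """Return the sensitive information types detected in a prompt."""
    findings = []
    # Card-like pattern: 13-16 digits, optionally separated by spaces or dashes.
    for match in re.finditer(r"\b(?:\d[ -]?){13,16}\b", prompt):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            findings.append("Credit Card Number")
    # Hypothetical keyword-based type, purely for illustration.
    if re.search(r"\bacquisition plan\b", prompt, re.IGNORECASE):
        findings.append("Project Confidential Keywords")
    return findings
```

A prompt like "charge card 4111 1111 1111 1111" would be flagged as containing a credit card number, while ordinary text produces no findings; this match-then-validate shape is what lets classifiers run over high-volume prompt traffic without drowning in noise.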
Next, with insights now flowing in, let me walk you through how you can use DSPM for AI every day to monitor your data risks and take action.

(03:17):
I'll start again from reports in the overview to look at GenAI apps that are popular in our organization. What's really concerning are the ones in use by my riskiest users, who are interacting with popular consumer apps like DeepSeek and Google Gemini. ChatGPT consumer is at the top of the list, and it's not a managed app for our organization.

(03:38):
It's brought in by users who are either using it for free or with a personal license, but what's really concerning is that it has the highest number of risky users interacting with it, which could increase our risk of data loss. Now, my first inclination might be to block usage of the app outright. That said, if I scroll back up, instead I can see a proactive recommendation

(04:00):
to prevent sensitive data exfiltration in ChatGPT with adaptive protection. Clicking in, I can see the types of sensitive data shared by users in their prompts. Creating this policy will log the actions of minor-risk users and block high-risk users from typing in or uploading sensitive information into ChatGPT.
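The adaptive part of this policy is that enforcement scales with the user's insider-risk level rather than applying one blanket rule. As a rough mental model (a hedged sketch of the decision logic, not Purview's implementation; the tier names mirror adaptive protection's minor/moderate/elevated levels):

```python
from enum import Enum

class RiskLevel(Enum):
    MINOR = 1
    MODERATE = 2
    ELEVATED = 3

def enforce(risk: RiskLevel, has_sensitive_data: bool) -> str:
    """Decide what happens when a user submits a prompt to an unmanaged AI app."""
    if not has_sensitive_data:
        return "allow"   # nothing sensitive detected: no intervention
    if risk is RiskLevel.ELEVATED:
        return "block"   # elevated-risk users are blocked outright
    return "audit"       # lower-risk users proceed, but the action is logged
```

The benefit of this shape is exactly what the demo shows next: the same policy blocks an elevated-risk user mid-submission while only notifying and auditing a lower-risk user, so innovation isn't shut off for everyone.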

(04:21):
I can also choose to customize this policy further, but I'll keep what's there and confirm. And with the policies activated, now let me show you the result. Here we have a user with an elevated risk level. They're entering sensitive information into the prompt, and when they submit it, they are blocked. On the other hand, when a user with a lower risk level

(04:41):
enters sensitive information and submits their prompt, they're informed that their actions are being audited. Next, as an admin, let me show you how this activity was audited. From DSPM for AI in the Activity Explorer, I can see all interactions and any matching sensitive information types. Here's the activity we just saw, and I can click into it to see more details,

(05:02):
including exactly what was shared in the user's prompt. Now for ChatGPT Enterprise, there's even more visibility due to the deep API integration with Microsoft Purview. By selecting this recommendation, you can register your ChatGPT Enterprise workspace to discover and govern AI interactions. In fact, this recommendation

(05:23):
walks you through the setup process. Then with the interactions logged in Activity Explorer, not only are you able to see what prompts were submitted, but you can also get complete visibility into the generated responses. Next, with the rapid development of AI agents, let me show you how you can use DSPM for AI to discover and set guardrails

(05:44):
around information used with your user-created agents. Clicking on agents takes you to a filtered view. Immediately, I can see indicators of a potential oversharing issue. This is where data access permissions may be too broad and where not enough of my data is labeled with corresponding protections. I can also see the total agent interactions over time,

(06:05):
the top five agents open to internet users, with interactions by unauthenticated or anonymous users. This is where people outside of my organization are interacting with agents grounded on my organization's data, which can put that data at risk. I can also quickly see a breakdown of sensitive interactions per agent along with the top sensitivity labels referenced

(06:27):
to get an idea of the type of data in use and how well protected it is. To find out more, from the Activity Explorer, I can see that in this AI interaction the agent was invoked in Copilot Chat, and I can view the agent's details and see the prompt and response just like before. Now what I really want to do is to take a closer look at the potential data oversharing issue that was flagged.

(06:50):
For that, I'll return to my dashboard and click into the default assessment. These run every seven days, scanning files containing sensitive data and identifying where those files are located, such as SharePoint sites with overly permissive user access. And I can dig into the details. I'll click into the top one for "Obsidian Merger" and I can see label coverage for the data within it.

(07:12):
And in the protect tab, there are eight sensitivity labels and five that are referenced by Copilot and agents. Since I want agents to honor data classifications and their related protections, I can configure recommended policies. The most stringent option is to restrict all items, removing the entire site from view of Copilot and agents.

(07:32):
Or for more granular controls, I also have a few more options. I can create default sensitivity labels for newly created items, or if I move back to the top-level options, I have the option to "Restrict Access by Label." The Obsidian Merger information is highly privileged, and even if you're on the core team working on it, we don't want agents to reason over the information,

(07:54):
so I'll pick this label option. From there, I need to extend the list of sensitivity labels, and I'll select Obsidian Merger, then confirm to create the policy. And this will now block the agent from reasoning over the content that includes the Obsidian Merger label. In fact, let's look at the policy in action. Here you can see the user is asking the Copilot agent

(08:16):
to summarize the Project Obsidian M&A doc, and even though they are the owner and author of the file, the agent cannot reason over it. It responds, "Unfortunately, I can't provide detailed information because the content is protected."
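A useful way to think about this label-based restriction is as a filter applied to retrieved content before it ever reaches the agent: items carrying a restricted label are withheld regardless of who is asking, which is why even the file's owner gets a "content is protected" response. The sketch below is a hypothetical illustration of that filtering step; the field names and data shapes are invented for the example and are not a Purview API.

```python
# Labels excluded from agent reasoning (illustrative; mirrors the demo's policy).
RESTRICTED_LABELS = {"Obsidian Merger"}

def filter_grounding(items: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split retrieved items into those an agent may use and those withheld by label."""
    allowed, withheld = [], []
    for item in items:
        if item.get("sensitivity_label") in RESTRICTED_LABELS:
            withheld.append(item)   # never reaches the agent's context
        else:
            allowed.append(item)
    return allowed, withheld
```

Filtering at this layer, rather than per user, is what makes the guardrail hold even for users whose own permissions would otherwise grant access to the file.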
As I mentioned, for both your agents and GenAI apps across Microsoft and non-Microsoft services,

(08:37):
all activity is recorded in Audit logs to help conduct investigations whenever needed. In fact, DSPM for AI logged activity flows directly into Microsoft Purview's best-in-class solutions for insider risk management, letting your security teams detect risky AI prompts as part of their investigations into risky users,

(08:58):
communication compliance, to aid investigations into non-compliant use in AI interactions, such as a user trying to get sensitive information like an acquisition plan, and eDiscovery, where interactions across your Copilots, agents, and AI apps can be collected and reviewed to help conduct investigations and respond to litigation.

(09:19):
So that was an overview of how GenAI adoption can go hand in hand with your enterprise-grade data security, governance, and compliance requirements for your organization, keeping your data protected. To learn more, check out aka.ms/SecureGovernAI. Keep watching Microsoft Mechanics for the latest updates, and thanks for watching.