Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:02):
While generative AI can help you do more,
it can also introduce new security risks.
Today, we're going todemonstrate how you can stay
in control with Microsoft Defender
to discover the GenAIcloud apps that people
in your organization are using right now
and approve or blockthem based on their risk.
And for your in-house developed AI apps,
we'll look at preventing jailbreaks
and prompt injection attacks
(00:23):
along with how everything comes together
with Microsoft Defenderincident management,
to give you completevisibility into your events.
Joining me once again to demonstrate
how to get ahead of everything
is Microsoft Security CVP, Rob Lefferts.
Welcome back.- So glad to be back.
- It's always great to have you on
to keep us ahead of the threat landscape.
In fact, since your last time on the show,
we've seen a significant increase
in the use of generative AI apps,
(00:44):
and some of them are sanctioned by IT
but many of them are not.
So what security concerns does this raise?
- Each of those apps reallycarries their own risk,
and even in-house developed apps
aren't necessarily immune to risk.
We see some of the biggestrisks with Consumer apps,
especially the free ones,which are often designed
(01:04):
to collect training data asusers upload files into them
or paste content into their prompts
that can then be used toretrain the underlying model.
So, before you know it,
your data might be partof the public domain,
that is, unless you get ahead of it.
- And as you showed, this use of your data
is often written front and center
in the terms and conditions of these apps.
(01:25):
- True, but not everyonereads all the fine print.
To be clear, people go into these apps
with good intentions,to work more efficiently
and get more done, but theydon't always know the risks;
and that's where we give youthe capabilities you need
to identify and protectGenerative AI SaaS apps
using Microsoft Defender for Cloud Apps.
(01:46):
And you can combine this withMicrosoft Defender for Cloud
for your internally developed apps
alongside the unified incidentmanagement capabilities
in Microsoft Defender XDR
where the activities fromboth of these services
and other connected systemscome together in one place.
- So given just how manycloud apps there are out there
and a lot of companiesbuilding their own apps,
(02:08):
where would you even start?
- Well, for most orgs,it starts with knowing
which external apps peoplein your company are using.
If you don't have proactivecontrols in place yet,
there's a pretty good chance that
people are bringing their own apps.
Now to find out what they're using,
right from the unified Defender portal,
you can use MicrosoftDefender for Cloud Apps
(02:28):
for a complete view ofcloud apps and websites
in use inside your organization.
The signal comes in from
Defender-onboarded computers and phones.
And if you're not alreadyusing Defender for Cloud Apps,
let me start by showingyou the Cloud app catalog.
Our researchers at Microsoftare continually identifying
and classifying new cloudapps as they surface.
(02:50):
There are over 34,000 apps
across all of thesefilterable categories that are
all based on best practiceuse cases across industries.
Now if I scroll back up to Generative AI,
you'll see that thereare more than 1,000 apps.
And I'll click on this controlto filter the list down,
and it's a continually expanding list.
We even add to it when existing cloud apps
(03:12):
integrate new gen AI capabilities.
Now once your signal starts to come in
from your managed devices,
moving back over to the dashboard,
you'll see that I have visibility
into the full breadthof Cloud Apps in use,
including Generative AI appsand lots of other categories.
The report under Discoveredapps provides visibility
into the cloud apps with the broadest use
(03:34):
within your managed network.
And from there, you can again
see categories of discovered apps.
I'll filter by Generative AI again,
and this time it returns the specific apps
in use in my org.
Like before, each app has adefined risk score of 0 to 10,
with 10 being the best, basedon a number of parameters.
And if I click into any one of them,
(03:55):
like Microsoft Copilot,I can see the details
as well as how theyfair for general areas,
a breadth of security capabilities,
as well as compliance withstandards and regulations,
and whether they appear to meet
legal and privacy requirements.
- And this can save a lot of valuable time
especially when you'retrying to get ahead of risks.
- And Defender for Cloud Apps
doesn't just give you visibility.
(04:17):
For your managed devicesenrolled into Microsoft Defender,
it also has controls that caneither allow or block people
from using defined cloud apps,
based on the policies youhave set as an administrator.
From each cloud app, I can see an overview
with activities surroundingthe app with a few tabs.
In the cloud app usage tab,I can drill in even more
(04:39):
to see usage, users, IPaddresses, and incident details.
I'll dig into Users, and here you can see
who has used this app in my org.
If I head back to my filtered view
of generative AI apps in use,
on the right you can seeoptions to either sanction apps
so that people can keep usingthem, or unsanction them
to block them outright from being used.
(05:01):
But rather than unsanction these apps
one-by-one like Whack-a-Mole,there's a better way,
and that's with automation
based on the app's risk score level.
This way, you're not manually configuring
1,000 apps in this category;nobody wants to do that.
So I'll head over to policy management,
and to make things easieras new apps emerge,
(05:22):
you can set up policies basedon the risk score thresholds
that I showed earlier,or other attributes.
I'll create a new policy,and from the dropdown,
I'll choose app discovery policy.
Now I'll name it Risky AI apps,
and I can set the policyseverity here too.
Now, I'm going to select a filter,
and I'll choose categoryfirst, I'll keep equals,
(05:45):
and then scroll all the way down
to Generative AI and pick that.
Then, I need to add another filter.
In this case, I'm going tofind and choose risk score.
I'll pause for a second.
Now what I want to happen is that
when a new app is documented,or an existing cloud app
incorporates new GenAI capabilities
and meets my category and risk conditions,
(06:08):
I want Defender for Cloud Apps
to automatically unsanctionthose apps to stop people
from using them on managed devices.
So back in my policy, Ican adjust this slider here
for risk score.
I'll set it so that any appwith a risk score of 0 to 6
will trigger a match.
And if I scroll down a little more,
this is the important partof doing the enforcement.
(06:30):
I'll choose tag app as unsanctioned
and hit create to make it active.
With that, my policy is set
and next time my manageddevices are synced with policy,
Defender for Endpoint will block
any generative AI app witha matching risk score.
Now, let's go see what it looks like.
If I move over to a managed device,
(06:50):
you'll remember one of ourfour generative AI apps
was something called Fakeyou.
I have to be a little careful with
how I enunciate that app name,
and this is what a user would see.
It's clearly marked as being blocked
by their IT organization
with a link to visit the supportpage for more information.
And this works with iOS, Android, Mac,
(07:11):
and, of course, Windows devices
once they are onboarded to Defender.
- Okay, so now you can see and control
which cloud apps are inuse in your organization,
but what about thosein-house developed apps?
How would you control the AI risks there?
- So internally developed apps
and enterprise-grade SaaSapps, like Microsoft Copilot,
would normally have the controls
and terms around data usage in place
(07:32):
to prevent data loss and disallow vendors
from training their models on your data.
That said, there are other types of risks
and that's where Defenderfor Cloud comes in.
If you're new to Defender for Cloud,
it connects the securityteam and developers
in your company.
For security teams, for your apps,
there's cloud security posture management
to surface actions to predictand give you recommendations
(07:55):
for preventing breachesbefore they happen.
For cloud infrastructure and workloads,
it gives you insights to highlight risks
and guide you with specific protections
that you can implement
for all of your virtual machines,
your data infrastructure,including databases and storage.
And for your developers, using DevOps,
you can even see best practice insights
(08:17):
and associated risks withAPI endpoints being used,
and in Containers see misconfigurations,
exposed secrets and vulnerabilities.
And for cloud infrastructureentitlement management,
you can find out where you havepotentially overprovisioned
or inactive entitlementsthat could lead to a breach.
And the nice thing is that from
(08:37):
the central SecOps teamperspective, these signals all flow
into Microsoft Defender forend-to-end security tracking.
In fact, I have an example here.
This is an in-house developed app
running on Azure that helpsan employee input things
like address, tax information,
bank details for depositing your salary,
and finding informationon benefits options
(09:00):
that employees can enroll into.
It's a pretty important app to ensure
that the right protections are in place.
And for anyone who's entered a new job
right after graduation,it can be confusing
to know what benefitsoptions to choose from,
things like 401k or IRAfor example in the U.S.,
or do you enroll into an employeestock purchasing program?
(09:20):
It's actually a really goodscenario for generative AI
when you think about it.
And if you can act onthe options it gives you
to enroll into these services,
again, it's superhelpful for the employees
and important to have theright controls in place.
Obviously, you don'twant your salary, stock,
or benefits going intosomeone else's account.
So if you're familiar withhow generative AI apps work,
(09:43):
most use what's called a system prompt
to enforce basic rules.
But people, especially modern adversaries,
are getting savvy to this and figuring out
how to work around these basic guardrails:
for example, by telling these AI tools
to ignore their instructions.
And I can show you an example of that.
This is our app's system prompt,
(10:03):
and you'll see thatwe've instructed the AI
to not display IDnumbers, account numbers,
financial information, or tax elections
with examples given for each.
Now, I'll move over to arunning session with this app.
I've already submitted a few prompts.
And in the third one, witha gentle bit of persuasion,
basically telling it thatI'm a security researcher,
(10:25):
for the AI model toignore the instructions,
it's displaying informationthat my company and my dev team
did not want it to display.
This app even lets me update
the bank account IBAN numberwith a prompt: Sorry, Adele.
Fortunately, there's a fix.
Using controls as partof Azure AI Foundry,
we can prevent this informationfrom getting displayed
(10:47):
to our user and potentially any attacker
if their credentials ortoken has been compromised.
So this is the same app on the right
with no changes to thesystem message behind it,
and I'll enter theprompts in live this time.
You'll see that my exact same attempts
to get the model toignore its instructions
no matter what I do, evenas a security researcher,
(11:11):
have been stopped in thiscase using Prompt Shields
and have been flaggedfor immediate response.
And these types of controlsare even more critical
as we start to build moreautonomous agentic apps
that might be parsingmessages from external users
and automatically taking action.
- Right, and as we saw inthe generated response,
protection was enforced, like you said,
(11:32):
using content safetycontrols in Azure AI Foundry.
- Right, and thoseactivities are also passed
to Defender XDR incidents,so that you can see
if someone is trying towork around the rules
that your developers set.
Let me quickly show you wherethese controls were set up
to defend our internalapp against these types
of prompt injection or jailbreak attempts.
(11:54):
I'm in the new Azure AI Foundry portal
under safety + security for my app.
The protected version of the app has
Prompt shields for jailbreakand indirect attacks
configured here as input filters.
That's all I had to do.
And what I showed before wasa direct jailbreak attack.
There can also be indirect attacks.
(12:14):
These methods are a littlesneakier where the attacker,
for example, might poisonreference data upstream
with maybe an email sent previously
or even an image with hidden instructions,
which gets added to the prompt.
And we protect you in both cases.
- Okay, so now you havepolicy protections in place.
Do I need to identify and track issues
in their respective dashboards then?
(12:35):
- You can, and depending on your role
or how deep in any area youwant to go, all are helpful.
But if you want to stitchtogether multiple alerts
as part of something likea multi-stage attack,
that's where Defender XDR comes in.
It will find the connectionsbetween different events,
whether the user succeeded or not,
and give you the detailsyou need to respond to them.
(12:58):
I'm now in the Defender XDR portal
and can see all of my incidents.
I want to look at aparticular incident, 206872.
We have a compromised user account,
but this time it's not Jonathan Wolcott;
it's Marie Ellorriaga.
- I have a feelingJonathan's been watching
these shows on Mechanicsto learn what not to do.
- Good for him; it's about time.
(13:18):
So let's see what Marie,
or the person usingher account, was up to.
It looks like they found
our Employee Assistant internal app,
then tried to Jailbreak it.
But because our protections were in place,
this attempt was blocked,
and we can see the evidence of that
from this alert here on the right.
Then we can see that they moved on
to Microsoft 365 Copilotand tried to get into
(13:40):
some other finance-related information.
And because of our DLP policies
preventing Copilot fromprocessing labeled content,
that activity also wouldn'thave been successful.
So our information was protected.
- And these controlsget even more important,
I think, as agents alsobecome more mainstream.
- That's right, andthose agents often need
(14:00):
to send information outsideof your trust boundary
to reason over it, so it's risky.
And more than just visibility, as you saw,
you have active protections to keep
your information secure in real-time
for the apps you build in-house
and even shadow AI SaaSapps that people are using
on your managed devices.
- So for anyone who'swatching today right now,
(14:21):
what do you recommendthey do to get started?
- So to get started on thethings that we showed today,
we've created end-to-end guidance for this
that walks you through the entire process
at aka.ms/ProtectAIapps;
so that you can discover and control
the generative AI cloudapps people are using now,
build protections intothe apps you're building,
(14:42):
and make sure that you havethe visibility you need
to detect and respondto AI-related threats.
- Thanks, Rob, and, ofcourse, to stay up-to-date
with all the latest tech at Microsoft,
be sure to keep checkingback on Mechanics.
Subscribe if you haven't already,
and we'll see you again soon.