
January 20, 2025 22 mins

In this episode Michael, Gladys, Mark and Sarah talk to guest Diana Vicezar from the Microsoft Entra team about securing Generative AI applications. Note: this is a short, simple intro episode to introduce three follow-on episodes.

We also cover security news about TLS 1.3 and Azure Event Grid, big updates to Microsoft Defender for Cloud, Azure Database for MySQL, SQL Managed Instance and Confidential Ledger.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to the Azure Security Podcast, where we discuss topics relating to security, privacy,

(00:09):
reliability and compliance on the Microsoft Cloud Platform.
Hey everybody, welcome to episode 108.
This week we actually have the full gang.
It's myself, Michael, Mark, Sarah and Gladys.
And our guest this week is Diana, who's here to talk to us about securing GenAI applications.
But before we get to our guests, let's take a little lap around the news.

(00:31):
Gladys, why don't you kick things off.
Hi everyone, happy new year.
I only have two sets of news that I want to discuss.
The first one is about TLS 1.3.
Starting March 1st, 2025, Azure services will require TLS 1.2 or higher and end the support

(00:53):
for TLS 1.0 and 1.1.
This is to enhance security and provide best-in-class encryption.
So all the services will enable TLS 1.3 on the same day and disable the earlier versions.
The next news that I wanted to talk about is several features that Microsoft Defender

(01:17):
for Cloud released in December.
The first one is that they're changing the scan interval for AWS and GCP.
The scan interval setting, which applies to capturing all the security findings from AWS and GCP,
determines how often Defender for Cloud discovers that information.

(01:41):
So this change ensures a more balanced scanning process, optimizes performance and minimizes
the risk of reaching API limits.
The next change is that Microsoft Defender for Endpoint

(02:01):
will be required in order to receive the file integrity monitoring experience.
And basically, some of the data it provides gives better monitoring information.
And last, Defender for Cloud security posture management's sensitive data scanning capability

(02:22):
will be included for Azure file shares in addition to blob containers.
So in my part of the world, I've gotten a little bit ranty on LinkedIn and the other
social networks.
Professional, positive, encouraging, but still like kind of really ranting about a couple
different things.

(02:42):
I've become more and more convinced lately that ultimately there's this sort of simple
principle in life that you can't blame someone for decisions or actions they don't make.
And the reality is that's often what happens in security.
And so one of the things I've gotten a lot of clarity on and was discussing publicly
in the socials was just around, ultimately, the way that organizations treat security

(03:07):
is often kind of broken.
And so it's like, hey, you don't blame the lawyer if they said, hey, that's a bad idea.
Or you don't blame the finance person if the CEO made the decision to spend a whole
bunch of money on something that didn't make sense.
But yeah, we do that in security.
We say, oh, you know, they said there's no maintenance windows.
They didn't patch it.
They didn't configure it.

(03:27):
They didn't do any app sec or threat modeling or anything.
And we're going to blame the security team who had nothing to do with any of the production
of all those things and blame them for the incident.
And so it's this really interesting kind of oddity that we deal with in security where
it just hasn't been seen as sort of like a normal support discipline like legal or HR
or finance or any of that kind of stuff in an organization.

(03:51):
And it really, you know, it's one of the biggest changes, I think, that we need to make as
an industry.
So I'll pop a couple links to some of these discussions.
The other one is just around collaboration.
It's really important that, you know, you look at security as something you do, not
something you have.
And it's just you're constantly taking inputs from either incidents or other events inside
and outside the organization and using those as a stimulus to drive security improvements

(04:17):
and run your processes.
So got a couple links to those there.
And then not quite released yet.
So think of this as more of a tease than an actual announce.
But putting the very final touches on the updated CISO workshop and the updated cybersecurity
reference architecture.
So those are getting close to a release and hopefully we'll have some good news in future

(04:44):
episodes very soon.
That's all I got.
Okay, before we get into my news, I just want to add a bit more detail about the TLS 1.3
item that Gladys mentioned.
She correctly said that, you know, TLS 1.3 provides better encryption.
That is true, better encryption, and it's also a stronger integrity check than prior
versions of TLS.

(05:04):
But honestly, the most important thing about TLS 1.3 is actually the stronger server authentication
that you get.
And you get that via better cipher suites.
In TLS 1.3, you have to realize that one of the big things the protocol did is basically
get rid of a whole bunch of legacy and pretty lousy authentication ciphers.
So they've all gone.
So TLS 1.3 is a lot more streamlined than TLS 1.2 and prior versions.

(05:26):
So wherever possible, you should always be going to TLS 1.3 rather than TLS 1.2, even
though we do support TLS 1.2 simply because of backward compatibility.
With that being said, we are seeing compat problems with TLS 1.0 and 1.1 being deprecated.
But honestly, they are so old and so broken that you really should be moving to 1.2 and
preferably 1.3.
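The "set the floor at 1.2, prefer 1.3" policy described here can be sketched with Python's standard `ssl` module. This is a minimal illustration of the idea, not a specific Azure configuration:

```python
import ssl

# A client context that refuses anything older than TLS 1.2, and will
# negotiate TLS 1.3 when the server supports it. TLS 1.3 dropped the
# legacy key-exchange and authentication ciphers entirely, so only a
# small set of modern AEAD suites can be negotiated.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version.name)
```

Individual Azure services control this at the service level rather than in client code, but the same shape applies: enforce a TLS 1.2 floor and let TLS 1.3 be negotiated wherever both sides support it.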
All right, so here's my news.

(05:47):
So the first one is from my old stomping ground.
SQL managed instance now supports service endpoints for Azure storage, which basically
means that you can restrict where SQL managed instances pull their data from.
This gives you really fine-grained control over the storage accounts that are
used.
Net effect is stronger security and honestly better egress control, so reducing the chance

(06:11):
that attackers can actually egress data.
Next one is Azure Confidential Ledger has now achieved SOC 2 Type 2 compliance, a big
deal for all those who require SOC 2 compliance.
And the last one that I have is again from my old stomping ground, Azure database for
MySQL now supports accelerated logs by default, including with support for customer managed

(06:35):
keys.
Accelerated logs are not what we know in the security industry as log files, in terms
of audit logs and debug logs and so on.
These are actually the transaction logs.
Not only does it support accelerated logs by default, but it also supports customer
managed keys as well, which is really great to see.
In fact, that's the only reason I brought it up is because it does support customer
managed keys as well.

(06:55):
So you get a huge performance boost and your log files can still be protected by your own
keys.
All right, so with that, let's switch our attention now to our guest.
As I mentioned before, our guest this week is Diana and she's here to talk to us about
securing GenAI apps.
So Diana, thank you so much for joining us this week.
Would you like to take a moment and sort of introduce yourself to our listeners?

(07:16):
Sounds good.
Hi everyone.
Thank you so, so much for having me here today.
My name is Diana Vicezar.
I am a product manager at Microsoft on the Entra Copilot team focused on bringing GenAI
capabilities to Entra.
So you might have heard of our recent public preview release of Security Copilot embedded

(07:38):
in Entra last November.
So super excited to be here today to talk about a very important topic.
All right, so this one's actually a very interesting episode because it's actually going to be
one of four episodes covering securing GenAI applications.
So before we sort of get into the guts of this, Diana, why don't you spend just a quick
brief moment and go over why we've got four episodes and just really quickly touch on

(08:02):
what the other episodes are so people know what to expect.
So with a group of colleagues in the identity space, we wrote an MS Learn article on how
you can use Entra capabilities to secure your GenAI apps.
So the reason why we're going to have a series of four episodes coming up on this topic is

(08:23):
because we do want to take the time to talk about each of the very important steps and
each of the very important sections of this article where we're going to share more about
how you can use Microsoft Entra capabilities to protect your environment, to protect your
GenAI apps.
And this is the first episode of this series where we're going to be talking about why

(08:45):
it's important to secure your GenAI apps, why businesses are implementing AI more than
ever and how you can learn more about this because I think it's a topic that everyone
should know a little bit about, right?
Everyone should be able to understand the impact.
Everyone should be able to understand the importance of these topics.

(09:05):
So super excited to kick off this series of four episodes and we hope that everyone enjoys
and learns a lot.
So most people have heard about AI, actually, hopefully everyone, right?
Even my kids keep talking about AI and how to use it in school and things like that.
But can you explain a little bit about why it's important to have AI on the business side

(09:33):
today and secure it?
Yeah, that's a great question.
I always like to say that, as with everything in life, there's always a good and a
not-so-good side to everything.
And with AI, it's the same thing, right?
We've seen how AI is so helpful for multiple reasons in order to do multiple types of tasks.

(09:54):
And many of the organizations that are implementing AI in their operations, they do feel some
sort of anxiety about using AI because they're worried about sensitive data, the leakage
of sensitive data, right?
So being able to understand some of the risks that are associated with AI, using AI in your
operations is so, so important.

(10:16):
It's more important than ever, right?
And in order to provide a better picture of this, I like to talk about AI discovery.
And the way that I think about AI discovery is with every new piece of technology, you
know, AI is this exciting new thing that everyone wants to learn about, everyone wants
to play with AI and AI discovery is great, right?

(10:38):
We can use AI for so many different reasons.
For example, you know, if we have a PowerPoint presentation that we're working on with a
colleague and we just lost track of it, I can use an AI assistant and say, hey, you
know, can you show me what are some of the files that I've been working on with this
colleague?
An AI assistant can lead me to those files.

(11:01):
And that's great, right?
Because it really facilitates a lot of the things that we do in our day to day life and
at work.
And AI discovery is great, as I said, right?
But if I am a bad actor, and this is really important to understand, AI is also a great
piece of technology for me, because as a bad actor I can get control of an

(11:23):
identity.
And this is very serious.
And it's way more than just asking, hey, who am I; I can ask an AI assistant
about all sorts of confidential information that I shouldn't have access to, that I shouldn't
know about, right?
So it could be like a new product release and things of that nature.
So we can really see how AI can really be a force for good for certain situations, but

(11:44):
can really be used, you know, for malicious type of intent, right?
So understanding that there is a good and a bad side of this, and how to mitigate the
downside of it, is really important.
So tell me about what are some of the key security risks that are associated with integrating

(12:06):
GenAI into business operations?
What are the things that you're seeing on your side of the house that people are worried
about?
No, I love that question.
And I talk a lot about this with my friends as well.
Because I was talking about intentional AI discovery, right, which is great.
But then there's also unintentional AI discovery.

(12:28):
And I think that everyone should know about this, and organizations should have trainings
for their teams to make sure that they understand these types of situations.
So you know, discoverability of AI, right?
We're talking about, you know, situations where AI systems can provide you with information
that you need for your job, for example, right?

(12:49):
But then we have unintentional AI discovery, which is situations where the AI system provides
you with information that you shouldn't have access to.
I think I provided a quick example earlier.
But in terms of, you know, some of the key security risks that I've seen, and I have
stories. I was talking to Bailey, who is one of my colleagues that is going to be part
of the upcoming episodes on this topic.

(13:11):
She was telling me about a story where she heard about this person at her workplace that
was asking an AI assistant about some sort of information that she needed for her job,
right?
And then she was prompted with information that she realized she shouldn't have access

(13:32):
to and she was like, okay, what is going on?
So that was a clear example of what happens with unintentional AI discovery.
It's not like someone wakes up one morning and is like, oh, you know, like, let me get
access to sensitive confidential information.
No, it's like literally somebody who is like being told, hey, use AI, it's going to help
you for your task and you use it.
And then all of a sudden, the GenAI app is not being well maintained or managed and you

(13:58):
have access to multiple files that you shouldn't have access to and you come out across this
type of data, right?
So we know that there are situations like insider risk, where you know that there
are people that are trying to do bad things within your organization.
So this could be like, for example, asking an AI assistant for your colleague's performance

(14:19):
evaluation, which you know, you are their colleague, you shouldn't have access to that.
You are their peer, right?
You shouldn't be able to query that type of information with an AI assistant.
If you are their manager, you know, you can ask about, you know, evaluations or salary
information because you do have access to the data, to the information.
But if you don't have access to that information, you shouldn't come across that with an answer

(14:44):
from an AI assistant, right?
So you know, really talking about those types of real life scenarios.
Another scenario that I have is with healthcare data where you can come across PII of patients
or employees.
Another example that I heard recently was in finance, right?
So if someone in finance, let's say a financial analyst is asking an internal AI assistant

(15:08):
about like financial modeling.
If you are working at a big bank and you want to know, hey, what should I be looking
at this year in terms of financial forecasting
for such and such industry?
And then all of a sudden, you find out about a merger and acquisition that someone on the other
side of the firm is working on, and you shouldn't know about that, right?

(15:32):
So those are examples of unintentional AI discovery and these are very common today.
So how do we make sure that we prevent those situations from happening?
That's definitely one of the things that are going to be discussed in the next few episodes
with my colleagues, but really, you know, trying to make sure that people understand,
hey, these are things that are happening today.

(15:53):
And the reason why they're happening is not because of AI, right?
It's because we're not doing the basics well.
And when we say the basics, we're talking about why are these situations happening?
You know, these examples that I mentioned, they're being caused by things like excessive
access.
So, like, people having privileges that they should no longer have.

(16:15):
And this is very common when like people are changing jobs within an organization and they
still have the old permissions that they had before, right?
And they don't need those anymore, but they still have those permissions.
Could also be caused by, for example, improper DLP labeling, right?
So we are not labeling our files the way we should.

(16:35):
And it could also be caused by a lack of data isolation.
So these are things that are the basics.
We should be taking care of these basic things.
So one thing that we really want to highlight is like AI is not bad for your organization.
AI is not bad for your operations.
The reason why these situations keep happening is because we're not doing the basics well.
So yeah, that's pretty much some of the examples, real life examples that I have for you all.
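The "basics" described here, checking the caller's current entitlements before an assistant can answer with a document, can be sketched as a toy pre-retrieval filter. The names and data model are hypothetical, purely for illustration; this is not any Entra API, just the underlying idea:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    title: str
    allowed_roles: set[str]  # roles permitted to see this document

@dataclass
class User:
    name: str
    roles: set[str] = field(default_factory=set)  # *current* roles only

def retrieve_for_user(user: User, corpus: list[Document]) -> list[Document]:
    # Only return documents the user's current roles permit. An AI
    # assistant that skips this check, or checks against stale roles
    # left over from an old job, surfaces files the user should not
    # see: the "unintentional AI discovery" scenario.
    return [d for d in corpus if d.allowed_roles & user.roles]

corpus = [
    Document("Q3 financial forecast deck", {"finance"}),
    Document("Peer performance review", {"manager"}),
]
analyst = User("sam", roles={"finance"})
visible = retrieve_for_user(analyst, corpus)
print([d.title for d in visible])  # only the finance document
```

The excessive-access problem is precisely what happens when `roles` is never trimmed after a job change: the filter still passes, but for entitlements the person should no longer have.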

(16:59):
Yeah, I really want to reiterate that, you know, GenAI is all this new stuff and
large language models and blah, blah, blah.
But you can never lose track of the basics, right?
I mean, good secure application development, good secure application design.
You should never ever lose track of the basics, even though it's this new whiz bang feature
thing.
So yeah, I really want to reiterate that.

(17:22):
I also want everyone to know who's listening that, you know, we're not really going to focus
on the depth here.
This is really an episode to sort of set the frame for the next three episodes where we're
going to go into a lot more depth.
I think it's really important that everyone understands kind of what the problem space is
and how we're sort of working on it and how you, you know, as a consumer of these products

(17:43):
should also understand about how to, you know, how to protect the environment.
By the way, just so if anyone listening is not aware, the prior episode to this, we actually
spoke about some aspects of oversharing using basically blueprints, blueprints with a lower
case B, not Azure Blueprints with an upper case B, but just blueprints, like ideas about
how you can actually go around securing gen AI applications.

(18:06):
In this one, we're going to go into a lot more detail on some of the other aspects that
are, you know, essentially the basics of securing GenAI environments.
So I think this is good.
All right.
So we've gone over some of the examples.
I mean, we could talk about, you know, examples of sort of
violating GenAI models and getting them to do things they shouldn't do until we're blue in

(18:28):
the face; there are many, many examples.
So Diana, why don't you just give a quick overview of each of the other three: what's
in episode two, what's in episode three, and then what's in episode four?
Yeah.
So as I said, and you were also saying, this is just an intro episode to what's to come
in the next couple of episodes.

(18:49):
So my colleagues from the identity space, as I said, we wrote this MS Learn article,
which is all about how to secure GenAI applications with Entra.
So in the next three episodes, we're going to dive deep into the specifics and more technical
aspects of using these Entra capabilities.
So for example, in episode two of this series, my colleague is going to talk about how to

(19:11):
discover over-permission issues with Entra.
So, how to use Microsoft Entra Permissions Management to identify and manage over-permissioned
identities in multi-cloud environments to reduce security risks.
So this is very important and they're definitely going to talk about, you know, other real
life examples as well, but getting into the deep technical aspects of it is going to be

(19:33):
so helpful.
So for anyone interested in learning more about this, please, please check out episode two
of this series.
Then in the next episode, my colleague is going to talk about enabling access control.
So she's going to talk about Microsoft Entra ID governance and how lifecycle workflows
can help you also prevent many of the situations that we discussed today.

(19:54):
And then finally, in the fourth episode of this series, we're going to talk about monitoring
access.
So how you can use Entra Permissions Management and Entra ID Governance to monitor access,
to make sure that everything is okay in your environment and that you are preventing
these situations, which are awful, right?
Nobody wants people in their company, in their organizations to have to experience this type

(20:19):
of unintentional AI discovery situation.
So the next three episodes are going to be very interesting.
So if you're excited to learn more about the technical aspects and these capabilities
that we briefly discussed today, please join the whole series.
Fantastic.
So with the sort of intro episode out the way, one thing we always ask our guests, Diana,

(20:40):
is if you had just one thought to leave our listeners with, what would it be?
Yeah, so that's a great question.
I, as I said at the beginning of this episode, I think it's really important to learn about
this topic specifically, right?
To learn more about what happens when you are using GenAI applications in your, in

(21:01):
your organization and you implement them into your operations.
It's important not only, you know, as someone who works as part of an organization,
but even if you are just interested in learning more about AI, I feel like this is a very
important aspect of the application of AI in operations.
So definitely, you know, like take some time to read about it.

(21:23):
There's plenty of information out there to learn more about this topic and please check
out the upcoming episodes on securing GenAI with Entra.
Okay.
Let's bring this episode to an end.
Again, Diana, thank you so much for joining us this week.
I know you've been very, very busy.
And again, just to level set, you know, this is an introduction to the other three
episodes that are coming out.

(21:45):
We hope to do those in relatively rapid succession.
So to all our listeners out there, we hope you found this intro of use, at least just
to give you a background as to what the problem space actually looks like.
And again, we'll show some real solutions in the next three episodes.
So again, thank you for listening.
Stay safe and we'll see you in the next one.
Thanks for listening to the Azure Security Podcast.
You can find show notes and other resources at our website, azsecuritypodcast.net.

(22:10):
If you have any questions, please find us on Twitter at Azure Setpod.
Music is from ccmixter.org and licensed under the Creative Commons license.