
May 20, 2025 · 20 mins

“Pick a health care topic. There’s probably an application for the use of AI there.”

Notable Moments

00:48 – How are health care organizations approaching AI?

02:18 – From outright bans to formal governance

03:25 – Evaluating AI tools like any other tool

04:33 – Data classification as the guardrail

06:46 – Vetting new tools: trusted vendors and measurable outputs

09:23 – Educating staff: Slack updates and living documents

11:13 – AI policy doesn’t override privacy or compliance

13:26 – Managing risk: DLP, endpoint monitoring, and logging

16:23 – Has AI enhanced the day-to-day work?

18:37 – Prompt engineering and the value of context

AI isn’t something on the horizon. It’s already woven into our daily workflows, often in ways we barely notice. As Redox team members, we’re right in the thick of it, navigating both the promise and the risks that come with this powerful technology.

Our aim is to make AI practical, secure, and empowering across our organization. With insights from security engineers Megan McLeod and Brent Ufkes, we focus on the key strategies that work for us. When a new AI tool crops up, curiosity comes first, but we never skip the important questions: Who’s using it? What kind of data is involved? How does it fit into our existing risk frameworks?

Our approach is risk-based. We evaluate AI exactly as we would any other tool, layering data classification and security reviews to make sure nothing sensitive, especially PHI, gets mishandled. Education sits at the core: regular updates in Slack, comprehensive living documents, and clear policies all aim to keep things transparent and flexible. Brent reminds us that all policies work together. AI doesn’t trump privacy or compliance, and training never ends.

We’re building a “culture of learning,” leaning on established security tools like DLP solutions and endpoint monitoring to keep things safe behind the scenes. AI tools are only as good as the context we provide and the prompts we write, and we’re always improving together.
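To give a flavor of what a DLP check can look like, here is a minimal, hypothetical sketch in Python of scanning an outbound message for PHI-like patterns before it reaches an AI tool. The pattern names and regexes are invented for illustration; real DLP products use far richer detection:

import re

# Illustrative PHI-like patterns only; production DLP uses far richer detection.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]+\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\bDOB[:\s]+\d{2}/\d{2}/\d{4}\b", re.IGNORECASE),
}

def flag_outbound(message: str) -> list[str]:
    """Return the names of any PHI-like patterns found in an outbound message."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(message)]

hits = flag_outbound("Patient MRN: 12345678, DOB: 01/02/1990, please summarize...")
if hits:
    print(f"Possible PHI detected ({', '.join(hits)}); hold the message for review.")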

The biggest takeaway? AI can give us a real edge if we put security, clarity, and cooperation first. At Redox, we don’t just adapt to change; we shape it, one secure workflow at a time.

Resources

Have feedback or a topic suggestion? Submit it using this linked form.

www.redoxengine.com

Past Podcast Episodes 

https://redoxengine.com/solutions/platform-security

Matt Mock, mmock@redoxengine.com

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:03):
Welcome to Shut the Back Door, brought to you by Redox. Shut the
Back Door is a health care security podcast dedicated to keeping
health data safe one episode at a time. I'm your
host, Jody Mayberry, and with me this episode, of course,
you've come to know her well by now, Megan McLeod, a senior
security engineer at Redox. Hello, Megan. Hi, Jody. Nice to be

(00:25):
back. And speaking of being back, we have Brent Ufkes
with us, a staff security engineer. Hello, Brent. Hey.
Thanks for having me again. Well, I am glad you're back. And this
topic, I feel, is so relevant right now. We're
going to talk about AI, artificial intelligence. It's
become such a hot topic in not just health care, but

(00:48):
many industries. And health care is
one where, personally, I wonder, well, how is AI impacting
health care? And I've got the perfect people to ask. How are health care
organizations approaching the use of
AI? Yeah. This has been an incredibly hot
topic because there's applications across all sorts of

(01:10):
health care companies and personnel. We see that this is
starting fairly organically, as in people are just
generally interested in the topic. They see news pop up, and
they wonder, how can this make my life more efficient?
Or how can I use this for fun? Or
how can we use this to help increase or

(01:33):
improve patient outcomes? And so we see it across the industry
doing things like increasing operational efficiency, helping
with research or drug discovery, improving
security and compliance, clinical decisions, you
name it. Pick a health care topic. There's probably an application
for the use of AI there. Yeah. So when you're talking about organically,

(01:55):
I know we have seen so many people ask so many questions
in our organization about, can I use this? I know that
this would help my job or, like you said, I'm just really curious
about this thing that I'm seeing in the news just over and over again
every single day. So how can we get it involved in our
processes? Yeah. Some background on it. A lot

(02:18):
of companies, when this was initially coming out, when these tools were becoming
more publicly available, were leaning towards
just banning these tools outright, saying we are not comfortable.
These tools are not compliant with our processes. Don't put data
here. And we've seen that change over time. These
tools have been going through various different certifications that are required to work

(02:40):
in the health care industry: your HIPAA regulations,
your HITRUST and SOC 2 compliance.
And those certifications have made these tools actually available for
use in the health care industry. And seeing
these, like, formal governance programs established, it
allows us to actually start using some of these tools.

(03:03):
One of the things I'm curious about, you've probably,
by now, gone through a shift: AI is coming
out, I see maybe a tool that
might be helpful in my job. I'm curious about it.
So you've probably gone from curious, can I use it,
to more formal? How do you

(03:25):
evaluate that? Like, if Brent has this idea of using
an AI tool, what's the formal process for figuring
out if it can be used, if it's the right tool, is it safe
enough? Give us some insight into that. Yeah. And
I do wanna call out that my perspective is from, like, the information
security side of things, where we're managing our

(03:47):
company's security, the usage of devices, tools.
So the thoughts and opinions there are from that perspective.
But from that perspective, it seemed that this is
kinda similar to any other tool, in which you need to figure out what
is the intended use case for the tool, who will be
using the tool, what kind of data are they gonna be putting into the

(04:10):
tool. And that spun us off on kind
of starting to establish some sort of framework where we could define,
like, this is how this tool is intended to be used. So is it
intended to be used to handle purely
public data? Is it intended to handle internal
data that might be, like, regarding the business and just

(04:33):
sensitive information regarding how the business does its work?
Could it be finances, or is it actually, like, health care
information? And so once we started to kinda establish these
levels, we used an existing data classification
framework that we have here at Redox that helps us say, like,
at what level we need to secure certain data. That gave us a good

(04:54):
guideline as to, like, how we were going to enable the use of these
tools. So then given the use case of, like, we want to use
this tool to do this task, it allowed us to use that data
classification framework to say that we are comfortable with
this sort of protection put in place, or maybe
that this certain type of information is just no longer

(05:16):
allowed to be used within the tool.
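As a rough, hypothetical illustration of the kind of check that framework enables (the levels, tool names, and ceilings below are invented for the example, not Redox's actual classification), a minimal sketch might look like this:

from enum import IntEnum

class DataClass(IntEnum):
    # Hypothetical classification levels, lowest to highest sensitivity.
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    PHI = 4

# Hypothetical ceilings: the most sensitive class each tool is approved to handle.
TOOL_CEILING = {
    "approved_chat_assistant": DataClass.INTERNAL,
    "secure_summarizer": DataClass.CONFIDENTIAL,
}

def usage_allowed(tool: str, data: DataClass) -> bool:
    # Allowed only if the tool is approved at or above the data's classification.
    ceiling = TOOL_CEILING.get(tool)
    return ceiling is not None and data <= ceiling

assert usage_allowed("approved_chat_assistant", DataClass.PUBLIC)
assert not usage_allowed("approved_chat_assistant", DataClass.PHI)  # PHI stays out.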
But generally, it's just figuring out, like, how is this tool
intended to be used and what else do we need to do
to protect the usage as, like, an information security team. And
also, there's all sorts of other education that we wanted to do to end users
too around, like, how do they know that

(05:39):
they're doing the right thing when it comes to using these tools? How do they
know what kind of data they can use? What protections are
put in place to prevent them from doing things by accident? And like you mentioned,
I think that a lot of people see AI as this
really big, like, somewhat insurmountable kind of
tool where it seems so different from

(06:01):
anything else. But it's really not, in the sense that, like you said,
a lot of places already have certain ways to
evaluate when they're bringing on new platforms, tools, anything they're
using in their organization and being able to just bring it
into the context of, well, yeah, what are you using it for? Is this something
that, like you said, is gonna have sensitive data, then it's going to have to

(06:23):
go through a more intensive review process and
things like that. So bringing it back into the context of
existing policies and strategies that you already have, I
think, makes it not such a big problem. It kind of, like, brings
it down to earth a little bit to put it in more of a
manageable evaluation. So when

(06:46):
there's a new AI tool introduced, you've evaluated it,
you've looked at it. How do organizations take that
evaluation? Well, first, let let me wrap that into it. How
do you evaluate the AI tools? And then
once they're evaluated, how do you end up implementing them? Yeah.
So to start with the how do we evaluate tools, there's a couple

(07:08):
different things that we do, and it just depends on to what level the tool
is going to be used. So a big part of it is
relying on the community and just the industry as a whole
to verify, like, how good this tool is. So we bring in a lot
of feedback, really, from the wider Internet
around, like, what entities are trusted around creating these AI

(07:30):
tools. So we tend to put a little bit more faith behind tools
made by what I think of as, like, the big fish: the Amazons,
the Facebooks (Meta), the Googles. There's
already that trusted relationship that they're going to be delivering
products that work and are secure and generate good
output. So relying on those models

(07:53):
does help us increase our confidence around the use cases of the
tools. But also, if we're trying to verify
it, depending on how we're using the tool, there's measurements that we
can put in place in terms of like, how accurate the responses are. So when
you're asking an AI model or prompting an AI
model, you can check to see, like, how good is the outcome or how

(08:15):
close is the outcome to what I was expecting. So when you structure it
with prompts that say things such as, like, give me an output in this format,
you can actually measure the success based on what comes back to you.
So at a large scale, you can set up ways to
compare the models, to say, like, is this going to give me the data that
I'm expecting as a response, with very clear prompts?
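To make that measurement idea concrete, here is a small, hypothetical sketch: the prompt pins down the expected response format, and a scoring function grades what comes back. The prompt, expected values, and query_model callable are invented placeholders rather than any specific product's API:

import json

# A prompt that pins down the response format we expect back.
PROMPT = (
    "List the HIPAA Security Rule safeguard categories. "
    'Respond only with JSON like {"categories": ["..."]}.'
)
EXPECTED = {"administrative", "physical", "technical"}

def score_response(raw: str) -> float:
    # Fraction of expected items returned; 0.0 if the output isn't the JSON we asked for.
    try:
        got = {str(c).strip().lower() for c in json.loads(raw)["categories"]}
    except (ValueError, KeyError, TypeError):
        return 0.0
    return len(got & EXPECTED) / len(EXPECTED)

def compare_models(models, query_model):
    # query_model(name, prompt) stands in for a real model API call.
    return sorted(((score_response(query_model(m, PROMPT)), m) for m in models), reverse=True)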

(08:37):
And being able to choose the ones that give you the most accurate
response helps the company. Outside of
relying on the trusted entities, or the big fish
as I referred to them, there are all sorts of tools from other
lesser known or lesser trusted entities. And so those are the
companies that we do spend a bit more time evaluating, from

(08:59):
both a compliance perspective where we don't have, like, an existing relationship, but
also verifying the inputs and outputs of these
tools to make sure that they're handling our data properly, giving
us good results. Once you've evaluated
the platforms and come up with a policy, how do you
educate your staff and then get the policy in place? That's

(09:23):
a good question. We do a handful of different things. We see
that, typically, it's most efficient for us to be providing, like,
regular updates to our employees. So
at our company, a lot of users rely on our
messaging. So, like, Slack, for example. And
so we post regular updates. We try to give

(09:45):
tips and tricks out to people. So, giving a heads-up, like, this
tool is now available to you, here are some tips and tricks on how to use
it. And just being able to share, you know,
fostering that culture of learning for people. So we end up
going to a lot of meetings, sharing notes. People ask
questions publicly on, like, can I do this thing? Or, I would like to

(10:06):
try this workflow. What steps do I need to take so that I can do
that? And we spend a lot of time doing those sorts of public interactions.
Yeah. And I think what I've seen be really helpful as well is having
somewhat of a comprehensive document. Not that you can capture
everything, but when you've put together something
that you can point the users to, to be like, hey. These are things

(10:28):
that we have approved and evaluated. These are things that are in process.
These are the types of work that are approved for
these different AI tools that we've evaluated,
things like that. So that, yes, we have the messaging, which is really important because
that gets it a little more public facing. But then we also
have this, like, document that people can actually refer

(10:51):
to when they have these questions so that they can be
a little bit more clear about what they can and cannot do or
what different advice we have for them as well. Now, I'm
not involved in health care, but I think this is
probably true in any organization, but
even more sensitive in health care that when you implement

(11:13):
an AI policy, you probably have to be clear and
explain, like, now this does not overrule other
policies. Like, personal data is still a top priority,
and things like that. So how do you reinforce that when
you introduce an AI policy? Yeah. With
that, we've been maintaining a publicly visible

(11:35):
framework that people can refer to and come back to whenever
they have questions around what they can do. But we've also paired with, like,
our legal and compliance team so that we could actually craft an
official AI policy and put that into
our regular training, such that anytime a new employee
starts, they can become educated on what our stance is, what they're allowed to

(11:58):
use, as well as when it comes around with security training.
That way, people can see updates as they're happening.
Because AI is rapidly changing, we wanna make sure that we're able to be
flexible and provide people with updates and enable them to use the
tools that they wanna use. Yeah. I know. As someone who does a lot of
our security training content and things like that, it is

(12:20):
interesting to have new spaces to
explore. And as Brent said, since it is always
evolving, and security as a whole is always evolving. So
being able to update our training, whether that's through
annual trainings or through the monthly trainings that they get
or different things like that, AI has now become a

(12:41):
larger part of that, whereas maybe a year, two years
ago, that was not the case. So just being able to be flexible with
how we're getting the messages out and what kind of content we provide
and what's the most important content. And within that as
well, like, when we're talking about AI specifically,
but even as a whole, the rule is never putting health care

(13:03):
information into these tools. That is something that we have a
culture with at our company already, and I think a lot of health care
companies would have this knowledge that PHI
is a type of data that is very restricted and can only
go in certain areas. And so having that established
with our other tooling, I think, makes it not as daunting

(13:26):
with the AI side of things because that's already kind of
expected with our company as a whole. Well, you've talked about
evaluating AI. You've talked about coming up with the
policies. What kind of protections are you
seeing organizations use to manage the risks related
to using AI? We are seeing people use

(13:48):
a whole suite of security tools that are already in many
teams' stack. We're seeing the use of
tools such as data loss prevention, or DLP, tools
to help watch how people are interacting with the
different applications. These tools oftentimes sit at some sort of,
like, proxy level in between the tool and the back end so that they

(14:10):
can monitor what sort of traffic is being sent through these
tools. That way, if health information or PHI is sent to an
application, it can see, like, oh, by the way, this message
was sent here. Do we want to allow this to happen, or do we want
to investigate further and educate those users on, hey, this is not an
authorized use of this application. And usually, that finds pretty good

(14:33):
success at, like, warning people of, hey, you might be doing something that's unauthorized
at this point in time. But we also see things such as endpoint
monitoring or basically watching from an IT perspective what sort
of applications users are using. That way if we see them
download some sort of AI tool that they're not authorized to use, we get flagged
as a security team to say, hey, by the way, this user wants to use

(14:55):
this tool. Is it approved or not? And it gives us the chance
to go and interact with those users and find out like, is this a valid
use case and should we put this through our more formal process where
we evaluate the application and then its use cases, or is
this just unauthorized? And on top of that, we
do logging and auditing so that we can double check what

(15:17):
end users are doing. This just gives us the overall visibility if we do
need to go do those future investigations as to what
activity is happening. And with the restrictions on
what tools people can and can't use, I mean, yes, we do have
policy, but like Brent said, you can, of
course, block application or software downloads, but then

(15:38):
also web content filtering to prevent access to
some of these sites is also possible. So if you don't want them
to access it through the website either, that's another possibility
to actually prevent the use as well. Well, I'm sure
over the years, there's been other tools
that have come into health care security

(16:01):
that seemed like a problem that ended up being a benefit. Well, even
at one point, the Internet. Right? We didn't have the Internet when it came to
health care security, and now look at what a big
boost that is to everything we do. Maybe we'll find the
same with AI. Megan, before we move
to wrap up this episode, I'm really curious from your

(16:23):
perspective, being more in a senior
role. Do you think AI has helped
enhance the work you do? Or, so far, are you still on the
fence? So I think that there definitely is an enhancement in some
areas. As long as you're using it responsibly and
you're, like, when we're following these guidelines and making sure that

(16:45):
we are, again, not entering sensitive data, things like
that, it definitely can improve a good portion of
processes. Like, I wouldn't say that, at least, I personally use
it for everything. Of course, there are very
limited areas where I use it. But for,
like, some day-to-day tasks that might be a little more repetitive

(17:07):
or if you're looking into trying to get, like,
summaries from some of our documentation, things like
that, it can be really helpful. And I think the other
thing with AI is that people don't realize how ingrained
it is in a lot of, like, just day-to-day
life, in the things that you're using. So if people are super anti

(17:29):
AI, they might not realize that they're actually engaging with
it already without even, you know, being overly aware
of it. So it's kind of something that I see as somewhat of an
inevitability. Like, it's going to be part of
maybe not your work life, but still probably that, and also your personal life.
It's just so prevalent at this point in

(17:52):
time, and I don't see it going away. I see it
more increasing, I guess, in the span of time that we have.
So, yeah. Megan, you're right. It's
everywhere, and I have heard
people say, oh, I'm against it. I don't wanna use it at
all, not realizing how much they actually interact with

(18:14):
it throughout the day. Brent? Yeah. And I think it's
like any other tool out there. There's
the right use cases for given tools. You can't
just pick one AI tool and expect it to solve all of your problems.
Many of the models are trained to do specific tasks. So it's important
to pick the tool that works best for the job, as well as learning

(18:37):
how to use the tool efficiently. We talked about prompt engineering
and that being a whole topic around, like, how do you get these tools
to respond in a manner that is consistent and expected.
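As a tiny, hypothetical illustration of what supplying context looks like in practice, the snippet below folds background material into the prompt ahead of the actual question; the documents and wording are invented for the example:

# Hypothetical sketch: hand the model the background it would otherwise have to guess.
context_docs = [
    "Internal policy: PHI may only go into tools approved for PHI.",
    "The approved-tools list lives in the security team's living document.",
]
question = "Can I paste a patient record into our chat assistant to summarize it?"

prompt = (
    "Answer using only the context below.\n\n"
    "Context:\n" + "\n".join(f"- {doc}" for doc in context_docs) +
    f"\n\nQuestion: {question}"
)
print(prompt)  # This prompt would then go to whichever approved model is in use.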
And there's a bit of an art to it, and there's practice, and there's
tweaking that happens. If you don't ask the right question of these
tools or give it the right prompt to work off of, it's probably not gonna

(19:00):
respond in the way that you want it to. And oftentimes, it does require
some tuning of the input that you're giving it, or what they refer to as
context. And so giving it additional context into the problem that
you're trying to solve oftentimes fills the gaps that those tools
don't have and can help craft a better response. Yeah. And that
kinda ties back into the whole education portion of it as well.

(19:22):
So I think it's been really interesting at Redox, at least,
where we do have the opportunity to have these
talks and to do these kinds of, like, lunch and learns or other
formats of education because, again, people are curious
and making the most efficient
use of these AI tools is really beneficial in

(19:45):
the long run. Yeah. That's a great thought, Megan.
It's going to be here. It can make our work a lot
easier. I personally use it quite a bit, but I
was not sure how it shows up in health care
and health care security. So this has been a wonderful
conversation. Join us in the next episode as we

(20:08):
discuss more security challenges impacting health care and
practical ways to address them. Brent and Megan, do you have
anything else to add as we wrap up this episode?
No. Just thanks for having me, Jody. Yeah. And just remember that we do
have a link in our show notes for ideas,
comments, feedback. We like to hear from everyone about

(20:30):
other topics that you're interested in or different kinds of
perspectives on the topics that we've already covered. And
don't forget to lock the back door.