All Episodes

March 25, 2025 36 mins

"That's when it starts getting really scary. This is no longer just an email trying to get some gift cards. This stuff can lead to the bigger attacks that then can directly impact patient care."

Notable Moments

01:02 Phishing: Persistent Cybersecurity Threat

03:27 Cybersecurity’s Evolving Threats

09:15 Phishing Scams: Calls and Video

10:23 Rise of Deepfake Scams and Counterfeit Reality Attacks

15:43 Vulnerability in Healthcare as Cybersecurity Threats Escalate

21:49 MFA and Password Management Trends

24:39 Stopping Phishing with Email Security

28:24 Advanced Phishing Training Strategies

32:05 Effective Phishing Training Strategies

34:07 Ineffective Automated Training Solutions

Episode Resources

CrowdStrike 2025 Global Threat Report

Resources

 www.redoxengine.com

Past Podcast Episodes 

https://redoxengine.com/solutions/platform-security

Have feedback or a topic suggestion? Submit it using this linked form.

Matt Mock  mmock@redoxengine.com 

Meghan McLeod mmcleod@redoxengine.com

A suspicious email, a text message claiming a lottery win, an urgent request from your "bank": these are all instances of a cyber menace many know as phishing. While the term might initially bring to mind casting a line into a tranquil lake, this type of phishing is anything but relaxing. It's a threat lurking in our inboxes and beyond, which is why it is important to stay vigilant against ever-evolving social engineering attacks.

Phishing has been a thorn in the side of cybersecurity for ages. The goal is to steal sensitive data like passwords or financial information, or to install malicious software on a device, all under the guise of legitimate communication. Attackers capitalize on human error, exploiting the trust between people and technology. Phishing remains a top method for hackers because of its low cost and unfortunately high success rate. As Matt Mock highlights, phishing's simplicity is what makes it so dangerous.

Grammatical errors or suspicious links used to make phishing attempts easy to spot. Now AI advancements are erasing those telltale signs, letting attackers craft convincing, well-written messages in seconds.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:03):
Welcome to Shut the Back Door, brought to you by Redox. Shut the
Back Door is a health care security podcast dedicated to keeping
health data safe one episode at a time. I'm your
host, Jody Mayberry, and joining me is Meghan McLeod,
a senior security engineer at Redox, and Matt Mock,
the chief information security officer at

(00:25):
Redox. And I have to say, Matt and Megan, when you
sent me the title for this episode, I got really
excited because you know I'm a park ranger. I live in Washington state,
and I saw we were gonna talk about phishing, but then I realized
it was spelled wrong, and I went down a wrong path. All the
research I did on salmon and trout now is out the window, and I

(00:48):
have to change direction. So tell me, if I'm wrong from
my park ranger point of view on what fishing is, what exactly are we
talking about? So, yeah, we're talking about phishing from
the security side of things, which would be
more of a social engineering kind of attack against
someone in an organization that can come in a lot of

(01:10):
different formats. That can be through email, through voice calls,
video, all sorts of things with the intentions
of gaining something. So it's a malicious behavior. It could be
through trying to get financial gain, asking you for
money, you know, the the typical scams that you hear about people
saying like, oh, I need help. I'm stuck in jail in another country

(01:32):
and need $5,000. Can you send it over? So, like,
those are kinda, like, the the ones that everyone's aware of. And I know that
phishing has been a concept and known about
for a really long time. So people might be wondering why
talk about phishing now. And I may be a little
biased because that is one of the areas that I work in the most

(01:55):
is phishing. But it is also really critical because
no matter how long phishing is around, it still remains one
of the most common causes of organization breaches
because they're gaining credentials or access or installing
malware on your devices. And as many
technical things that you put in place to try to

(02:18):
avoid these breaches, phishing targets a different element, which
is the people element, and people are not perfect. So
phishing is always going to be relevant. And it's one of those
things that, like, we're going to discuss a bit, like, how phishing has evolved
and what current trends exist because it is just becoming more
and more sophisticated. Even with trained experienced

(02:40):
users, people are having a harder time identifying between
emails that are legitimate coming from customers who are sending
invoicing or previous employees who need to change their
banking information or things like that. It's hard to tell the difference
between a legitimate request and someone who is
doing it maliciously to try to gain information that they should not

(03:03):
have. In Washington, there's a big one right now on
text message about unpaid toll fees. And
since we have a lot of toll roads here in Washington, it's it's caught on.
Oh, yeah. Yeah. I've gotten some of those texts for, like, Boston. I don't live
in Boston, but they're like, oh, you have Boston toll fees. And I'm like, I'm
very sure that I don't. But but I know that scam has been going around

(03:25):
quite a bit right now. Yeah. Those are, like, great points, Megan. And I and
I think one the big thing, like, why we're talking about, you know,
phishing and social engineering is this has been around quite some
time. It continues to evolve, and the big thing is
it's highly successful for the attackers. It's
very low cost. It's a low barrier to entry, but it always

(03:48):
leads to that next thing. So it's very easy for them
and cost effective to just spam a bunch of people and you only
need a very, very small amount to respond to something
or to click a link, to call a number or whatever it might
be to get you that next stage and then whatever can kick
off, whether they're selling the data or they're trying to install something.

(04:10):
The other thing in, you know, since, you know, 2020 and beyond with
more and more remote workers, the big thing too is also people's home
networks and their home machines. So phishing someone's
work email has always been a big thing. You're always trying to grab their network
credentials. But now they all also can really
focus in on their their home accounts where, you know, most likely

(04:32):
you're gonna have less technical hoops to jump
through there. But even if you're limiting what folks have
access to on their personal machines, if they can log into
anything with some of their network credentials on their home machine,
there's the potential for an attacker to capture that info.
So I think that also has you know, raise this up in in recent,

(04:54):
years as well. Yeah. And people can be really creative in the stories that they
tell when they are phishing you because they can come at it from a a
variety of ways, whether it's trying to sound urgent and authoritative
saying, you need to do this. Otherwise, x y z will happen,
or they try to become your friend. They don't even do anything
malicious seeming at the beginning, and they get in your good graces and

(05:17):
seem like they're a trustworthy person
and convince you that way to to give up information. I
know that I personally had a phone call. This was a while ago, but
I got a call, and it seemed very legitimate where this
person was saying that my aunt was wanted for
some crime that they needed to and knowing my aunt, I was like, this is

(05:40):
ridiculous. But, they had all of her information. They were
they sounded very authoritative. They were like, she is not responding to
messages, and we are not able to locate her. So we need your help in
doing that. And and things like that, like, will make people
kind of forget about the security trainings that they have if they if they're
caught in this, like, web of these lies that are being told

(06:03):
that are convincing enough. Well, maybe it was your aunt that was
driving your car in Boston. That's where the tolls came from. That's where it came
from. That ties it all together. Well, this is
interesting to to hear that it's phishing's been around
forever, and it's still an issue and keeps getting
worse. The Economist did a series recently called

(06:24):
Scam Inc all about this. I had no idea it was this bad,
but there are bad people that set up factories,
for lack of a better word, where it's just people that this is all
they're dedicated to is just moving from Matt to Megan
to Jody trying to to get someone to bite on one of these.
Well, since phishing is always evolving, it's

(06:47):
been around forever, but we it just keeps changing. What are the trends
in phishing attacks that you're seeing now? Yeah. I think the big one
that that we're seeing for, like, a a trend and what's helping it
evolve is definitely AI. Right? Everybody is AI
all the time now. Everything's looking how they can use AI
to to better make things more efficient or effective.

(07:10):
And these attackers and scammers are are no different. It's
some of the things that it's actually doing are taking out of some of the
telltale signs that you used to see in things like
email messages or text messages. It's taking out the grammar
issues, the spelling issues that you would see that
right away would raise up like, this is not somebody that so, you

(07:32):
know, English is their native language, for example. Now it's
in seconds, somebody can have a well crafted phishing
email. It can grab information from multiple sources.
You can use, you know, like NotebookLM, for
example. You can put in a ton of different sources and have it
craft, you know, email messages based on all this different data

(07:55):
and continually evolve on those things. So I think that's one
thing that that you're seeing, you know, around the AI. The other is
just on the attack vector itself. AI can help to
better craft the attacks as far as finding out who it should
hit and what those should look like over time.
So maybe the person's not responding and it can change based on if they

(08:18):
didn't respond or how they responded or one vector didn't
work. So it kind of speeds up that, and also takes the
human element out. So it's it's lower cost there. The
other thing that we see with AI in particular
is all the deep fake stuff that's coming out now too. So the,
you know, voicemail phishing, the vishing, if you will, all the weird

(08:41):
ways to to take phishing and make it into whatever it's supposed to
to be in. Those were very hard in the past
to to be successful with because it's very easy that to tell
it's it's a robotic voice. It can't really respond
well. Yeah. There's lots of tells in there to to let you know that this
is not real. But now you can have these AI voices

(09:02):
generated and also have them in real time react to what they're
hearing to really convince people, this is
something that is a real person, helps with, like,
help desk attacks. So folks getting a call with, like, their
seemingly from their help desk, especially in larger organizations where people
don't know individually who's working on their help desk, there's attacks

(09:25):
where, you know, they start spamming the person, and they follow it up with
the voice mail or the phone call to
say that, you know, hey. We're calling. Are you noticing an increase in
spam today? And, of course, they are because this person is doing it, and it
helps them give the, you know, foot in the door, if you will, to get
past that. And then you're also seeing it not just from the voice

(09:47):
side but also on the video now. They're in, you know,
recent news, you've seen these cases where people have fallen
for video calls of people in their own
organization of usually the c levels who they have enough
video out there so they can craft these videos. But of people getting
fooled into sending money to the wrong place or, you know, giving

(10:09):
up credentials because it's hard to get on a video call and it
looks real, it sounds real, And especially if somebody doesn't interact
with the person a lot, it's it can be really tough. So you're seeing
a lot of that stuff out there as well. Yeah. So on that
side of things, Gartner estimates that by 2028,
40 percent of social engineering attacks will utilize

(10:32):
these counterfeit reality techniques. So the deepfakes that Matt was
talking about for the audio and video, like you
said, those are really, really hard to distinguish between. And I
know it's kinda AI can be kind of a joke when you see it on
social media. You see videos that it creates or images, things like
that. And people will be like, why does this person have seven

(10:54):
fingers or things like that? But it's getting more and more sophisticated
to where that there aren't those obvious tells of, oh, they look like a
Simpson character, and that's not what a real person looks like or or whatever
it is. So it it's just getting increasingly difficult. And
following the line of statistics as well, CrowdStrike found
that just from the beginning of 2024 up to

(11:17):
the end of 2024, there was a 442%
increase in vishing, so that's voice phishing
intrusion attacks that they were seeing being
used as a way for people to try to attack an
organization. Yeah. And I think the those are just in
recent, you know, history. And as things start to improve,

(11:39):
you're gonna see these go up and up. And I think the with the
Gartner stat, for instance, one of the things that, you know, they're trying to hint
at there or point to you is that your traditional methods of
dealing with these are no longer going to be good enough, especially if
you're, you know, not using MFA for some reason, which is
a whole another issue if you're not using that at this point.

(12:02):
But even if you are in the type that you're using, that these are
going to get around some of those traditional methods and have
people fall for these a lot more than they have in the
past and just seeing year over year with how
much they're improving and the barrier to entry
is just so low. Yeah. The cost to do this stuff is getting

(12:25):
lower and lower, and the quality is going higher and higher. So it's
going to allow these attackers to do it more cost
effectively, And that's definitely in their mind. Yeah. They're looking they're in
this to make money for most of these cases. Now there
are other things that are gonna drive them, but
most times they're looking for the money aspect. So if they can do these

(12:47):
attacks for cheaper, they're gonna do that as well. So I think we're
just gonna keep seeing some, you know, crazy phishing evolutions
over the next few years due to how AI is evolving.
Well yeah. And like you were alluding to it, the traditional security
features that people use to to try to prevent these aren't as effective. So not
to say that email security isn't going to be critically important. It it

(13:11):
will be. That's still an important element. But if they're
utilizing voice phishing and video calls and things
like that, that's not just from a a strict email
security and training on just email. Now you're having to go into a whole
other realm of training and detection
with your tools. How does phishing

(13:33):
impact health care? Or or what's unique in health care
when it comes to phishing? So when we're
talking about motivations for phishing, like Matt said, a lot of
it does have a financial element. However, they
attain that, you know, through installing malware, through getting
credentials, things like that. But with health care data, that is a

(13:55):
very highly valuable form of data. It's
something that if people can sell health care data,
it comes at a higher cost than than some other things, like just a simple
phone number or other information like that. So because of that,
there is a higher risk for
these attacks to be kind of more catastrophic. Because outside of

(14:18):
just the value financially for health care data to the
individual, their health care data is also very valuable and important.
So for me, like, having my phone number out there in
public is not as big of a deal as getting all of
my personal health care records publicized or
in the wrong hands. Yeah. I think that that's really great

(14:39):
points around that that we've seen in the past, you know, year and a half
now how valuable health care data can be to
attackers. They know the price it's gonna pay. That has
resulted in higher attacks across the board for health
care, where in years past, health care can
kinda sit back and be like, well, for health care, we probably won't get as

(15:01):
tack as tact as much. People will be like, it's there's
patients involved, there's patient data, this isn't super valuable, or we don't
want to, you know, step into this realm. And that's no longer the
case. Now that these attacks are just increasing, you
know, year over year for health care, They know there's a lot of money to
be made there. And we've also seen the direct impact

(15:23):
that like a phishing attack that can lead to
ransomware attack then or just systems becoming
unavailable, how that directly impacts patient care has,
you know, started to come to light with different studies around there too, and you're
starting to see that there's a direct correlation to a cyberattack
and mortality rate. And that's when it starts getting really scary

(15:45):
of this is no longer just an email trying to get some
gift cards where this stuff can lead to the bigger
attacks that then can directly impact patient
care. And I think with health care, there's just a lot of different
angles for them to come in at. You you think, you know, if
you're a health care company that has, you know, just an

(16:07):
office corporate type presence, for instance, you know, you have
the traditional ways you you battle that, but then you start branching out
to where you have your hospital sites or your clinical
settings and now you have a lot of different scenarios that an
attacker can come after. You know, they can, you know, go after your
providers to try to get them, you know, to answer something that

(16:29):
maybe it sounds like one of their patients or, you know, maybe
vice versa that the patient's thinking that it's their doctor and gives up
information. There's just a lot of attack vectors to think of in in
health care. And now the attackers know that that data is is, you
know, very valuable, and they wanna try to try to get to that
data the quickest, easiest, and, you know,

(16:51):
most financially, or the cheapest way, rather,
to get to it. Yeah. And along the lines of the
attack vectors, I mean, health care is so heavily
integrated. Like, there there's not really a health care organization that
operates in its own bubble. They have, you know, your X-ray
company that they're working with or the people

(17:14):
translating their health care data or whoever else. There are so many different
elements that it's not just the individual health care
organization. It's the entire, like, community
around them that you're creating. So it's definitely
a complex issue when it comes to those integrations
and everything. This is a little more out there, but,

(17:36):
Matt, I I just happen to think about it. When we're thinking
about how there is this increase and potential
for people to attack health care on a broader
scale, is there some element of because before I feel like with
ransomware, they would say they didn't wanna do it because there's that potential
for loss of life or loss of quality of life,

(17:58):
things like that. Because if the health care organization goes down, that's an issue.
And those moral considerations
have not necessarily changed. So I'm just curious
to hear your thoughts on, like, is there a shift, do you think,
in the way that attackers are thinking morally about this? Is the
financial payout something that is maybe influencing

(18:21):
their motivations to do that even though there's still those those
moral concerns? Yeah. I I think we've seen it both ways too.
The there have, you know, been in, some groups in the past, like,
six months who have come out and said, one way or another, like, we're not
attacking health care companies or specifically ones that
deal in patient care. And then there's some that say,

(18:44):
you know, you should be protecting your patients better. It's not our fault that you
fell for this and didn't take the correct precautions and the money
outweighs that part. I think you'll continue to see that based on the
group and their motivations. There's also a lot
of issues with the group knowing what they're attacking. I think you'll see that a
lot, especially with groups that are doing the as a service

(19:06):
type, methods there where their
affiliates may be doing the attacking, and they don't have the same,
like, quote, unquote, morals that the other guys do. And
because of that, it can get a little bit
foggy around, like, oh, is this is this actually
impacting patient care? Is it not? I've I

(19:28):
do think that, you know, if this happens to you and then it gets
down to, like, a ransomware event or another
one, you know, trying to actually get that line of
communication going with those attackers to to let them know exactly what they're
doing and try to pull on those heartstrings a little bit is a
a definitely a good and viable tactic because sometimes they may just

(19:50):
not know and they need that information. They may
not care, but it's a good thing to at least, you know, start with
there to make sure that they're aware that this is the
target, and this is just not some bank
or, you know, widget factory that isn't directly
impacting patient care. Yeah. That actually ties in really well

(20:12):
to the third party thing too because I remember
looking into an attack that happened a while ago, but it was against
a university. So the attackers the intent
was to attack this university, but not realizing
that all the university's health care systems were also tied
into that as well. So all of these students now were not able to get

(20:34):
the care that they needed. And I do believe in that attack, at least they
were like, oh, whoops. Like, that's not what we meant to do. We weren't meaning
to, like, impact patient care. So it is, like, an interesting thought
to think about. Do they know what they're attacking in in that sense?
Yeah. For sure. Matt, not long ago, you used
a term I didn't understand, MFA and it made

(20:56):
me realize that, oh my goodness. If, if I don't know
tech really well, if I don't know it really well and someone
comes and I think it's you starts emailing me about
if you're not using MFA or other terms I don't know. I
feel like I could easily get duped there. Yeah. And that this
is, like, one of the key concepts, yeah, out there. There there's kind of two

(21:19):
things that they're kind of baseline now that you feel that,
like, everybody should be be doing, but but they're still
not. One of them is MFA, which is multifactor
authentication. That's using, you know, some other form than your
password. So you gotta put your password, and then you have to, you
know, use your authenticator app to click a button or put in a

(21:41):
code. Could be biometric, so you're using your your
fingerprint, something that aspect. It could be your facial
ID. So it's that second piece so that if somebody gets your
password, they still need one other other piece. And
it's it's definitely not foolproof. There are many ways around that stuff these
days. That's why, you know, there's a big trend for

(22:02):
zero trust and going passwordless, and that's that's definitely
another show that we'll have to get into, a lot of details. But I think
going along with MFA is people using some sort of, you know,
password management system too. The
days of just, like, writing down individual passwords that you can
remember, you know, those can be broken so quickly now with the

(22:25):
power that that we have. Using a password manager allows you to
use something that you don't even really know and it's different for
each thing. So you combine that with MFA. It helps you out.
Where the password part doesn't do you a lot of good is if you fall
for these phishing attempts and you put those credentials in there. That's when
you're hoping that the MFA piece then is enough to protect

(22:46):
you because now they have your password, but they might not have the
MFA piece. In some cases, they will have that as well depending
if they've hit you with certain info stealers and they get it to it in
time. But, yeah, that's like the extra piece. You know, it's
kinda like the key in the deadbolt. You know, maybe they have the key, but
they they can't turn the deadbolt type thing. Well, yeah. And, like you

(23:07):
mentioned earlier when we were talking about the, like,
MFA spamming. Just if you do just have a push notification,
there are times and it's funny because I actually just recently, sent
out a training for this for our organization. But if it's a push
notification, the attacker can just keep spamming it so that
your phone, if it's your phone that you're using to,

(23:29):
verify, is almost unusable because you just keep getting
these notifications of accept this attempted login, accept this attempted
login over and over again. And that does wear people down.
And so, eventually, there is the case where you just might end up clicking,
yes. Sure. Just stop stop bugging me, kind of thing. And
that's where having a more sophisticated MFA in

(23:52):
place is helpful as well. Because if it's something
where your device itself, like, has a token on
it that, like, it needs to be on this device
in order to validate it, it's not a push notification. It's
validating through your device. So there there are different ways that can kind of
help with phishing proof or more

(24:15):
phishing proof MFA situations as well.
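(To make the second-factor idea above concrete, here is a minimal, hypothetical Python sketch of a time-based one-time password, the "put in a code" authenticator-app factor Matt describes. It follows the generic RFC 6238 TOTP scheme rather than any particular product's implementation, and the example secret is made up. Note that plain TOTP codes can still be phished in real time, which is why the device-bound, phishing-resistant factors Megan mentions are stronger.)

```python
# Hypothetical sketch of RFC 6238 TOTP, the "code from your authenticator app"
# second factor discussed above. Illustrative only; not a production design.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, step=30, digits=6, at=None):
    """Derive the current one-time code from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)   # 30-second window
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                    # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def verify(secret_b32, submitted, drift=1, step=30):
    """Accept the submitted code for the current window, allowing slight clock drift."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, step=step, at=now + w * step), submitted)
        for w in range(-drift, drift + 1)
    )

# Example with a made-up secret: server and authenticator app share the secret,
# so both derive the same six-digit code for the current 30-second window.
# verify("JBSWY3DPEHPK3PXP", totp("JBSWY3DPEHPK3PXP"))  -> True
```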
Well, what Megan and Matt just told us certainly gives
us some ideas on what we can do, MFA
and others. But what else can we do to prepare
for phishing attacks? Because now you've got me believing it's just
eventually going to happen. You're eventually going to get tech attacked.

(24:37):
So what can we do to prepare for that? Yeah. I mean, I would think,
honestly, probably most people have been attacked with phishing in
some way or another depending on how sophisticated it was
or not. You might not, you know, it might not be something that actually matters
to you. Or, like, your spam policies might have trapped it in your email. So
that is kinda one of the first lines of defense, like, we're

(24:58):
talking about for for email security at least, for email phishing, specifically.
If you have good email security in your organization, you
can actually stop those messages from even getting to the user in the first place.
So if there's a really convincing email that says, here's this
invoice that you need to pay or whatever it is,
if the email security stops it, that user doesn't even have the opportunity to see

(25:21):
it and have to try to negotiate with themselves. Is this real? Is
this not? Or just click on it quickly, because that's a trend as well. People
click very quickly. So with that, like, email security
is great. Matt, do you have thoughts about
other forms? So, like, the voice and video
phishing prevention situations? Yeah.
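(Before the conversation moves to voice and video, here is a small, hypothetical Python sketch illustrating one signal behind the email-security layer Megan describes: checking whether the apparent sending domain publishes a DMARC policy, so spoofed messages can be dropped before they ever reach an inbox. It assumes the third-party dnspython package, and real gateways combine many more checks such as SPF, DKIM alignment, reputation, and content analysis.)

```python
# Hypothetical illustration: look up a sending domain's published DMARC policy.
# Assumes the dnspython package (pip install dnspython); this is only one of
# many signals a real secure email gateway evaluates.
import dns.resolver

def dmarc_policy(domain):
    """Return the domain's DMARC policy ('none', 'quarantine', 'reject') or None."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.lower().startswith("v=dmarc1"):
            for tag in record.split(";"):
                key, _, value = tag.strip().partition("=")
                if key.strip() == "p":
                    return value.strip() or None
    return None

# Example: a message claiming to come from a domain whose policy is 'reject'
# but that fails authentication can be filtered before the user ever sees it.
# print(dmarc_policy("redoxengine.com"))
```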

(25:44):
I think there's definitely new technology coming out around,
like, the deep fake stuff. So I think you'll start to see that being
incorporated with some of the, like, endpoint detection
systems that are out there and some new standalone systems too
that companies can deploy to look out and, like, alert
when something is a potential deep fake. For the voice

(26:06):
stuff, no. Most times, those are gonna come on on personal
cell phones or on cell phones. So I think, and this is a general theme
for phishing is, like, still the the best thing that you can do
is is train your employees. You know, it's the least
fun of any of the stuff. You know, there's no cool
technical implementation to it. People will get kind

(26:28):
of overwhelmed and tired of hearing about phishing
training, but it is still, like, the number one thing that you can
do. But I think you need to do it for these new
emerging types. And a lot of the very
boring canned check the box phishing training
that are out there are going to do the traditional stuff, and people

(26:50):
are gonna be like yada yada yada through it. And they're
gonna be like, I I took it. And then they're not gonna be thinking that
I could be on a call with somebody who's not that person, or I could
get a, you know, voice mail that sounds exactly
like our CEO and it not be that person.
And getting on those topics, like, are are really

(27:12):
important in making sure that that extends
throughout your organization. So everybody who works there
is getting it. It's not just you know, you know, the main folks
in finance or or IT. You really need everybody to
have training and to have that reoccurring so that
they stay on top of trends. And then think,

(27:34):
Megan, you're probably, very close to you know, you
do all the trainings, but then you have to make sure that that training's
working and, you know, how do you do that? Yeah. No. I I I
also love, Matt, that you said check the box training versus
effective training because that's something that I talk about a
lot. You probably heard me in another podcast also just saying, like, check the box

(27:56):
security versus, like, actual effective security. So with that,
like, keeping the training engaging and being
like, going with the trends, like you said, is very important. And
with the phishing tests, how you mentioned seeing that they are
effective. So so you wanna make the phishing tests difficult. If you're just
starting out with phishing training, it's good to probably get, like, a baseline with your

(28:18):
email phishing tests and things like that and see, you know, how many people are
gonna fail this, who needs more training, things like that. What kind of training do
they need? Are they is everyone always clicking on attachments, but ignoring things that don't
have attachments, things like that? But you can get harder and harder with your tests
as you mature through the program. So, like, I know that there are different
departments in our organization who pride themselves on

(28:40):
never ever falling for phishing tests, and they think they're so easy and all of
that kind of stuff. But lately, like, we have been able to craft phishing
tests that they're falling for as well. So they see that even though
they are trained and they're very on top of
looking out for some of these indicators, those aren't always gonna be the most
obvious indicators, like, we've kinda chatted about. And I'm actually really

(29:02):
excited to see, being on the phishing test side of
things, how that does evolve with the
increase in AI and voice phishing
and video calls. Because I would honestly
love to be able to start to craft trainings that incorporate,
like, a video call from our CEO and handing

(29:24):
it out to employees and see see how they
react. And I think that when you get those that data from
the phishing test that you're doing, you can see where your organization needs to
work a little bit more on that training, and you can
then move your training programs accordingly. So you're you're matching the
tests with the training as well. And I

(29:46):
know I tend not to be the most popular right after a phishing test
when people realize it. But the goal is not to
shame the people at the organization. That's, like, not the
point of it at all. When people are going through a phishing
test, there are, like, multiple things to consider as far as, like, past
failure situations. Of course, if they enter their credentials or they're giving away

(30:09):
data, that's not good. But if they report that to
security right away, that's another part of that phishing test.
So, like, when you're testing users, you're also wanting to test for
their reporting and making sure that they're letting security know
when something goes wrong. Because and so that kinda ties into the, like, don't
shame your employees. Because people will want to sweep things under the rug

(30:31):
if they think, like, oh, I just did something bad, and I don't want
anyone to know about it. For one, security does tend to know,
in the phishing tests, at least. I can see. So if you
try to hide it, it doesn't work. But if it was an actual
phishing attack, that could be pretty devastating to an organization.
If a person does realize that, like, whoops, I did something that might

(30:54):
be bad and they don't bring it up to anybody, then we
might not know that there even is a problem out there. So someone could have
the access for a much longer time than if if that
happens and immediately security is notified, we're able to act
on that. And, like, use our other, you know, large suite of
security tools to identify where that person

(31:16):
could be. We can help the user change their passwords
and adjust accordingly. But if we don't know, we can't
help. So Yeah. Couple great points in there.
The thing to add around, like, the the phishing test and folks
not maybe passing those or or failing the test. I think
it's important for, you know, security or IT to work with the

(31:38):
HR department as well to come up with yeah. It it should
really be part of just, like, the doing your job type standard
policy. But if you have people who are just repeat
offenders to have a way to deal with that, because the the normal, you're gonna
have people falling for these. That's the education part. But if you do have
users who just continually they're not getting it,

(32:01):
Those are your weakest parts as far as falling for these these scams. So
you wanna have those ongoing talks and have a way to
to address it and then to be able to measure that too, because
sometimes that's the hard conversations to have because you're, like,
overall in the phishing, it's not something that we're looking to
get people in trouble for. You want people to to come forward, but

(32:24):
inevitably, you're going to have that percentage of people who just
aren't getting it. And they might need different training. They might
need to have more examples to show the impact. I
think that's also really helpful for a company to even
do individual department trainings with the impact of,
like, this is what could happen, and here's some examples

(32:46):
of these phishing attempts that have come in that people have fallen
for, and here are things in the news that happened due to this.
And that kinda helps them connect the dots. And to Meghan's
point earlier about the training, I think the really boring training people
are not gonna come away with anything new. So
looking for those trainings that are keeps people's attention,

(33:08):
they're shorter, They hit on the topic that you're trying to get
to. Those are our key. Because even if you quiz after there,
you know, there's people will get around that stuff. But having it be
entertaining and people want to watch the training, that is a key
piece of that. It's not your typical just corporate training. Watch this
video that's fifteen minutes long to talk about phishing, and, you

(33:31):
know, nobody's paying attention to that. Yeah. So it's, like, some of our
more recent trainings that we've moved to that are more entertaining,
I love to see the chatter around it. Because it might not the
conversation that people are having, it might not be about the security
element specifically. But they're still watching the video and they're noticing
funny little things. And then they are like, oh, wow. I realized

(33:53):
that maybe I shouldn't have the sticky note on my laptop with my password,
which or things like that. Things that, like, you might think, like, yeah, everyone
knows or is not doing. But then they see it done in this way, and
they're like, oh, that would be a silly thing to do. So it is nice
to keep it engaging. And then along with the the repeat
offenders as well, I think I see organizations a lot just have an automated

(34:16):
training video that's the same video every time or the same
quiz or test that they have to go through every time when they are repeat
offenders for failing tests. And that's not
effective. Because if it's not moving the needle at all, if they are still repeat
offenders and they're getting the same training every time, obviously, that is not
fixing the problem. So like you were saying, Matt, you have to actually, like, take

(34:38):
those metrics and do something about it. So if you see that that training is
not actually doing anything for anyone, and it's just the same
habits are continuing, then you have to adjust whether that is, like you
said, through more personal training, department wide, or even for the
individual if it's a specific individual that keeps on doing it.
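(A small, hypothetical sketch of the kind of tally behind "take those metrics and do something about it": per-campaign click, credential-entry, and report rates, plus a list of repeat offenders who may need different, more targeted training. The field names and threshold are invented for illustration, not any specific platform's schema.)

```python
# Hypothetical illustration of phishing-simulation metrics; field names and the
# repeat-offender threshold are invented, not a specific platform's schema.
from collections import Counter

def summarize(results):
    """results: list of dicts like
    {"user": "a@example.com", "clicked": True, "entered_credentials": False, "reported": True}"""
    total = len(results)
    return {
        "click_rate": sum(r["clicked"] for r in results) / total,
        "credential_entry_rate": sum(r["entered_credentials"] for r in results) / total,
        "report_rate": sum(r["reported"] for r in results) / total,
    }

def repeat_offenders(history, threshold=2):
    """history: iterable of (user, failed) pairs across multiple campaigns;
    returns users who failed at least `threshold` times and need tailored training."""
    fails = Counter(user for user, failed in history if failed)
    return sorted(user for user, n in fails.items() if n >= threshold)
```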
There are other conversations that have to be had. And again, in health care

(35:00):
specifically, there will be a population of your
organization that has more PHI, like health care, data
access, and things like that. So their training might focus
on those kinds of attacks a little more. Or, like, your finance
department, like I've mentioned, invoicing a lot and things like that. Your finance
department will have to deal with that a lot more. So having and knowing

(35:22):
these nuances is really helpful in an
organization. This has been a fantastic conversation,
and I know I will rethink it the next time. Or when Matt
asks me if I wanna go on a fishing expedition, I'm gonna have to rethink
that. Well, if you've enjoyed this episode, you can join
us next episode as well as we discuss more security

(35:44):
challenges impacting health care and discuss practical
ways to address them. Matt and Megan, do you have anything
final to leave us with? Just if anybody has any
ideas, thoughts, opinions, topics that they'd like us to
cover, you know, please, submit those. There'll be a link
in the show notes to be able to do so, or feel free to to

(36:06):
reach out to Megan or myself as well. And, yeah, we will also
have some resources in the show notes as well going over what
we have chatted about. And don't forget to lock the back door.