Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
You're listening to
The Audit, presented by IT Audit
Labs.
My name is Joshua Schmidt, your co-host and producer, and today
we're joined by the usual suspects, Eric Brown and Nick
Mellom of IT Audit Labs.
Today we're also joined by Ed Gaudet from SenseNet, so we're going to
be talking about ransomware and how that's affecting patient
health.
I know Ed has done a lot of work with that, and
so, more specifically, when ransomware attacks hit a
(00:27):
hospital, most people think it's mostly about stolen data, but
patients are actually being impacted by these attacks.
So we're going to get into that today and, without further ado,
I'm going to turn it over to Ed.
Can you give us a little background on yourself, Ed, and
what you've been working on?
Speaker 2 (00:43):
Yeah, thanks, Joshua.
Thanks, folks, for joining the podcast today.
I'm Ed Gaudet, I'm the CEO and founder of SenseNet, and I guess
you'd call me the serial entrepreneur, although I hate
that label.
People are so much more than the labels, right? But I've been
in tech and solving problems with tech since I graduated
(01:05):
college, so it's been a long, strange trip, as they like to
say in the wonderful world of the Grateful Dead, Joshua.
I thought I'd throw that in there.
Speaker 1 (01:15):
A little shout-out to
the song we talked a little bit
about earlier.
Yeah. Speaking about connections, Eric just joined
you on your podcast, Risk Never Sleeps, which you aptly named
after the Rust Never Sleeps Neil Young record, I believe.
Speaker 2 (01:30):
I did.
Yeah, I did.
You know, I was thinking about a way to personify risk,
especially in healthcare.
It's 24/7, right? It's not a bank, it doesn't close right
at five. Patients are coming in at all hours.
Care is being delivered at all hours, so risk is always there,
(01:51):
it's always on, it's always present.
And when you think about the personification of risk, you
know, as people we eat, we sleep, we work, we play.
This notion of sleep was really interesting and pertinent,
because risk never sleeps. It's always there, it's always on.
Speaker 1 (02:09):
Well, I'll start out
here with a little statistic,
then I'll turn it over to Eric and Nick.
You shared with me: from 425 incidents impacting 27 million
people in 2020 to 592 incidents affecting 250 million in 2024.
What's driving this explosive growth in healthcare-targeted
(02:30):
attacks.
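The growth Joshua cites is asymmetric: incidents grew modestly while the number of people affected grew by roughly an order of magnitude, which implies much larger breaches per incident. A quick back-of-the-envelope check of the figures quoted in the episode (the figures are theirs; the arithmetic is just illustrative):

```python
# Figures quoted in the episode (approximate, per the hosts).
incidents_2020, people_2020 = 425, 27_000_000
incidents_2024, people_2024 = 592, 250_000_000

incident_growth = incidents_2024 / incidents_2020  # ~1.39x more incidents
people_growth = people_2024 / people_2020          # ~9.26x more people affected
avg_2020 = people_2020 / incidents_2020            # people per incident, 2020
avg_2024 = people_2024 / incidents_2024            # people per incident, 2024

print(f"Incidents grew {incident_growth:.2f}x; people affected grew {people_growth:.2f}x")
print(f"Average breach size went from {avg_2020:,.0f} to {avg_2024:,.0f} people")
```

In other words, by these numbers, the average incident in 2024 touched roughly six to seven times as many people as in 2020.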
Speaker 2 (02:31):
Well, I think the bad
guys are following the money,
right?
It's the old adage, you know: bank robbers steal from
banks, because that's where the money is, right?
So it went from data theft, which, again, is
problematic, but especially in finance you can figure out a way to
make people whole if they lost their identity.
(02:52):
But in healthcare it's a little different.
If your data's out, your data's out.
It's hard to get that genie back in the bottle.
And that was all good and well until right around 2015, 2016, when we
started to see this thing called ransomware occur, and it was
the ability of the attackers to actually shut down the
(03:13):
operations based on the data, locking up the data in a way
that was unusable, so therefore you couldn't operate.
And today, most everything is digital when it
comes to patient care, and this notion of locking out doctors
and nurses and clinicians and administrators from delivering
(03:34):
care becomes a real problem, especially if you're 24/7, 365
days a year.
You've got people that are getting ready, maybe for surgery
next week, or the next day, or in a couple of hours, or somebody
is coming in via ambulance and they can't get into the health
(03:55):
system, and maybe that health system is in a burgeoning city,
so the next one is maybe a mile away or two miles
away.
But if you're somewhere out in the country and the next
available hospital is two and a half hours away, that's a
problem, especially if you're the one in the ambulance having
the heart attack, right?
Minutes mean muscle and muscle means lives.
(04:18):
So we started to look at it, and we started to, again,
anecdotally, believe that ransomware and these attacks,
these incidents and events, were causing patient harm.
But no one could really point to any research, qualitative or
quantitative or otherwise.
So, back in, I think it was actually during COVID, we approached the
(04:40):
Ponemon Institute. You may know them; Larry Ponemon does a lot
of research on the cost of a data breach and the cost of a
stolen data record.
And I had this thesis that, hey, there's more happening within
the healthcare environment than we know.
It's not just the data theft; it's actually impacting care.
So let's qualitatively study that.
(05:02):
And so we built out a research study that literally showed, if a
hospital had a ransomware attack, they were seeing some
increase in mortality, or they were getting diversion of care
for incoming ambulances, or they were canceling labs or
other procedures.
(05:24):
We were seeing this data and realizing, holy cow,
qualitatively or at least directionally, something is
happening here.
And then, I think we published that on a Wednesday, I got a call
from Josh Corman over at CISA, and he's like, what's your
methodology?
I shared it with him.
He said, we're publishing a study in two days which is
quantitative.
(05:45):
It'll back up what you're finding.
So that's when we realized, okay, we've got a big problem
here.
Right then we realized this is more than just data.
It's about saving people's lives.
And it became personal, because everyone has been a patient, or
they have a family member or a friend, a mother or an aunt, you know,
(06:06):
a sister, a brother or a friend hooked up to a life-saving
device, and you definitely don't want a ransomware attack on that
hospital or that device.
Speaker 3 (06:17):
In your point of view,
how is healthcare industry cybersecurity
different from outside healthcare, whether
it's OT or private organizations, a school district per se?
Is there a big difference between those two areas?
Speaker 2 (06:30):
Well, I think, if you
compare it to traditional
industries, and again, having been through creating products
for many different companies and focusing on many different
industries, I've always looked at healthcare as a late adopter
of technology.
They used to be five years, you know, behind everybody else
prior to 2009.
(06:51):
And then in 2009, the Obama administration passed the HITECH
Act as part of the ARRA legislation, which provided really a forklift
upgrade of the technology infrastructure for these
healthcare organizations, and they were able to really
digitize all the paper-based processes, right? Care delivery,
(07:16):
which was on a paper chart, went to an electronic medical record,
an EHR or electronic health record, and everything else around that
that supports the electronic medical record was also
digitized, right?
And so now you went from not having to worry so much about
downtime of servers or your network or your infrastructure
(07:39):
to, wow, if it goes down, we can't deliver care.
And it was in a very short period of time, 2009 to, I'd say,
2019. Really, that ten years was a very transformative period for
healthcare.
Whereas everybody else sort of moved at the
pace of the technology, right, healthcare had to catch up.
(07:59):
So you had sort of that compression of that
infrastructure update, the transformation of all those
processes now onto electronic systems, and then you throw on
top of it the introduction of ransomware.
It's the perfect storm.
Speaker 4 (08:16):
And, Ed,
I spent some time in healthcare over the years, doing some
of those transformations of systems, going from paper to
tools like Epic, and the ICD, or the International Classification
of Diseases. Was that ICD-10?
I think ICD-10 is the current version of that, and I'm interested
(08:41):
in your take on this, but what I learned about that was largely
this: the transition from paper to this electronic health system,
yes, it benefits the patient, it makes data portability easier,
but it is also really to the benefit of insurance, and these
(09:05):
ICD codes are billing codes that insurance uses.
And, you know, you can search on the internet, there are some pretty
funny ones out there, like "encounter with a duck" or, I think,
"initial bite from a pig", just these different codes.
And, you know, there are thousands of
(09:28):
codes. Essentially, care is provided, and then the care that
the person receives is codified, and then that helps insurance.
Speaker 2 (09:42):
With reimbursements.
Exactly, you're spot on, Eric.
Speaker 4 (09:45):
You know, as you
think about it, it is absolutely
wonderful that as a patient I can log into a portal and I can
get information about the care that I just received. I can see
the doctor's summary and all of those things.
I think insurance has driven a lot of the technology
(10:06):
advancements, but those advancements, as you say, in the
healthcare space introduce a ton of risk, and we've seen over
the years millions of records exposed in ways that they really
shouldn't have been if they had been protected with due
diligence.
Speaker 2 (10:25):
Yeah, that's right.
And, you know, I feel in some ways we made such
progress on the threats around ransomware, and now we've got a
new threat which has exponentially opened up the
attack surface, called AI.
I'm sure you guys have heard of that.
Yeah. So, you know, on one hand, we wouldn't be able to realize
(11:08):
all the workflows and those use cases to deliver better care
outcomes.
We always talk about that, and about applying technology to do that.
But for the most part, I would say over the last decade or two,
we've really been, you know, paying our dues, if you will.
It's been more painful than it has been a
process of healing.
I feel like we're starting to go through that now.
(11:29):
But with AI, there's a huge promise around the advancement
of technology.
The problem is that it exists within the context of all of the
products and services that we currently use, that are in
inventory, right? And some of them we know about, but most of
(11:49):
them we don't know about.
I remember when Adobe, you know, I don't know if
you guys went through this, but all of a sudden Adobe's got AI
in the product, and I'm like, what the hell? I'm
trying to turn it off.
I can't find the actual X.
There's no X to turn it off or disable it or anything.
So I go on Reddit, and Reddit's blown up, it's all red,
(12:10):
right? They're just slamming Adobe for turning
this on without a way to turn it off.
They eventually recanted and made it optional, but man,
it was brutal.
I literally had to uninstall it, because we're a security
company.
I couldn't risk having Adobe do whatever it was doing.
I had no idea what it was doing.
That was the point, right? And so I immediately took it off my
(12:33):
system, and then I monitored it, and then, shortly thereafter,
they did the right thing, which is good. But Microsoft's doing
the same thing.
Shame on you, Microsoft.
Like, you're doing the same. I turned you off, and then
you turned yourself back on.
Speaker 3 (12:47):
What the hell?
Speaker 2 (12:49):
Why are you doing that?
What is wrong with you?
Right, it's a huge issue.
What are they doing?
Why are they doing that?
Do they not learn?
But sure enough, they don't learn, right?
So imagine, that's just on our own laptops, right?
Imagine you're a health system and you have to manage thousands
of nodes and endpoints and laptops and devices,
(13:11):
both personal and professional, and they're turning it on and
turning it off and all hell's breaking loose.
That's what the CSO and the CIO are dealing with today, on a
daily basis.
Speaker 3 (13:24):
I think the issue
that we had, if I remember this
correctly, it was at a client, and they're obviously a
Microsoft shop, and Microsoft's migrating MFA. I don't have
all the details in front of me right now, but we had the text
option turned off at the client, so you couldn't receive the code
through text, and we had set the time to migrate.
We had all that in place.
(13:45):
The migration was basically done, but it hadn't been turned
on yet, right, because we weren't quite to that
point.
Microsoft, on their own, turns back on the text messaging
option. And somebody did use it: from a phishing email, they used
that login to get into a personal Gmail on a company
(14:06):
they clicked it went through, sowe had an issue there it could
be worse it could be worse, wecould be worse.
Speaker 2 (14:12):
Nick, we could be at
a Coldplay concert with our HR
director. I had to do it, guys, I'm sorry.
Speaker 1 (14:23):
Too soon.
Speaker 4 (14:23):
Too soon.
Speaker 1 (14:24):
No, perfect timing.
Speaker 3 (14:26):
The power of the
internet, right?
Speaker 4 (14:33):
Ed, you talk about
Microsoft turning things on and
off. I just recently ran through
this with a customer where they
went a different direction than the Defender product suite,
which Microsoft is trying to cram into everything, and that
thing is like a virus.
They went with the Palo Alto solution.
Yeah, it's a real problem.
Speaker 2 (14:53):
I mean, you know, if
you're going to be secure by
default and secure by design, right, then be secure by default.
And we learned this lesson.
Obviously, like everybody else, we took a look at AI when
it came out.
I mean, it's been around forever, right? We've used it in
certain areas of the product, but never like the generative AI
or agentic AI of today.
(15:14):
So we looked at that when it came out a couple of years ago and
said, okay, this is a game changer, but we have to figure out how
we build it in, because we're not like everybody else.
We have a fiduciary responsibility to our customers
to not just force them to adopt AI, like I went through with
Adobe.
So how are we going to do it?
So we built it in, we partnered with AWS, we did a
(15:42):
self-contained approach to the infrastructure and the
architecture, and then we enabled customers on demand. By
default, it was turned off.
If they wanted one capability, or all of them, or some
combination, they could turn those on when they
were ready, based on their ability to consume it, and
that's really what customers should be getting, and that's
(16:03):
what vendors should be delivering to customers, not
forcing these on by default, which is disastrous, right?
Speaker 4 (16:11):
And that Defender,
like, you've got Defender for
Office, you've got Defender for SQL, you've got all the
Defender for 365 products.
They just throw Defender in front of it and then, like a
virus, turn it on behind the scenes, making it really
hard to disable. And it's like, well, if they made a good
product, hey, we'd love to turn it on and use it.
(16:34):
But it's just not a great product, and it's unfortunate
that Microsoft has stepped so far away from the ecosystem of
the Office suite and email management into all of these
derivatives where they are certainly not best in class,
and it's really frustrating, like you say.
Speaker 2 (16:57):
No, and they're using
their customer base as a lab.
Right, they're turning it on to make it better, because
eventually, you know, they'll figure it out, right? And they
must do.
You know, they're smart guys.
I've been out to Redmond, I'm out to Bellevue a lot.
They're really smart, right?
So someone's running a model. Talk about, you know, an actuarial
(17:17):
model, like the insurance companies do.
Someone's running models that say, you know what, we know it's
a problem, we know people are going to complain about it, but
guess what?
The benefit's greater than the pain we're going to cause.
So we're going to just do that.
Although I can't understand why they're doing it.
They're Microsoft, they don't need to do this.
Speaker 4 (17:36):
It's a brilliant
model, though, because if their
software wasn't so shitty to begin with, we wouldn't have to
patch it constantly.
I mean, look at the amount of patching that comes out for
their nonsense month over month.
So I've got to buy your crap to begin with, and then I've got to
pay you to secure it as well, which is just ridiculous, right?
So, no, I'm not going to buy the Defender product.
(17:59):
You're not going to just force-feed me a truckload of your
stuff.
Speaker 2 (18:04):
That's right. But, you
know, on the other hand, there
are worse vendors. So, Microsoft,
I am a happy user, so please don't make my world...
Speaker 4 (18:16):
Please don't make my
world a hell, please.
Speaker 1 (18:19):
You both already have
a [inaudible].
I know, exactly.
Speaker 2 (18:23):
I feel like I'm in
the Matrix.
It's going to come and open up my pod and rip me out.
The future is these agents.
Speaker 3 (18:30):
I'm like slamming
down, I'm not going to say a
word.
Speaker 2 (18:33):
Yeah. So, you know, I
noticed you guys are moving
around.
Am I supposed to?
Speaker 1 (18:39):
It's okay if you move around a little here.
You can do whatever you want, okay.
So, yeah, bringing it back to, you know, kind of this
risk around AI and some of these user interfaces, you know,
ChatGPT and other AI technological advancements, what
are some of the risks that you've seen crop up that our
(19:00):
listeners might not be aware of?
Speaker 2 (19:02):
Yeah, I mean, there's
risk to data.
Data quality is a big issue.
The data could be biased, right? So parts of the population
that you're serving could get different types of care.
You could get different types of diagnoses and results from
the actual analysis based on bias.
You could also have hallucinations,
where the output just is not in sync with the context of what you're
(19:25):
trying to accomplish.
You know, you could lose the data because you're sharing it
for training purposes.
So let's say you're working with a tool that's helping you, you
know, through the analysis and implementation of better care
through data, but the data then is going outside, and maybe it's
(19:46):
going outside into an LLM that's not well protected, or to a site
that's not well protected.
So you're sort of at the behest of the controls that are in
place with the third party.
I mean, there are a million different challenges with AI,
and especially the model change over change, right?
So I had this model, I tested it and I verified it.
(20:07):
Maybe we trained against it. But
these updates are coming so quickly.
How do you verify that the new model that you take isn't going
to blow away all of the imperfections that you, sort of,
you know, wrung out over the last four months, or
three months, or two weeks, right?
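One way to operationalize the "model change over change" worry Ed describes is a regression suite: before adopting a new model version, re-run the cases you already wrung the failures out of. A minimal sketch of that idea, where the prompts, checkers and the `evaluate` callback are all hypothetical placeholders, not anything from the episode:

```python
from typing import Callable

# Hypothetical regression suite: each entry pairs a prompt with a checker
# encoding a past failure that was fixed against the current model version.
REGRESSION_SUITE = [
    ("Summarize this discharge note ...", lambda out: "allergy" in out.lower()),
    ("List the patient's active medications ...", lambda out: len(out.strip()) > 0),
]

def safe_to_upgrade(evaluate: Callable[[str], str]) -> bool:
    """Return True only if the candidate model passes every past fix.

    `evaluate` stands in for whatever call invokes the new model version;
    a single regression is enough to block the upgrade.
    """
    return all(check(evaluate(prompt)) for prompt, check in REGRESSION_SUITE)

# Usage: gate the rollout on the suite instead of trusting the new version.
fake_model = lambda prompt: "Allergy: penicillin. Medications: lisinopril."
print(safe_to_upgrade(fake_model))
```

The point is the gating discipline, not the stub: the suite grows with every fixed failure, so each model update is verified against the accumulated history rather than trusted on arrival.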
(20:28):
So there's that, and there's the thing I've been worrying about.
I'll share it with you guys.
You're hearing it for the first time.
All right, okay, here we go.
This is the thing that bothers me the most.
(20:51):
It's the fact that we haven't seen more stuff happen in a bad
way to the industry, in a systemic way, right? The fact
that we're not seeing something really big, that we haven't heard
about some really big coordinated attack yet.
And the reason I worry about that is because it feels like,
over the last six months, things have actually been pretty quiet.
Yeah, you get your data breach here, you get your event there,
you get some ransom or whatever.
Speaker 4 (21:13):
Are we becoming numb
to it, Ed?
Speaker 2 (21:15):
No, no, no, no,
we're not numb to it.
Come on, Eric, come on, man.
No, we're not numb to it.
That's why my spidey senses are going,
like, what are they doing right now?
Like, what's going on?
You know, it's that silence before the big assault.
(21:37):
It's the Tet Offensive, right?
Oh, everyone's off celebrating Tet, right?
Speaker 3 (21:44):
No, they're just
preparing, right.
And with that spidey sense, what's coming down the pipe?
What are the spidey senses telling Ed?
Speaker 2 (21:52):
I just feel like it's this
major coordinated attack across
more than one critical infrastructure.
I think it's at that level. If you think about the evolution
of these attacks, right, they were individual,
and oftentimes it was, you know, some kid trying to
figure out some new toolkit. They weren't coordinated.
(22:15):
And then, about five, six, seven years ago, they
started to become more organized.
Right, we all know organized crime, but organized crime as
we know it never looked like this.
Now, all of a sudden, they're applying this concept of
microservices to the actual process of
ransomware.
Right? Somebody collects the money, somebody creates the
virus, somebody sends it out.
Everyone has a different job, versus one person doing
everything.
So that was the first thing that was like, whoa, okay,
they're leveraging technology now in an organized way.
That's scary.
(22:55):
And we saw the big spike in ransomware, and now I feel like
it's kind of leveled off a little, gotten quiet. And
yet we have this unbelievable tool called AI.
What are they doing with it?
We're still seeing stuff; it's not like everything has
gone quiet, but it's just eerily quiet for me.
So if I think about exponential step-function attacks, which is
(23:20):
what we've seen, what would be a step function?
A step function would be organized in a way that takes
out multiple critical infrastructures at once.
Speaker 4 (23:29):
That would be bad.
Going back to Josh's question, when he was asking what are they
really going after, and this is a conversation we've had quite a
bit internally and with our customers too: for the threat
actors, wherever they are from, whoever they are, there's really
(23:50):
only three things that they could be going after.
One would be some form of hacktivism.
Another would be some form of money, right? They're going after a
way to get money: ransomware, what have you.
There's a monetary motivator there.
And then the third and final is the nation-state, where they're
(24:14):
really looking, you know, nation-states also do financial, but also
to disrupt critical infrastructure.
So if you break it down to those three things and then you
figure out where your organization fits in that, right?
If you're making shoes, well, they're probably not going to go
after you to disrupt critical infrastructure, but there could
(24:35):
certainly be a ransomware or monetary component to it.
Follow that tree back to what you were saying of, like, what is
next? I've been thinking around the financial space,
particularly the crypto space, and how actively crypto is
(25:01):
trading now, especially with the Bitcoin boom, where, what is it,
in the last 90 days Bitcoin has just about
doubled.
So when we look at that: you've got to
spend money, time and resources, if you're a threat actor, to
target an individual or an
(25:24):
organization. You know, it's just not going to happen by osmosis.
You've got to put energy into it to get through their systems,
their people, what have you.
Are they going to be going after systems that they can
ransom, or are they going to be spending that time and effort
trying to get into somebody's Bitcoin wallet, because the cash
(25:46):
reward there may be more advantageous in the short term?
Speaker 2 (25:51):
I actually have been thinking
about this this week.
It's kind of weird that you're talking about Bitcoin.
But you asked the question, Joshua, about why healthcare,
and I gave you one answer.
The other answer is it's easy, right? It was an easy target,
right?
And I think Bitcoin is not so easy.
So I think there are people that are testing it.
(26:13):
But here's what's happening.
It's more physical assault, right?
They're finding out who's a holder of Bitcoin, and then
they're actually kidnapping, and that's real and that's scary.
A coordinated attack on Bitcoin, though? It's distributed.
That would be really hard.
I guess they'd have to hit some of the major servers, but then
they only take out sort of that area and not so much everything
(26:35):
else, and, you know, that's an interesting attack
vector to consider. Mathematically,
it's been more through, like, Binance or Coinbase,
or just directly with users, or social engineering.
Speaker 1 (26:48):
Yeah, yeah. I had to
do some moving around of some
crypto, and when you go to view your,
you know, your passcode,
it asks you, like, are you okay, you know?
Are you being ransomed right now?
Speaker 4 (27:01):
Yeah, no, it's true.
What's your coin?
What's your wallet address, Josh?
Speaker 1 (27:05):
Yeah, to tie
this all in together:
so, risk never sleeps, right?
We're at kind of a 24/7 model.
I'm curious how the healthcare industry is
approaching that 24/7 security model, and maybe what
other industries could learn from that.
I know Eric and Nick have joked about stories of, you know, 3 a.m.
(27:27):
help desk jobs, being on call
at night, you know, some of our more
junior members getting
their teeth cut on that on-call
kind of status.
So what does the approach look like compared to what it did in
the past, now that we're operating on a 24/7 model?
Speaker 2 (27:45):
Great question. I
think, you know, up until probably about
three years ago, it was all in
the Identify, Detect, Protect part of the NIST CSF,
if I can take the framework and just sort of decouple it
to answer your question. Right, everyone was focusing on
identifying the risks, detecting them and then ultimately
(28:08):
protecting against them, not doing much
in the form of response or
recovery.
The other area is Govern now, which is part of the new
update to the NIST CSF. And I think we as vendors have always
known you can't have 100% security.
It's not possible.
Right, your facility would be unusable if it was 100% secure.
(28:33):
Your house would be unusable if it was 100% secure.
Right, you would never get Uber Eats to the door.
You'd never be able to get your food from the...
Thank you, Nick, thanks for that.
So if you don't have 100% security, you know you're going
to get hit.
(28:53):
It's not a matter of if,
it's a matter of when, which I
don't like to use, right? But then you have to reduce the aperture.
You've got to reduce the aperture of the "when" as much as
you can.
So you do what you can, and then you put as much investment in
respond and recover, and I think that's the shift that's
happened recently.
People are now putting more and more investment on the response
(29:13):
and recovery side.
Because if you get hit and it's a critical function and you
have to be up in hours, then make sure you're up in hours, and
start to think about it that way.
Not all products, not all vendors, not all applications or
devices are created the same, so tier them accordingly into
(29:35):
the critical functions that you need to operate your organization,
whether it's a hospital or a bank or an ice cream stand or
whatever it is, right?
Think about those applications that are critical, tier them, do
as much assessment around those, and then go to the next level,
high. Do maybe a little less, but still have that discipline and
(29:58):
rigor.
And then, when you get to the mediums and the lows, you can be
a little looser, right? But make no mistake, a low-tier vendor
can still cause you pain if they get through, if there's an
attack, right?
So I'm not saying don't assess, don't look at any vendor or
(30:20):
all vendors.
I'm saying use a tiered approach, because you don't have
infinite time, you don't have infinite resources or money.
Make sure you cover the first two buckets, right?
And if you do that, and you have corresponding continuity and
disaster recovery plans, and you've tested them,
you've tested your RTOs and your RPOs,
then if it goes down, you know you're going to recover, so you
(30:41):
can manage the downtime accordingly.
You can manage the impact you have on patient care.
You can manage a number of things better than you could a
decade ago.
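The tiering discipline Ed describes can be made concrete: classify each vendor or application by the criticality of the function it supports, then let the tier drive assessment rigor and recovery time objectives. A minimal sketch of that idea, where the tier names, classification flags and RTO hours are illustrative assumptions, not figures from the episode:

```python
from dataclasses import dataclass

# Illustrative tier policy: assessment rigor and recovery time objective
# (RTO) tighten as criticality rises. All hours here are made-up examples.
TIER_POLICY = {
    "critical": {"assessment": "full review, annual, with tested DR plan", "rto_hours": 4},
    "high":     {"assessment": "full review, annual",                      "rto_hours": 24},
    "medium":   {"assessment": "questionnaire, every two years",           "rto_hours": 72},
    "low":      {"assessment": "questionnaire at onboarding",              "rto_hours": 168},
}

@dataclass
class Vendor:
    name: str
    supports_critical_function: bool  # e.g., the EHR or ambulance dispatch
    handles_phi: bool                 # touches patient data
    network_connected: bool           # reachable from the clinical network

def tier(v: Vendor) -> str:
    """Map a vendor to a tier; even a 'low' vendor still gets assessed."""
    if v.supports_critical_function:
        return "critical"
    if v.handles_phi:
        return "high"
    return "medium" if v.network_connected else "low"

# Usage: the tier decides how much assessment and recovery rigor applies.
ehr = Vendor("EHR platform", True, True, True)
print(tier(ehr), TIER_POLICY[tier(ehr)])
```

The design choice mirrors the point about finite time and money: every vendor lands in some tier and gets some assessment, but the deepest scrutiny and the tightest RTOs are reserved for the functions the organization cannot operate without.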
Speaker 1 (30:51):
I think that's a
great spot to leave it.
Thanks so much for joining us today, Ed.
It's been a really stimulating conversation. We've been speaking
with Ed Gaudet from SenseNet and the
usual suspects, Nick Mellom and Eric Brown from IT Audit Labs.
My name is Joshua Schmidt, your co-host and producer.
Thanks for listening to The Audit.
Please like, share and subscribe, and leave us a review
on Apple Podcasts if you get a chance.
(31:13):
We also have video now on Spotify as well.
We'll catch you in the next one.
Speaker 4 (31:18):
You have been
listening to The Audit, presented
by IT Audit Labs.
We are experts at assessing risk and compliance, while
providing administrative and technical controls to improve
our clients' data security.
Our threat assessments find the soft spots before the bad guys
do, identifying likelihood and impact, while our security
control assessments rank the level of maturity relative to
(31:42):
the size of your organization.
Thanks to our devoted listeners and followers, as well as our
producer, Joshua J. Schmidt, and our audio-video editor, Cameron
Hill. You can stay up to date on the latest cybersecurity topics
by giving us a like and a follow on our socials and by
subscribing to this podcast on Apple, Spotify or wherever you
(32:03):
source your security content.