Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_00 (00:28):
And welcome to
Technology Tap.
I'm Professor J.
Rodney.
In this episode, part two of the top ten hacks of twenty twenty-five.
Let's tap in.
(01:15):
As I said in the previous episode, I'm not naming the companies. This actually could be a teachable moment for students and teachers out there. You can listen to it and try to do a little research and find out which company this affected. And it's one that's various, like the one for the learning management system of the school. There's actually a couple of schools. I know one that's local to me that that exact thing, you know,
(01:40):
that happened to.
So it may not just be one, itmay be more.
Anyway, for those of you who don't know me, my name is Professor J. Rod. I'm a professor of cybersecurity. And I love helping students pass their A Plus, Network Plus, and Security Plus exams. But every now and then I, you know, do a little twist on the topics. I like to talk about things like the history of technology, and now
(02:03):
I'm doing the top 10 hacks of 2025, part two. So let's kick it off with number five.
After attackers realized they can extract value without breaking anything, they started asking: what if we don't steal data at all? What if we steal something more valuable?
(02:25):
Time. That's hack number five. Healthcare. And the moment the attackers realized availability hurts more than breaches. Alright, let me start this one with something that usually makes the room go quiet when I say it out loud. Hospitals can survive data loss. They cannot survive confusion.
(02:47):
And in 2025, attackers finally understood the difference. For years, healthcare breaches followed a familiar script. Patient records stolen, database encrypted, HIPAA violations, press release. Everyone knew the playbook. So security teams hardened electronic health records, billing systems, and patient databases.
(03:08):
And they did a decent job, which is why attackers stopped going after those systems. Instead, they asked a different question. Not how do we steal data, but how do we stop the hospital from functioning? And that's a very different mindset. And the answer led them somewhere unexpected. Scheduling.
(03:28):
Let me slow this down because this is subtle. When people think hospital systems, they think patient charts, lab results, imaging. But none of that matters if you don't know who's coming in, when procedures happen, where staff need to be. Scheduling is the nervous system of healthcare, and nervous systems don't like being disrupted.
(03:50):
This didn't start with ransomware splashed across the screen. It started quietly. A low-privileged account, a phishing email that looked boring, someone clicking during a long shift. From there, attackers moved laterally, not fast, carefully, until they reached the scheduling systems. And here's what made this attack so efficient.
(04:13):
Scheduling systems integrate with everything, affect everyone, are time sensitive, and are often less protected. Encrypt a file and you can restore it. Encrypt coordination and chaos spreads immediately. Appointments disappear. Surgeries overlap. Staff don't know where to report.
(04:33):
And suddenly leadership wasn't asking about cybersecurity. They were asking, can we operate today? Remember the API attack? How nothing broke but value drained out? This is the opposite. Nothing was stolen, but everything stopped. Different tactic, same principle.
Let me say this the way I say it in class.
(04:55):
Availability is not secondary, it is operational survival. We teach confidentiality first, integrity second, and availability last. Attackers flipped that script. This attack didn't trigger immediate panic for one reason: no patient data was leaked, which meant slower legal
(05:15):
response, quieter media coverage, and delayed executive escalation. But internally, everything was on fire. Patients waited, doctors improvised, and nurses scrambled. And every minute increased the pressure. The ransom note didn't threaten exposure, it didn't mention data. It simply said restoration requires payment.
(05:38):
That was it.
No countdown timer, no threats.
Just an inevitability.
Let me ask you this (05:45):
if backups
exist, why didn't they just
restore?
Because restoring schedules isn't just restoring files, it's restoring trust and time. Which appointment is right, which version is accurate, which schedule is correct or current.
You can't easily fix that.
(06:06):
If this shows up on the exam, the question wouldn't ask what malware was used, it would ask what security principle was most directly impacted. And the answer wouldn't be confidentiality, it would be availability. You can lose data and recover; lose time, and people get hurt.
Because this is where cybersecurity stops being
(06:29):
abstract. And once attackers realized they could shut down operations without ever touching patient records, they asked another question.
(06:52):
Alright, before I tell the story, let me ask you something simple. When was the last time you hesitated before installing an update? Not a sketchy pop-up, not a random download, a real update from a trusted vendor. Signed. Right? Trusted. Exactly. The hesitation, or lack of it, is where this hack lives.
(07:14):
For years, security training told us: keep systems patched, install updates promptly, unpatched systems get breached. And that advice was, and still is, correct. But in 2025, attackers realized something subtle. If defenders are trained to trust updates, then updates become the delivery mechanism.
(07:35):
No deception is needed, just patience. The notification popped up like it always did. Same vendor, same branding, same digital signature. IT teams pushed it out automatically. Why wouldn't they? This was best practice. No one called it suspicious, no one delayed it for review, and
(07:55):
that's the moment trust became a liability.
Let me slow this down. Security teams spend enormous effort validating users, but how often do they validate software behavior after installation? Most environments check: is it signed, and is it from a known vendor? And then they stop asking questions.
(08:16):
Attackers counted on that silence. This wasn't about tricking customers. This attack happened upstream. The attackers didn't breach thousands of companies, they breached one: the vendor. Specifically, the build environment, the update pipeline, and the system that signs code. Once malicious code was added before signing, every downstream
(08:39):
system accepted it willingly.
No alarms, no warnings, because authentication was never the problem. Remember hack number 10, the voice call? That worked because people trusted authority. Remember hack number nine, the building system? That worked because the organization trusted vendors. This is the same pattern, just scaled.
(09:00):
Trust doesn't disappear, it compounds. Here's what surprised people. The update didn't encrypt files, didn't crash the system, didn't announce itself. It waited, it observed, it established quiet persistence, because loud malware gets removed. Quiet software becomes infrastructure.
(09:21):
From the endpoint's perspective, this was allowed behavior. The software had permission to run, it had permission to communicate, and nothing it did immediately violated policy. Which brings us to an uncomfortable realization. Most security tools are designed to stop unknown software, not trusted software behaving badly.
(09:43):
Let me say this plainly, because this is where students usually pause. Digital signatures prove who sent the code. They don't prove what the code will do. We confuse authentication with safety. Attackers exploit that confusion.
The attack wasn't discovered by antivirus, it was discovered by analysts comparing notes.
(10:05):
Different organizations noticed similar strange behavior, subtle things: unexpected outbound connections, processes that shouldn't exist but did. Only later did someone ask the right question. What if the update itself is the problem? That question took weeks to ask, and by then the damage was widespread.
(10:26):
If this shows up on the exam, the question wouldn't be, how did the malware get installed? It would be: which security risk arises when trusted third-party software is compromised upstream? And the answer wouldn't involve end users at all. It would be supply chain risk. A signed update proves origin, not intent.
(10:49):
Say that again. Because once you internalize it, you stop treating updates as unquestionable. And once the attackers realized they could spread silently through trusted software, they asked another question. What if we didn't attack the system at all? What if we attacked the people who fix the systems? This brings us to hack number three: casinos, help desks, stay
(11:13):
on the line urgency, and the front door no one thought was the front door.
Alright, before I tell you what happened here, let me ask you something I ask every semester. Where is the front door in your organization? Most people point to the firewall, the VPN, the login page. Almost no one says the help desk, and that's exactly why
(11:36):
attackers did.
Casinos don't work like normal businesses. They operate 24/7. Downtime costs money immediately. Customer experience is everything, which means one thing for IT. Speed matters more than caution. Help desk staff are trained to fix problems quietly, keep
(11:57):
operations moving, and avoid disrupting guests. Attackers studied that culture carefully. They didn't start with malware, they started with phone calls. Polite, urgent, frustrated. The caller sounded like an employee who couldn't log in, had guests waiting, needed access now.
(12:17):
And here's the important part. They sounded believable. Because the attackers had already done their homework. Let me stop here and underline something. Social engineering works best when it sounds boring. No drama, no threats, just inconvenience. Before the calls ever happened, attackers gathered information.
(12:38):
Not hacking tools, information. Employee names from LinkedIn, job roles from public postings, shift schedules from casual conversation, vendor names mentioned in forums. So when the help desk answered, the attacker didn't say, Hi, I need help. They said, hey, this is Mike from Table Services. I'm locked out again.
(12:59):
Same thing as last week. I'm on the floor and I've got guests waiting. That sentence does something powerful. It creates pressure. And pressure collapses process.
The help desk followed procedure. That's the uncomfortable truth. They asked a few questions, they verified what they could, they reset credentials. Because from their perspective, the caller knew internal
(13:21):
language. The request made sense. And denying access would cause immediate problems.
And this is where security theory meets reality. Policies don't survive urgency well. Remember hack number 10 and the AI voice call? Same weakness. Authority plus urgency equals compliance.
(13:43):
Different tool. Same human response. Once attackers had one account, things moved quickly. Because casinos, like many organizations, rely heavily on centralized identity systems, role-based access, and trust between departments.
One reset became another, another call, another employee
(14:04):
needed help. Within hours, access systems were confused, credentials overlapped, trust chains broke. Room keys stopped working, staff accounts conflicted, operations slowed, and suddenly leadership wasn't asking about cybersecurity. They were asking, why can't guests get into their rooms?
(14:25):
Let me say this plainly. They didn't bypass multi-factor authentication. They convinced someone to undo security on their behalf. And that's a very different kind of failure. Security logs didn't show brute force attempts, suspicious malware, or abnormal traffic.
(14:46):
They showed help desk activity, credential resets, account changes, all legitimate. Which means from the system's perspective, nothing was wrong. If this showed up on an exam, the question wouldn't be what exploit was used. It would ask which role represents a high-risk attack surface due to social engineering.
(15:06):
And the answer wouldn't be administrator, it would be help desk. The help desk isn't a support function, it is an access control system staffed by humans. Because once you see it that way, you stop treating help desk security as optional.
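If you want to picture the fix, here's a minimal Python sketch of a reset policy. The field names and rules are made up for illustration; the point is that urgency is treated as a red flag, never as a substitute for verification.

```python
from dataclasses import dataclass

@dataclass
class ResetRequest:
    claimed_name: str
    phone_in_directory: bool   # did we call them back at an HR-listed number?
    manager_confirmed: bool    # did a known manager verify the request?
    urgency_claimed: bool      # "guests are waiting" is pressure, not proof

def approve_reset(req: ResetRequest) -> bool:
    """A minimal policy sketch: urgency never substitutes for verification."""
    # Knowing internal lingo (names, shift times) proves nothing --
    # that's exactly what the attackers researched in advance.
    return req.phone_in_directory and req.manager_confirmed

# The casino call: believable, urgent, unverified.
mike = ResetRequest("Mike, Table Services", phone_in_directory=False,
                    manager_confirmed=False, urgency_claimed=True)
print(approve_reset(mike))  # False -- the pressure play fails closed
```

The design choice worth noticing: the policy fails closed, so a caller who can't be verified gets escalated, not reset.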
The incident didn't end with data theft, it ended with downtime, confusion, embarrassment, and a painful
(15:30):
realization. The attackers didn't need to break in, they just needed someone to help.
And once attackers proved they could talk their way past the fences, they asked an even bigger question. What happens if we don't need people at all? What happens if we get one account that controls everything? That's hack number two: the cloud, admin access, and the
(15:51):
morning entire environments disappeared.
Alright, before we talk about what happened here, I need to clear up one misconception. The cloud is not safer by default. It is faster by default, more convenient by default, more powerful by default, and power without restraint is dangerous.
(16:13):
Just ask Peter Parker, right? The incident didn't start with alerts, it started with confusion. An engineer logged into the cloud dashboard and froze. No instances, no storage buckets, no databases. At first, the assumption was simple. Dashboard glitch. Refresh, same thing.
(16:34):
Another engineer logged in, same emptiness. And then the realization hit, slowly, painfully. This wasn't an outage. This was deletion. Let me stop here and ask the question I always ask. Who has admin access in your environment? Really? Not who should, who actually does.
(16:57):
Because this hack lives in the gap between the two answers. There was no malware, no phishing email that morning, no brute force attack. The attacker already had what they needed. An admin token. And once they have that in the cloud, you don't break the rules. You use them.
(17:18):
This part is uncomfortable because it's mundane. The token didn't come from some elite hack, it came from a repository that was accidentally made public. A CI/CD pipeline log. A configuration file that wasn't supposed to be exposed. In other words, normal mistakes. And attackers are very good at watching for normal mistakes.
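This is why secret scanning exists. Here's a toy Python version with made-up token patterns; real scanners such as gitleaks or trufflehog ship far more thorough rule sets.

```python
import re

# Hypothetical token patterns -- illustrative only, not a real rule set.
TOKEN_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                  # AWS access key ID shape
    re.compile(r"(?i)admin[_-]?token\s*[:=]\s*\S+"),  # generic hardcoded token
]

def scan_text(text: str) -> list:
    """Flag lines that look like committed credentials."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        if any(p.search(line) for p in TOKEN_PATTERNS):
            hits.append(f"line {lineno}: {line.strip()}")
    return hits

pipeline_log = """\
build started
export ADMIN_TOKEN=sk-live-do-not-commit
deploy finished
"""
print(scan_text(pipeline_log))
```

Run it against pipeline logs and pre-commit hooks, because attackers are running the equivalent against everything you accidentally publish.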
(17:40):
They didn't start deleting things. That would have been loud. First they checked permissions, then they checked logging, and then they did something very smart. They turned the lights off. Audit logs disabled, monitoring reduced. Because if no one is watching, everything that follows looks like silence.
(18:01):
Remember hack number six, the APIs? Remember hack number seven, the sessions? Same pattern here. The attacker didn't need to exploit anything. They stepped into a role that already had authority. Only after everything was quiet did the real damage begin. Resources deleted. Not encrypted, deleted. Databases, compute, storage, and then backups.
(18:26):
Because in many environments, backups are protected by the same admin account, which means the safety net fell with everything else.
Let me say this plainly.
(18:59):
Which version is clean? Which account can even restore anything now?
Recovery took weeks.
Some data never came back.
If this shows up on the exam, the question wouldn't be what malware caused the breach. It would ask which misconfiguration allowed a
(19:20):
single compromised credential to cause total loss?
And the answer would be excessive privilege, a lack of least privilege.
Say that again. The cloud doesn't forget mistakes, it amplifies them.
Because once you understand that, you stop treating admin
access casually.
After this incident, organizations started asking new questions. Do we really need permanent admin accounts? Are backups isolated from production identities? Can logging be disabled by the same people it monitors?
Those questions came too late for this victim, but they
(20:02):
defined the future. And now we arrive at the final hack. Not because it's the most technical, not because it's the most expensive, but because it's the one every single other hack depends on.
Hack number one isn't malware, it isn't phishing. It's the human layer. And the year everyone finally admitted that humans were never
(20:24):
meant to be security controls.
Alright, before I say anything, let me say this plainly. Hack number one isn't a single incident. This wasn't one breach, this wasn't one company, this wasn't one headline. Hack number one was a pattern. And once you see it, you can't unsee it.
Think back through everything we talked about so far.
(20:47):
The phone call that sounded normal, the building system no one monitored, the library no one thought to secure, the LMS sessions that never expired, the APIs that answered too freely, the hospital schedules that collapsed, the trusted updates that betrayed everyone, the help desk that just wanted to help. The cloud admin account that erased everything.
(21:08):
Different industries, different systems, different tools. Same root cause. Someone trusted something they didn't verify.
Let me slow this down because this is where people usually push back. They'll say, well, users need better training, or people are the weakest link. I want to challenge that idea. Humans aren't the weakest link, they're the most overused
(21:31):
control. We ask people to do impossible things. Detect fake voices that sound real, identify malicious requests that look normal, question authority under pressure, follow perfect procedures during chaos.
And then we act surprised when they don't. In 2025, attackers didn't exploit ignorance, they
(21:52):
exploited expectations. They behaved exactly the way the systems and the people expected them to.
This year, defenders stopped saying, if only users had followed the rules, and started asking, why did the system allow this to depend on a person at all? That's a big shift, and it's an uncomfortable one. Because it means the problem isn't users, it's design.
(22:16):
Remember the line from the start of the first episode? Attackers in 2025 didn't break security, they operated inside it. This is what it means. Every hack worked because the system said, yes, that's allowed.
Let me give you the framing that ties this whole episode together. Cybersecurity used to be about keeping the bad people out.
(22:36):
In 2025, cybersecurity became about containing damage when trust is abused. Because trust will always be abused. That's not pessimism, that's realism. After these incidents, organizations started shifting how they think. Not overnight, not perfectly, but noticeably.
(22:57):
They started asking different questions. Why does one person have so much authority? Why does one login grant so much access? Why do we trust the system without watching it? And those questions are healthier than any tool. Security doesn't fail when people make mistakes. It fails when the system requires people to be perfect.
(23:20):
Because that's hack number one. If you're a student listening to this, you're not training to memorize tools. You're training to recognize patterns. If you're already working in IT, your job isn't just fixing systems. It's asking uncomfortable questions about trust. And if you're designing systems, never ask a human to be your last line of defense.
(23:40):
So when people look back at 2025 and ask, why were there so many breaches? The answer won't be hackers got smarter. It'll be this. We finally realized that trust, unchecked, unmonitored, unquestioned, was the most dangerous vulnerability we had. And once you understand that, you start building systems
(24:00):
differently.
Alright, that's gonna do it for this episode of Technology Tap. If this one made you uncomfortable, good. That means you're paying attention. Because the goal isn't fear, it's awareness. And the moment you start questioning trust, that's when security actually begins.
(24:20):
I'm Professor J Rod, and happy new year. I'm looking forward to what 2026 brings us all.
(24:43):
This has been a presentation of Little Cha Cha Productions, art by Sabra, music by Joe Kim. We're now part of the Pod Match Network. You can follow me on TikTok at Professor J Rod, that's J R O D, or you can email me at ProfessorJRod, J R O D, at gmail.com.