Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
Popular npm linter packages hijacked via phishing to drop malware.
Ukrainian CERT discovers LameHug malware linked to APT28 and using large
language models in a phishing campaign.
Microsoft says it has stopped using China-based engineers to support
Defense Department computer systems.
(00:21):
And an ex-IDF cyber chief explains why social engineering worries him more than zero days.
This is Cybersecurity Today, and I'm your host, David Shipley.
Before we get started, I just wanted to say thank you.
Last week Jim shared that Cybersecurity Today made the top 10 news podcasts in
(00:43):
Canada, according to the Feedspot list of Canadian news podcasts. We're also
counting down to our 10 millionth download, which likely happened over the weekend.
Wow.
I am so grateful to be part of this show and of so many of your routines
for the last few years, starting on the week and month in review panels,
(01:05):
and now as your Monday morning host.
It's an honour and a privilege.
Some of you may know that before I founded Beauceron Security, I was
a newspaper reporter, and being able to be a journalist again
on the side is so fulfilling.
So thank you for listening, and thank you, Jim, for inviting me on the ride.
Now let's get to the news, and boy do we have a lot to talk about.
(01:30):
One click, millions at risk. Again. Another week,
another open source supply chain mess.
This time with eslint-config-prettier, a JavaScript package downloaded
over 30 million times a week.
Yes, million with an M. So what happened?
The maintainer got phished.
A slick email spoofing support@npmjs.com lured them into giving up
(01:54):
credentials, and that's all it took.
Suddenly the attacker was inside the developer's npm account, publishing
malicious versions of eslint-config-prettier and eslint-plugin-prettier.
Just like that, the trusted became toxic.
These poisoned packages contained a post-install script, install.js, that looked
(02:17):
like it was checking for disk space, but surprise, it was actually running a DLL
via rundll32 on Windows systems. That DLL is a known Trojan, and it's still flying
under the radar of most antivirus tools.
Only about 19 of 72 engines on VirusTotal detected it, and developers
noticed something was off.
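The mechanism here is worth understanding: npm automatically runs lifecycle scripts such as postinstall when a package is installed, which is exactly how a file like install.js gets executed without anyone asking. As a quick audit, a hedged sketch like this (the helper name is ours, not from any official tool) can list which installed packages declare install-time hooks:

```python
import json
import pathlib


def list_install_scripts(node_modules):
    """Yield (package, hook, command) for lifecycle scripts npm runs on install.

    A sketch only: walks node_modules/*/package.json looking for the hooks npm
    executes automatically. Scoped packages (@scope/name) sit one directory
    deeper and are skipped here for brevity.
    """
    hooks = ("preinstall", "install", "postinstall")
    for manifest in pathlib.Path(node_modules).glob("*/package.json"):
        try:
            scripts = json.loads(manifest.read_text()).get("scripts", {})
        except (OSError, json.JSONDecodeError):
            continue  # unreadable or malformed manifest; skip it
        for hook in hooks:
            if hook in scripts:
                yield (manifest.parent.name, hook, scripts[hook])
```

Anything this surfaces isn't automatically malicious, since plenty of legitimate packages compile native code on install, but it gives you a short list worth eyeballing.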
(02:37):
The GitHub repo hadn't changed, but the npm registry showed new versions:
eslint-config-prettier 8.10.1, 9.10.1, 10.1.6 and 10.1.7,
and eslint-plugin-prettier 4.2.2 and 4.2.3.
(02:58):
No changelogs, no commits, just malware sneaking in under
the guise of business as usual.
Now, to his credit, the maintainer, JounQin, came clean quickly.
This is awesome. It's not about blame, it's not about shame.
It's quickly telling people something bad happened, and that's exactly what he did.
Thank you. Quote,
(03:19):
I've deleted the npm token and will publish a new version ASAP.
Thanks all and sorry for my negligence.
End quote.
The important part: telling people. This incident is part of a disturbing trend.
In March, over 10 major npm libraries were compromised and turned into infostealers.
Last month, 17 GlueStack packages were hijacked to deliver a remote
(03:41):
access Trojan. And the common thread? Phishing and credential theft.
This isn't just about bad code making its way into the supply chain.
It's about people being targeted.
Maintainers are overwhelmed, volunteers for the most part, or they're
small teams, and the open source ecosystem runs on trust and goodwill.
(04:02):
And that trust can be shattered with one phishing email and one click.
So what now? Avoid the bad versions that we've talked about.
Check your package-lock.json or yarn.lock for any signs of those versions.
Audit your CI/CD pipelines and runtime environments for any suspicious
activity, especially on Windows.
(04:24):
Rotate any credentials or secrets that may have been touched by
compromised builds, and assume any other packages from the affected
maintainer may also be compromised.
Review them.
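Checking a lockfile for those versions can be scripted. A minimal sketch, assuming an npm v2/v3-style package-lock.json that keys entries under "packages" (the function name is our own, not from any official tooling):

```python
import json

# The compromised versions discussed in this episode.
BAD_VERSIONS = {
    "eslint-config-prettier": {"8.10.1", "9.10.1", "10.1.6", "10.1.7"},
    "eslint-plugin-prettier": {"4.2.2", "4.2.3"},
}


def find_bad_packages(lockfile_path):
    """Return (name, version) pairs in a package-lock.json matching known-bad versions."""
    with open(lockfile_path) as f:
        lock = json.load(f)
    hits = []
    # Lockfile v2/v3 keys each entry by its node_modules path; strip the
    # prefix to recover the package name.
    for path, meta in lock.get("packages", {}).items():
        if not path:
            continue  # the "" key is the root project itself
        name = path.rsplit("node_modules/", 1)[-1]
        version = meta.get("version")
        if version in BAD_VERSIONS.get(name, set()):
            hits.append((name, version))
    return hits
```

An empty result doesn't prove you're clean, of course; it only means your pinned versions aren't on this particular list.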
This is yet another warning bell about the fragility of our software supply chain.
It's time we all got serious about maintainer security, and that means
(04:46):
helping them, not just blaming them. They need multifactor authentication,
tighter controls, and maybe some actual funding and support.
Because let's face it, if a single phished developer can turn 30 million
downloads into a malware dropper, we've all got bigger problems.
Now, let's turn to another story about social engineering.
(05:09):
Just when you thought the AI hype cycle couldn't get any weirder.
Now we've got large language models helping deliver malware.
Ukraine's Computer Emergency Response Team, CERT-UA, is warning
about a new phishing campaign tied to none other than APT28.
APT28 is a Russian state-sponsored hacking group with a long rap sheet.
(05:30):
The malware in question is a Python-based payload called LameHug, and
the twist? It taps into Qwen 2.5-Coder-32B-Instruct, a large language
model from Alibaba Cloud, to dynamically generate and execute commands
based on plain-English prompts.
That's right, it's malware, now with a chatbot sidekick.
(05:54):
On July 10th, Ukrainian officials started seeing spoofed emails that looked like
they came from government officials.
Inside was a ZIP file loaded with three suspicious payloads.
The files contained the LameHug malware, which uses Hugging Face's API to talk
to the LLM and generate commands like
gathering system info, scanning user folders for text and
(06:16):
PDF files, and sending stolen data via SFTP or HTTP POST.
It's not clear how successful this campaign was, but it's the
methodology that grabs our attention.
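The pattern CERT-UA describes is simple but important: the operator's tasking stays in plain English, the model translates it into commands at runtime, and no suspicious command strings need to be hardcoded in the payload. Here's a stubbed, illustrative sketch of that control flow; the function names and canned responses are ours, and no real API is called:

```python
def query_model_stub(task):
    """Stand-in for a hosted LLM inference call (e.g. an API like Hugging Face's).

    Canned responses keep this sketch runnable offline; the real malware
    sends the prompt over the network and gets back model-generated text.
    """
    canned = {
        "gather system information": "systeminfo",
        "list text and pdf files in user folders":
            "dir /s %USERPROFILE%\\*.txt %USERPROFILE%\\*.pdf",
    }
    return canned.get(task.lower(), "rem unrecognized task")


def generate_command(task, query_model=query_model_stub):
    """Turn a natural-language task into a shell command string via the model.

    The defensive takeaway: the commands never appear in the binary itself,
    so static signatures on command strings won't catch them.
    """
    return query_model(task).strip()
```

That indirection is the whole trick, and it's why detection has to focus on behavior and on unusual outbound traffic to ML inference endpoints rather than on the payload's contents.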
This is a new era for command and control, and it's one that was highlighted
earlier this spring by researchers who talked about compromising large language
(06:40):
model safeguards to do similar things.
In this case, it's about exfiltration of information.
In previous examples, it was about command and control for a self-propagating worm.
Now, by blending into legitimate AI infrastructure like Hugging
Face, attackers are doing what they've always done best.
They're hiding in plain sight, just like they've abused Dropbox,
(07:02):
Google Docs, or GitHub before.
Now they're slipping past defenses under the cover of machine learning APIs.
And this isn't just a one-off. Check Point recently uncovered another
piece of malware called Skynet
that tried to trick AI-based security tools using prompt injection,
basically telling the AI to ignore its rules and pretend
(07:22):
it's a calculator instead.
It didn't work this time, but you can bet money these kinds
of attacks are going to get better.
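For defenders, even a crude pre-filter illustrates the problem space: content headed into an AI analysis tool can itself contain instructions aimed at the model. A heuristic sketch, where the pattern list is purely illustrative since real injection attempts vary far beyond any fixed set of phrases:

```python
import re

# Illustrative phrases that read like instructions to a model rather than
# ordinary file content. A fixed list like this is trivially evaded; treat
# it as a demonstration of the concept, not a defense.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are (now )?a calculator",
    r"disregard your (rules|guidelines)",
]


def looks_like_prompt_injection(text):
    """Flag text that appears to carry instructions aimed at an AI analyzer."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

The deeper fix is architectural: keep untrusted content clearly separated from the model's instructions rather than trying to pattern-match every possible jailbreak.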
Check Point had an interesting quote in their report, and I want
to give them credit for it. Quote,
First we had the sandbox that led to hundreds of evasion techniques.
(07:42):
Now we've got AI malware auditors.
Naturally, that means hundreds of AI audit evasion techniques are coming.
End quote.
What does this mean for all of us?
Let's connect some dots. State-sponsored groups are experimenting
with large language models to create adaptive, stealthy malware.
Open source AI models and public APIs are being hijacked for malicious use, and AI
(08:04):
based defenses are part of the attacker threat model. This is the beginning of
the AI-versus-AI era in cybersecurity.
Organizations need to ask tough questions about how AI tools are
integrated into their environment,
whether threat detection systems can spot the abuse of legitimate
cloud services, and how much trust we're putting into automation
(08:25):
that can be tricked with a cleverly crafted sentence. You don't need zero
days when you can trick a chatbot into running or ignoring malicious code.
And once again, all of this starts with a phish.
Now, when I first saw this next headline Saturday morning, well, I can't share in
(08:45):
a family-friendly program exactly what the first sentence that ran through my
head was. I mean, it did start with "what the," but anyway, my next thought
on the headline went something like this.
Microsoft has confirmed it's no longer using engineers based in China to support
US Department of Defense cloud systems,
(09:06):
following a bombshell investigation by ProPublica that
exposed a deeply flawed setup.
For years, Microsoft used US-based, quote, digital escorts, end
quote, contractors with security clearances, to act as go-betweens.
The real technical work came from Microsoft engineers in places like
(09:28):
China, India, and the EU, who told escorts what commands to run on
the Pentagon's cloud infrastructure, sometimes with barely any oversight.
And those digital escorts? Well, they weren't always trained to thoroughly
review what was being provided to them.
They were often outgunned and under-prepared, told to copy and paste
(09:50):
instructions from foreign-based engineers directly into the US federal cloud,
with no clear way of verifying whether the commands were safe or malicious.
And this news comes as Salt Typhoon, the Chinese APT that ran through
global telco networks, was revealed to have compromised US military
networks, particularly the National Guard.
(10:12):
The DOD issued an alert to all military networks to assume breach and to
start doing deep investigation work.
Since 2011, the US government has required that people working with federal
data have the right authorizations: US citizens or permanent residents with
background checks.
Microsoft, chasing cloud contracts, built a workaround, using US escorts to front
(10:37):
for more technically skilled but foreign-based engineers, including
China-based engineers, some of them working from known adversary territory,
feeding commands into Department of Defense systems indirectly,
but potentially with impact.
And the fallout came fast last week.
Microsoft's Frank Shaw posted Friday that, quote, no China-based engineering
(11:01):
teams, end quote, would be allowed to support DOD cloud services going forward.
US Defense Secretary Pete Hegseth responded bluntly on X: quote, foreign
engineers from any country, including of course China, should, all caps,
NEVER be allowed to maintain or access DOD systems.
(11:21):
End quote.
US Senator Tom Cotton called for an investigation, citing China as
one of the most aggressive and dangerous threats to US critical
infrastructure and supply chains.
Now, let's be clear: when a company entrusted with safeguarding national
defense systems takes a just-trust-us approach to foreign access, that's a
failure of leadership, not just logistics.
(11:43):
Microsoft's defense: digital escorts were trained and cleared,
engineers had no direct access to the data, and internal controls like
Lockbox would flag bad requests.
Here's the reality.
If the digital escorts were just copying and pasting stuff from foreign
engineers that they didn't understand, they were the equivalent of Kermit the Frog.
(12:06):
Somebody else was doing the talking. I mean, the coding. If you're copying and
pasting commands from a nation-state adversary, you've already lost the plot.
There is a dangerous myth in cybersecurity that advanced zero-day
exploits are what we should fear most. The truth is, it's people, not
payloads, that are the real targets.
(12:28):
Two groups prove it.
Scattered Spider, the financially motivated crew of mostly young native
English speakers, and Iranian state-backed threat actors who've made a habit of
punching far above their technical weight.
And what do they have in common?
They're masters of social engineering.
And in today's threat landscape, that's worth more than a dozen zero days.
(12:51):
Take Iran's 2020 attack on Israeli insurer Shirbit.
They didn't use cutting-edge tools or NSA-grade exploits.
They tricked their way in, stole highly sensitive data, including
info tied to Israel's Defense Ministry, and then blasted it out online
for maximum psychological impact.
That was their win condition: not just the breach, but the humiliation of one
(13:14):
of the world's most elite militaries, and certainly one of the most sophisticated
countries when it comes to cyber defense.
They wanted fear, chaos.
It wasn't about breaking systems, it was about breaking confidence.
And now, thanks to generative AI, the kind of social engineering used by
Iran, by Scattered Spider, and others is cheaper and more scalable than ever.
(13:39):
Ariel Parnes, a former officer in Unit 8200, Israel's elite cyber unit, said
this to The Register: quote, this is what worries me more than zero days.
End quote. He's not wrong.
Now, AI isn't the enemy, but it is giving attackers a serious upgrade.
Today's attackers can use LLMs to generate personalized phishing campaigns,
(14:03):
and for those campaigns, they can produce fake resumes, spoofed
LinkedIn accounts, convincing emails, entire websites.
And they can do this work in seconds.
Forget weeks of manual reconnaissance.
Just point an AI at a target's social media, and out comes a dossier:
friends and coworkers, hobbies and organizations, likely hooks that
might lure them in, language, tone,
(14:24):
and even emojis that could be used.
It's not theory.
Google has seen Iranian hackers using Gemini for this exact purpose. It's about
understanding human behavior, exploiting trust, and weaponizing communication.
And right now, no one's doing it better than Scattered Spider.
This is a crew that successfully breached major US and UK retailers, insurers,
(14:48):
and more, using their fluency in the English language, cultural awareness,
a bit of research on their targets, and well-practiced social engineering.
In some cases, they're teaming up.
Iranian threat actors are already adding ransomware and influence ops to their
toolkit, and they're collaborating with groups like ALPHV/BlackCat, buying stolen
(15:10):
credentials from crews like Scattered Spider, and expanding what they can
do with limited technical resources.
Neither Iran nor Scattered Spider has the most advanced cyber weapons,
but maybe they don't need them.
When you can get inside a network just by being convincing, you don't need to
spend years developing exotic exploits.
(15:31):
You need a bit of intel, some charm, and an AI that can write
better emails than most people.
Some of this is about more than just stealing data, it's
about psychological impact.
Take the Iranian runs at various US water utilities and fuel systems.
They haven't been that successful, but they've generated a lot of fear,
(15:52):
and if they were successful, the psychological impact, not to mention
the safety impact, would be huge.
We need to make sure that we're building resiliency to social
engineering through education,
done frequently enough to keep people aware that yes, they could be a
target, they can fall victim, and here's what they need to do when they
(16:15):
fall victim: tell somebody about it.
We need to teach employees not just how to spot phishing, but
why they're being targeted.
And we need to focus on identity, access, and culture as much as
firewalls and patch management.
Let's stop obsessing over zero days and start focusing on zero trust for human behavior,
(16:36):
because that's where the fight is heading.
As always, stay skeptical and stay patched.
If you like the show, tell others.
Maybe give us a rating or leave a review on your favorite podcast platform.
We'd love to grow our audience even more, and we need your help.
I've been your host, David Shipley. Jim Love will be back on Wednesday.
(16:57):
As always, thanks for listening.