All Episodes

June 11, 2025 19 mins

69% of enterprises cite AI data leaks as their top concern, yet 47% have no security controls. This isn't just a gap—it's organizational cognitive dissonance at enterprise scale.

Mark as Played
Transcript

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome to the deep dive. We take the sources you've

(00:02):
gathered, articles, researchnotes, and really try to cut
through the noise to find theknowledge that matters.

Speaker 2 (00:08):
Exactly. Our goal is to give you that shortcut,
uncover some surprising facts,connect the dots so you can feel
genuinely well informed onthese, often complex topics.

Speaker 1 (00:17):
And today we are diving deep into attention
that's really defining thecurrent enterprise landscape.
It's this paradox of, well,incredibly rapid AI adoption
bumping up against maybe asurprising lack of basic
security readiness.

Speaker 2 (00:34):
Yeah. You absolutely see this disconnect highlighted
across the sources you broughtin. It paints a pretty clear
picture of the challenge, thatbusinesses are grappling with
right now.

Speaker 1 (00:43):
So let's maybe set the scene a bit. On one hand,
the adoption rate is just it'swarp speed. Right? OpenAI
recently announced they've hit3,000,000 paying business users.

Speaker 2 (00:52):
Wow, 3,000,000.

Speaker 1 (00:53):
Yeah, up from 2,000,000 not that long ago.
That's a huge number ofcompanies and it includes those
in really highly regulatedsectors, think finance,
integrating AI tools into theirday to day.

Speaker 2 (01:03):
And look, that growth is fantastic from an innovation
standpoint, productivity too.But the other side of the story,
the one told by reports like,the BigID AI Risk and Readiness
Report, well, it's a starkcontrast.

Speaker 1 (01:16):
Right. This is where the paradox really bites, isn't
it? You've got companies rushingto embrace AI for all its
potential, but are they actuallyready for the risks that come
with it?

Speaker 2 (01:25):
The data from that report is pretty eye opening,
actually. There are top securityconcerns cited for 2025. It's AI
powered data leaks feared by amassive, 69% of organizations
they surveyed.

Speaker 1 (01:37):
Okay. 69. So companies are very aware that
their data is on the line withAI. That sounds like a good
thing. Right?
They see the problem.

Speaker 2 (01:44):
You'd think so, wouldn't you? But then, the very
same report reveals that despitethis, you know, near unanimous
concern about data leaks, almosthalf of these organizations,
47%, have zero AI specificsecurity controls currently in
place.

Speaker 1 (01:58):
Wait. Half? Half the companies worried sick about AI
leaks have absolutely nospecific defenses. That feels
like stepping onto a highwaywhile you're busy looking at the
map.

Speaker 2 (02:06):
It really does. And it brings up a fundamental
question that, you know, thisdeep dive is going to tackle.
How do you square that circle,that high level of risk
awareness with such a, frankly,low level of protective action?

Speaker 1 (02:18):
So, okay, our mission today is clear. We need to use
your sources to really dissectthis paradox. We'll look at why
this gap is so wide, examinesome real world examples where
things have gone wrong.

Speaker 2 (02:31):
Right. Understand the specific threats targeting the
AI systems themselves.

Speaker 1 (02:35):
And explore what companies can actually do about
it. How can they bridge thissecurity chasm?

Speaker 2 (02:40):
Okay. So let's dig a bit deeper into this
preparedness gap. Maybe usingthat big ID report as our guide
again. That 47% figure, the oneabout zero specific controls,
that's just one symptom.

Speaker 1 (02:51):
Okay. So what else does the report tell us about
where organizations are fallingshort?

Speaker 2 (02:55):
Well, visibility is a massive issue. Nearly two thirds
of organizations, that's 64%,they lack full visibility into
their AI risks. They often don'teven know where AI is being used
or, crucially, what sensitivedata it might be touching.

Speaker 1 (03:09):
And if you don't know where the risk is, you certainly
can't protect against it. Thatseems pretty basic.

Speaker 2 (03:14):
Exactly. And compounding that, almost 40% of
organizations flat out admitthey don't have the tools they
need to protect the data thattheir AI systems can access.

Speaker 1 (03:25):
So it's like a perfect storm.

Speaker 2 (03:26):
It really is. You lack the visibility and even if
you had it, you might lack themeans to actually act on it.

Speaker 1 (03:33):
That really paints a picture of a, well, a real lack
of maturity in AI securityacross the board, doesn't it?

Speaker 2 (03:39):
It absolutely does. The report suggests that only a
tiny fraction, just 6%, havewhat you'd call an advanced AI
security strategy or haveimplemented a recognized
framework like, AI Trism.

Speaker 1 (03:51):
AI Trisma.

Speaker 2 (03:52):
Yeah. It stands for AI Trust, Risk and Security
Management. So the point is mostorganizations are basically just
improvising or maybe just hopingfor the best.

Speaker 1 (03:59):
And we haven't even touched on the AI they might not
even know is being used insidethe company, that whole shadow
AI thing.

Speaker 2 (04:05):
Oh, that's a huge factor contributing to the risk
landscape. And yeah, it'smentioned in your sources.
Employees quite understandablyare eager to use these new AI
tools to boost their ownproductivity.

Speaker 1 (04:16):
Sure. Makes sense.

Speaker 2 (04:17):
But when they use public models or maybe
unapproved tools and potentiallyfeed them confidential company
data, Well, that just bypassesall the traditional security and
IT oversight completely.

Speaker 1 (04:28):
So why? Why this mad rush without the guardrails?
What's driving companies todeploy AI so fast when they seem
to know the risks but admit theyaren't prepared?

Speaker 2 (04:39):
It mostly boils down to intense competitive pressure.
Yeah. The need for speed.Businesses see this immense
potential for productivitygains, for enhancing all sorts
of functions, even the treasuryreport. You know, it highlights
AI's potential to actuallyimprove internal cybersecurity
and anti fraud capabilitieswithin financial services.
So the drive for efficiency, forinnovation, that's clearly the

(05:02):
primary engine here, andsecurity. It's just struggling
to keep pace with that velocity.Look, this isn't just some
theoretical problem that mighthappen down the road. Your
sources detail quite a few realworld incidents that really
illustrate the consequences ofthis security gap.

Speaker 1 (05:17):
Okay. This is where it gets really interesting and
maybe a bit concerning. Like,take the Samsung data leak back
in May 2023, employees usingChatGPT.

Speaker 2 (05:27):
Right. To help with tasks involving confidential
source code, meeting notes,things like that.

Speaker 1 (05:31):
Exactly. And without realizing it, they were
potentially sending thatsensitive data straight to an
external model. That led to asignificant leak, didn't it? And
a company wide ban on usingthose kinds of tools.

Speaker 2 (05:43):
Yeah. That was one of the early very public wake up
calls about unintentional dataleakage through just employee
use of these widely availableAIs. Yeah. It really showed how
easily sensitive info could justwalk out the door.

Speaker 1 (05:54):
Then we have those instances of chatbot
manipulation, particularly withthe large language models, the
LLMs used for customerinteraction.

Speaker 2 (06:02):
Yes. These really show the vulnerability to things
like prompt injection, theChevrolet chatbot incident. That
was wild, tricked into agreeingto sell a car for a dollar back
in December 2023.

Speaker 1 (06:14):
A dollar. And what about Air Canada?

Speaker 2 (06:16):
Right. February 2024. Their chatbot misinterpreted a
policy, promised a bereavementfair refund that Air Canada
later argued it wasn't obligatedto give. But guess what? A
tribunal actually upheld thechatbot's promise the company
had to pay.

Speaker 1 (06:32):
So the chatbot's mistake cost them real money.
And there was that DPD deliverychatbot too. Right? January
2024?

Speaker 2 (06:38):
Yeah. Prompted to go completely off script. It even
wrote a poem about how bad thecompany service was. Humorous?
Maybe.
But it highlights the risks whenthese models interact directly
with the public without reallyrobust controls in place.

Speaker 1 (06:50):
But the issues go deeper than just customer
service chats, don't they? Thereare risks around internal data,
even the potential for companydata to influence the AI models
themselves.

Speaker 2 (06:58):
Definitely. Amazon employees were reportedly warned
back in January 2023 aboutsharing confidential info with
ChatGPT. Why? Because theystarted noticing the LLM's
responses seemed oddly similarto their internal data.

Speaker 1 (07:13):
Uh-oh, suggesting it might have been used in the
training.

Speaker 2 (07:16):
Exactly. One estimate put the potential loss or risk
from that single incident atover a million dollars. It
reportedly contributed to amassive drop in Alphabet's stock
price. We're talking over a$100,000,000,000 just
illustrating the sheer financialimpact of inaccurate AI output.

Speaker 1 (07:32):
A $100,000,000,000 from one piece of incorrect AI
information. The ripple effectis huge.

Speaker 2 (07:39):
It really is. And then you have instances like
Snapchat's My AI back in August2023, where the chatbot
apparently gave potentiallyharmful advice or just really
unusual weird responses. Thishighlights critical issues
around safety, reliability, andcontrolling AI behavior,
especially when it interactswith maybe more vulnerable
users. Now let's shift gearsslightly. We've talked about

(08:00):
incidents from accidental use orsimple manipulation.
But AI is also becoming apowerful tool for attackers.
It's creating entirely new typesof sophisticated fraud. The
Treasury report really digs intothis, focusing on finance, but
the implications are muchbroader.

Speaker 1 (08:13):
Okay, advanced AI enabled fraud. What does that
actually look like in practice?

Speaker 2 (08:17):
Well, identity impersonation is a major area of
concern. Deepfakes, these highlyrealistic but totally fake audio
or video clips powered by AI aremaking traditional ID
verification methods much, muchharder to trust.

Speaker 1 (08:29):
Like using someone's voice, maybe a CEO's voice to
authorize a money transfer.

Speaker 2 (08:34):
Precisely. That CEO voice scam resulted in a
$243,000 loss. That's a chillingreal world example, but it's
evolving incredibly fast.Remember the Hong Kong CFO
incident?

Speaker 1 (08:47):
Vaguely. Refresh my memory.

Speaker 2 (08:49):
An employee transferred

Speaker 1 (08:59):
25,000,000 based on a fake video call. That's
terrifying. The visual, theauditory cues we instinctively
rely on are being fundamentallycompromised by AI.

Speaker 2 (09:09):
They really are. And then there's something called
synthetic identity fraud. Thisis where criminals create
entirely fake identities. Theyoften use a mix of real stolen
data fragments and completelyfabricated information.

Speaker 1 (09:21):
And they use these fake identities for what?

Speaker 2 (09:23):
To open bank accounts, apply for credit
cards, secure loans, and thenjust disappear.

Speaker 1 (09:27):
And AI makes it easier to create these fake
identities, more convincingones.

Speaker 2 (09:31):
Potentially, yes. It can enhance the ability to
generate realistic soundingpersonal details or maybe
combine disparate data points inways that just slip past
traditional fraud detectionmethods. Your sources mentioned
that synthetic identity fraudcosts financial institutions
billions every year. Oneanalysis suggested
$6,000,000,000 annually. And therecent report showed an 87%

(09:55):
increase over just two years insurveyed companies admitting
they extended credit to thesesynthetic customers.
Overall synthetic identity fraudis up by like 17%.

Speaker 1 (10:04):
That's a massive growing problem and AI is
essentially pouring gasoline onthe fire.

Speaker 2 (10:09):
It kind of lowers the barrier to entry for criminals.
It reduces the cost, thecomplexity, and the time needed
to pull off these reallydamaging types of fraud.

Speaker 1 (10:17):
Okay. So we've covered incidents from security
gaps and AI being usedmaliciously by attackers. But
what about attacks on the AIsystems themselves? Are the AI
models vulnerable?

Speaker 2 (10:27):
Oh, absolutely. That's a critical point the
sources make. AI systems aredefinitely not these
impenetrable black boxes. Infact, because they rely so
heavily on data both fortraining and for operation, they
present really uniquevulnerabilities, new attack
vectors. The Treasury reportoutlines several specific
categories of threats thattarget AI systems directly.

Speaker 1 (10:49):
Alright. Let's break these down then. What kinds of
attacks are we talking abouthere?

Speaker 2 (10:52):
Okay. up is data poisoning. This is where
attackers deliberately corruptthe data that's used to train or
fine tune an AI model, or theymight even directly manipulate
the model's internal weights.

Speaker 1 (11:03):
And the goal is?

Speaker 2 (11:04):
To impair the model's performance, make it less
accurate, or maybe subtlymanipulate its outputs in a way
that benefits the attacker.Think about say injecting biased
or incorrect information intothe data set that a hiring AI
uses.

Speaker 1 (11:16):
Or slipping nasty phrases into chatbot training
data.

Speaker 2 (11:20):
Exactly. It's like giving the AI a faulty education
so it learns all the wrongthings. Yeah. Or biased things.
Okay.
is data leakage. Duringinference, this is where an
attacker crafts specific queriesor inputs to an already deployed
AI model. Okay. To try andreveal confidential information
that the model learned duringits training phase.

Speaker 1 (11:39):
So basically, can trick the AI into spilling the
secrets it was trained on justby asking the right questions.

Speaker 2 (11:45):
Potentially, yes. You can be coaxed into divulging
information. It reallyshouldn't. is evasion attacks.
This involves crafting inputsspecifically designed to trick
the model into producing adesired but incorrect output
causing it to evade its intendedfunction.

Speaker 1 (12:01):
Ah, like those chatbot examples we talked about
earlier. Getting the AI to saysomething totally absurd or even
malicious using a clever prompt.

Speaker 2 (12:09):
Yes. Those are often forms of evasion, usually via
prompt injection techniques. Andfinally, there's model
extraction. This is where anattacker essentially steals the
AI model. Itself.

Speaker 1 (12:19):
Steals the model? How?

Speaker 2 (12:21):
By repeatedly querying it and then using the
responses to reconstruct afunctionally equivalent model.
Basically reverse engineeringthe AI based on its observable
behavior.

Speaker 1 (12:30):
Wow. And why would they do that?

Speaker 2 (12:31):
Well, it allows attackers to steal valuable
intellectual property. The modelitself might be worth a lot. Or
they could potentially use thestolen model to figure out
weaknesses and craft moreeffective attacks against the
original system. And it's, worthnoting that trusted insiders,
unfortunately, can also pose asignificant threat here given
their privileged access.

Speaker 1 (12:51):
Okay. This all sounds pretty challenging, frankly.
With AI adoption justaccelerating like this, how do
businesses even begin to catchup on the security side?

Speaker 2 (13:00):
The urgency is definitely there and it's
highlighted in your sources,especially around concepts
discussed during things likedata privacy week. Yeah. You
know, the average cost of a databreach is now substantial, I
think $4,880,000. Ouch. Yeah.
And customer trust. It'sincredibly fragile. One survey
found 75% of consumers said theyjust wouldn't buy from a company
they don't trust with theirdata.

Speaker 1 (13:21):
Mhmm.

Speaker 2 (13:22):
So proactive security isn't just nice to have. It's
essential for protecting boththe bottom reputation.

Speaker 1 (13:27):
Which really points to the importance of building
security in right from the verybeginning, doesn't it? Rather
than trying to, you know, boltit on later as an afterthought.

Speaker 2 (13:36):
Precisely. The sources really emphasize
integrating privacy and securitymeasures right into the initial
design phase, sometimes calledprivacy by design. It means
embedding security controlsthroughout the entire software
development life cycle, theSTLC, not just checking a box at
the very end.

Speaker 1 (13:55):
So not just when the shiny new AI tool is ready to
deploy, but from the moment youeven start thinking about
building it or bringing it in.

Speaker 2 (14:02):
Exactly right. And this requires using the right
tools for visibility, forsecurity, for governance across
all the AI tools being usedwithin the organization,
including that shadow AI wementioned. It means implementing
robust governance frameworksspecifically designed for AI
systems and having real timedetection mechanisms to spot
suspicious activity quickly.

Speaker 1 (14:22):
The Treasury report really seemed to underscore the
central role of data in all ofthis.

Speaker 2 (14:26):
It does. And it's absolutely fundamental. Data
governance is crucial. You needto understand and map the
complex data supply chain thatfeeds these AI models. Data is
the foundation of AI.
Right? Mhmm. So securing thatfoundation has to be the top
priority.

Speaker 1 (14:40):
Which naturally implies securing the software
supply chain too, especiallywhen you're bringing in party AI
tools, which most companies are.

Speaker 2 (14:47):
Oh, absolutely. Due diligence on party vendors is
critical. Organizations need tobe asking really tough questions
about their security practices.How do they protect your data
when it's used by their models?What measures do they have
against the kinds of attackswe've just discussed like data
poisoning or model extraction?

Speaker 1 (15:06):
And just going back to basics, strengthening
authentication methods seemslike vital step even if it
sounds simple.

Speaker 2 (15:12):
It is vital. The sources point out that relying
on simple passwords or even someolder multi factor
authentication methods like SMScodes, well, it's just not
sufficient anymore againstmodern threats. Organizations
really should adopt stronger,maybe risk based tiered
authentication and be cautiousabout disabling helpful security

(15:33):
factors like geolocation ordevice fingerprinting, which add
extra layers of verification.

Speaker 1 (15:38):
What about the whole regulatory environment? Is that
providing any clarity here ormaybe driving action?

Speaker 2 (15:44):
Well the regulatory landscape is evolving,
definitely, but it's complex andfrankly it's still playing catch
up. You have things like the EUAI act which is a major
development and bodies like theFSOC, the Financial Stability
Oversight Council, they identifyAI as a potential vulnerability
in the financial system, whichhighlights that regulatory
concern is there.

Speaker 1 (16:03):
But it's lagging behind the tech.

Speaker 2 (16:04):
Often. Yes. Regulators are trying to apply
existing risk managementprinciples to AI, but applying
those old rules to these novel,incredibly complex AI systems is
really challenging. And thesheer pace of change just makes
it hard for regulations to keepup effectively.

Speaker 1 (16:18):
And just to add another layer of complexity, the
sources mentioned somethingsurprising, a language barrier.

Speaker 2 (16:25):
Yeah. This is a really interesting point that
came up in the Treasury report.It seems there isn't yet uniform
agreement on even basic AIterminology. Terms like
artificial intelligence itselfor generative AI can mean
different things to differentpeople, different vendors. This
lack of a common lexicon, ashared dictionary, really

(16:45):
hinders clear communicationabout the risks, the
capabilities, the securityrequirements.

Speaker 1 (16:50):
So even just discussing the problem
effectively becomes difficultbecause everyone might be using
slightly different definitionswithout realizing it.

Speaker 2 (16:57):
Precisely. They even mentioned that terms we hear all
the time like hallucination forwhen an AI gives a false output
can actually be misleading. Itsort of implies the AI has some
form of consciousness or intent,which it doesn't. Yeah. There's
a clear need for a standardizedlexicon just to have effective,
productive discussions about AIrisks and responsible adoption.

Speaker 1 (17:16):
So let's try and wrap this up. The core paradox seems
really stark. We're seeing thisunprecedented speed in AI
adoption, and it's driven bythis undeniable promise of
productivity of innovation.

Speaker 2 (17:28):
But that incredible speed is just dramatically
outstripping the implementationof fundamental security controls
and strategies. It's leavingthis wide vulnerable gap.

Speaker 1 (17:38):
And crucially, this isn't some hypothetical future
problem we're discussing. Asyour sources clearly show, real
world incidents are alreadyhappening. Everything from
accidental data leaks.

Speaker 2 (17:48):
And easily manipulated chatbots.

Speaker 1 (17:50):
To incredibly sophisticated deepfake enabled
fraud, and even direct attackson the AI models themselves.

Speaker 2 (17:56):
And these incidents carry tangible costs, not just
financially, you know, in termsof the cost of breaches and
fraud, but also significantly interms of lost customer trust and
damaged company reputation.

Speaker 1 (18:08):
So understanding this paradox and actively working to
bridge that security gap seemsabsolutely critical for anyone
involved in enterprisetechnology today. Secure AI
implementation isn't just aboutticking compliance boxes or
avoiding disaster anymore.

Speaker 2 (18:22):
No, it's really essential for protecting the
very innovation that companiesare chasing in the place and
ensuring sustainable,trustworthy growth for the
future. It's about enabling theincredible power of AI without
letting the inherent risksundermine all that potential.

Speaker 1 (18:38):
Which leads us perfectly into our final thought
for you, the listener, toconsider. Given that the sources
we've explored todayconsistently highlight this
fundamental dependency of AIsystems on data for training,
for operation, for basicallyeverything.

Speaker 2 (18:53):
And given the increasing complexity of the
data supply chain that feedsthese systems.

Speaker 1 (18:57):
Plus the potential for sophisticated attacks like
data poisoning happening rightat the source, corrupting the
data before the AI even learnsfrom it. Ask yourself this: If
the very foundation of AI isdata, and that data is
manipulation, increasinglydifficult to trace or even
verify, can we truly trust theAI systems built upon it without

(19:18):
completely rethinking datasecurity from the ground up?

Speaker 2 (19:20):
How do we

Speaker 1 (19:35):
up.
Advertise With Us

Popular Podcasts

Las Culturistas with Matt Rogers and Bowen Yang

Las Culturistas with Matt Rogers and Bowen Yang

Ding dong! Join your culture consultants, Matt Rogers and Bowen Yang, on an unforgettable journey into the beating heart of CULTURE. Alongside sizzling special guests, they GET INTO the hottest pop-culture moments of the day and the formative cultural experiences that turned them into Culturistas. Produced by the Big Money Players Network and iHeartRadio.

Crime Junkie

Crime Junkie

Does hearing about a true crime case always leave you scouring the internet for the truth behind the story? Dive into your next mystery with Crime Junkie. Every Monday, join your host Ashley Flowers as she unravels all the details of infamous and underreported true crime cases with her best friend Brit Prawat. From cold cases to missing persons and heroes in our community who seek justice, Crime Junkie is your destination for theories and stories you won’t hear anywhere else. Whether you're a seasoned true crime enthusiast or new to the genre, you'll find yourself on the edge of your seat awaiting a new episode every Monday. If you can never get enough true crime... Congratulations, you’ve found your people. Follow to join a community of Crime Junkies! Crime Junkie is presented by audiochuck Media Company.

Stuff You Should Know

Stuff You Should Know

If you've ever wanted to know about champagne, satanism, the Stonewall Uprising, chaos theory, LSD, El Nino, true crime and Rosa Parks, then look no further. Josh and Chuck have you covered.

Music, radio and podcasts, all free. Listen online or download the iHeart App.

Connect

© 2025 iHeartMedia, Inc.