Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Deepfakes hit senior US government officials.
AI connects to enterprise systems.
Ready or not?
An update on Ingram Micro and Google's confusing Gemini release for Android.
This is Cybersecurity Today.
I'm your host, Jim Love.
(00:23):
Voice cloning and deepfakes have hit the highest levels of the US government.
Someone used artificial intelligence to copy Secretary of State Marco Rubio's voice and fooled three foreign ministers, a US governor, and a member of Congress.
The State Department caught the scam in mid-June.
The faker created a Signal account with the name Marco.Rubio@state.gov
(00:48):
and left AI voice messages for targets.
The State Department cable said the person left voicemails on Signal for at least two targeted individuals, and in one instance sent a text message inviting the individual to communicate on Signal.
This shows just how easy AI voice cloning has become. You need just 15 to 20 seconds
(01:09):
of audio of the person, which is easy in Marco Rubio's case. You upload it to any number of services, click a button that says you have permission to use this person's voice, and then type what you want them to say.
The attack also reveals a big problem with Signal, the encrypted messaging app that the Trump administration uses heavily. Signal
(01:30):
protects your messages, but it can't stop someone from pretending to be you.
And this isn't the first time this has happened. This spring, someone made a fake video of Rubio saying he wanted to cut off Ukraine's Starlink internet.
In May, someone hacked White House Chief of Staff Susie Wiles' phone and pretended to be her
(01:51):
when calling senators and governors.
The bigger picture is scary.
Government officials everywhere are sitting ducks, because AI voice cloning got so easy while security protocols stayed the same.
Anyone with public audio can be faked in minutes.
For businesses, this is also a huge problem.
Many corporate executives' voices are easily available as well,
(02:15):
and companies can train people to spot fake emails, but would your processes spot an instruction that was sent on one channel, like a voicemail, and validated on another, such as an email?
It's a question to ask, and it's time for governments and companies to develop protocols and training to address the inevitable use of deepfakes.
(02:41):
Companies are racing to build tools that connect AI models and agents directly to enterprise systems and data, and these connections are inevitable, even with critical enterprise systems.
The question isn't whether this will happen; it's how much time we have to make this happen safely.
Three recent developments from the past week show the promise and peril of this trend.
(03:05):
Google open-sourced tools that let AI agents query databases with minimal code.
The Linux Foundation launched a protocol so different AI agents can communicate across platforms.
And meanwhile, researchers have found critical security flaws in Anthropic's AI development tools.
(03:26):
And for those who might have missed it, Anthropic is the AI company that created MCP, the new protocol that allows AI to connect directly with applications.
The good news is that structured and standard methods for AI-to-enterprise connections are emerging.
Google's database toolkit lets developers integrate databases with
(03:48):
AI agents using a configuration-driven setup, where they simply define their database type and environment and the toolbox handles the rest.
Instead of hacks and workarounds, companies get standardized approaches.
And standardized approaches can, theoretically at least, be made safer.
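To make "configuration-driven" concrete, here's a minimal sketch in Python of what that kind of setup looks like. To be clear, the file name, field names, and layout below are assumptions for illustration, not Google's actual toolkit schema; the point is that the database type, the connection details, and the only queries an agent may run are declared as data rather than wired into application code.

```python
# Illustrative only: the file name, field names, and layout here are
# assumptions for this sketch, not the actual schema of Google's toolkit.
import yaml  # requires the PyYAML package

config = {
    "sources": {
        "orders-db": {
            "kind": "postgres",             # database type is declared, not coded
            "host": "db.internal.example",  # hypothetical host
            "database": "orders",
            "user_env": "DB_USER",          # credentials come from the environment
            "password_env": "DB_PASSWORD",
        }
    },
    "tools": {
        "lookup-order": {
            "source": "orders-db",
            "description": "Fetch one order by id for a support agent.",
            "statement": "SELECT id, status, total FROM orders WHERE id = $1;",
            "parameters": [{"name": "order_id", "type": "integer"}],
        }
    },
}

# An agent framework would load a file like this and expose only the declared
# queries to the model, a much narrower surface than raw database access.
with open("tools.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)
```

That narrowing is where the safety argument comes from: an agent can call the declared lookup, but it can't improvise arbitrary SQL.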
The Linux Foundation's A2A protocol takes this further.
(04:11):
It creates a common language for AI agents from different companies to discover each other and collaborate automatically.
And over 100 tech companies now support this protocol, suggesting that the industry recognizes the need for these standards.
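For a rough feel of what that discovery step involves, here's a short Python sketch: one agent fetches another agent's self-description, its "agent card", from a well-known URL and reads off the skills it advertises. The host, path, and field names are assumptions for illustration, not a verified A2A client.

```python
# Simplified illustration of A2A-style agent discovery. The host, path, and
# field names are assumptions for this sketch, not a verified A2A client.
import json
from urllib.request import urlopen

AGENT_BASE_URL = "https://agents.example.com"  # hypothetical remote agent

# Step 1: fetch the agent's self-description (its "agent card").
with urlopen(f"{AGENT_BASE_URL}/.well-known/agent.json") as resp:
    card = json.load(resp)

print("Agent:", card.get("name"))
print("Skills advertised:")
for skill in card.get("skills", []):
    print(" -", skill.get("id"), skill.get("description"))

# Step 2: with a skill chosen, the calling agent would send a task to the
# endpoint the card advertises. The protocol's job is to make this handshake
# look the same no matter which vendor built either agent.
```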
But these systems are new.
And new systems create new attack vectors.
(04:32):
Researchers found that Anthropic's MCP Inspector carried a critical vulnerability with a 9.4 out of 10 severity score.
Attackers could run arbitrary commands on a developer's machine by creating malicious websites.
So here's the reality.
AI agents will connect to your enterprise systems whether you plan for it or not.
(04:55):
Employees will find ways to hook AI tools into databases,
applications, and workflows.
The smart move is getting ahead of this trend with proper security frameworks. It is going to be a challenge.
Think about what's coming.
Instead of isolated AI tools, you'll have networks of AI agents that can communicate across platforms, access databases, and coordinate actions.
(05:20):
Each connection point becomes a potential attack vector, but also a
potential productivity multiplier.
And when productivity meets risk, productivity wins.
The emergence of structured protocols is encouraging.
It means the industry is starting to think about interoperability and security that's
(05:40):
built in rather than bolted on later.
For those who want to get started finding out more about this,
I put a couple of links in the show notes.
On Monday's show, we covered the fact that IT distributor Ingram Micro suffered a SafePay ransomware attack last Thursday, and it knocked out their websites and ordering systems worldwide.
(06:01):
Since Ingram Micro is one of the world's largest distributors, the impact on their thousands of partners and customers was severe.
We've been looking at the Ingram Micro site. We can't find any updates there, but Bleeping Computer has managed to get some, and we'll pass these on to you.
The July 7th update was that Palo Alto Networks responded to reports that the attackers breached through Ingram Micro's GlobalProtect VPN platform.
(06:26):
Palo Alto reportedly said to Bleeping Computer that they are currently investigating these claims, and said that threat actors routinely attempt to exploit stolen credentials or network misconfigurations to gain access through VPN gateways.
On July 8th, Ingram Micro said that they were starting to bring systems back online.
(06:47):
The quote was that today we made important progress on restoring our transactional business. Subscription orders, including renewals and modifications, are available globally and are being processed centrally via the Ingram support organization.
So that would mean that they can process orders by phone or email, and they
(07:07):
say that they can do this for the UK, Germany, France, Italy, Spain, Brazil,
India, China, Portugal, and the Nordics.
However, some limitations still exist with hardware and other technology orders, which would be clarified as the orders are placed.
Google is forcing Android users into a confusing privacy maze with
(07:31):
new changes that let its Gemini AI access third-party apps, even if the users previously said no.
The rollout started July 7th, and the communication has been anything but clear.
According to Ars Technica, Google sent users an email saying Gemini will now interact with third-party apps like WhatsApp, regardless of previous
(07:55):
settings that blocked such access.
The email links to a notification saying that human reviewers, including service providers, read, annotate, and process the data Gemini accesses, and from there it gets even more confusing.
The email, according to Ars Technica, provides no useful guidance for
(08:16):
preventing changes from taking effect.
Users are told they can block apps, but even in those cases,
data is stored for 72 hours.
Even more troubling, the email never explains how users can fully extricate Gemini from their Android devices, and seems to contradict itself on
how or whether this is even possible.
(08:40):
Google's official statement tries to reassure users, saying, "If you've already turned these features off, they will remain off."
But multiple sources report that users who previously disabled these integrations
are finding their settings overridden.
The privacy implications are significant.
Gemini can now access call and message logs, contacts, installed apps like
(09:03):
Clock, language preferences, and screen content, and this data flows to human reviewers who can read, annotate, and process your Gemini Apps conversations.
Another problematic element is the automatic opt-in approach. Many users are reporting difficulty finding clear instructions on how to disable these features,
(09:23):
with some saying Google's own documentation seems unclear about whether full opt-out is even possible.
The rollout appears inconsistent across devices and regions, adding to user confusion.
Some Android users report not receiving the notification email at all.
(09:44):
I confess that we're iPhone users, and I haven't had the time before we went to air to track down a knowledgeable Android user to validate this personally.
But if this process was so unclear to another tech writer, in this case Ars Technica, how would an individual user fare?
And of course, for those who are using their Android phones, and it matters for
(10:09):
those who are using them for work, how much data is being exposed?
It's time to dig a little deeper.
And it's also a heads up that, as we talked about enterprise-level integration in our earlier story, we're also gonna be dealing with the continuing integration of AI on the desktop and now on our phones: just
(10:31):
one more attack vector to cover.
Facilitate comprehensive cognitive behavioral framework utilization through systematic algorithmic implementations within interdisciplinary research paradigms for advanced computational methodologies.
Oh God, we've all had to struggle to stay awake through some presentation
(10:52):
or read some report full of high-sounding phrases, looking for a fact we can hang on to.
But then it's just corporate bs, right?
Ignore it.
Nod and smile.
It'll go away, at least until next time.
But someone has actually found a use for this horse hockey.
It turns out that researchers discovered you can trick AI chatbots, like ChatGPT,
(11:17):
into giving dangerous information if you just make your question sound academic enough.
A team from Intel, Boise State, and the University of Illinois created a method called Info Flood.
It takes banned requests, things that AI should flag and refuse, and wraps these
in jargon and fake research citations.
(11:40):
Instead of asking "how do I hack an ATM," which a good AI will refuse to tell you, you flood the AI with academic-sounding language and fake paper references and, voila.
That's fancy talk for "yeah, here it is."
It works because AI systems think that if something sounds scholarly,
(12:02):
it must be legitimate research.
So instead of the guardrails catching keywords and responding with that familiar "sorry, as an AI language model," you just add enough impressive jargon and the safety systems apparently get confused. The researchers claim they achieved near-perfect success rates on multiple frontier LLMs using this technique.
(12:26):
So it turns out the old saying is right.
Bullshit does baffle brains, in this case even artificial ones.
And that's our show for today.
Love to hear your thoughts.
You can reach us on our new, improved website at technewsday.ca or .com.
Use the contact us form, or if you're watching this on YouTube,
(12:46):
drop us a note under the video.
I am your host, Jim Love.
Thanks for listening.