Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome back to Inspire AI, where we explore how artificial intelligence is transforming our world, for better and sometimes for worse.
I'm your host, Jason McGinty, and today we're tackling a serious but crucial topic: how AI is powering a new era of scams.
Imagine getting a late-night call from a loved one, their
(00:24):
voice trembling with fear, only to learn it wasn't them but an AI-generated clone crafted by scammers.
Welcome to the new reality of fraud, where AI gives criminals shockingly convincing tools to deceive.
(00:49):
In this episode, we'll reveal how AI scams have evolved, far beyond old-school phishing emails.
You'll hear stats showing the explosive rise of these crimes, expert insights into why they're so effective, and chilling case studies, from cloned voices demanding ransom to deepfake videos authorizing million-dollar transfers.
But don't worry, we'll also share some practical tips to
(01:14):
help you protect yourself and your organization as we dive in.
Just remember: in the age of AI, even familiar voices deserve a second check.
Here's a case study from 2023.
(01:37):
Arizona mom Jennifer DeStefano got a call that made her heart stop: her 15-year-old daughter's voice, sobbing for help, claiming she'd been kidnapped.
A man then demanded a $1 million ransom, threatening horrific harm if police were contacted.
(01:58):
For several agonizing minutes, Jennifer believed it was real, until she reached her daughter, who was safe on a ski trip.
It was all an AI-generated voice clone built from a short recording of her daughter.
Jennifer later learned some software can create a realistic
(02:19):
voice with just three seconds of audio.
Just three seconds of audio.
A McAfee survey found 70% of people can't reliably tell a fake voice from a real one, and law enforcement warns that similar scams have targeted grandparents with urgent calls from grandkids needing bail money.
(02:39):
As AI expert Hany Farid explains, the bad guy can fail 99% of the time and still get rich, because just a few successes pay off.
Even tech-savvy victims can be tricked when they hear a loved one's voice in crisis.
(03:00):
In early 2024, a Hong Kong Binance employee joined what looked like a normal video call with several colleagues, including the CFO, who urgently instructed a $25 million transfer for an acquisition.
But the entire meeting was a deepfake.
The CFO and all the participants were AI-generated
(03:21):
videos.
By the time the company realized the deception, the money was gone, making it one of the most sophisticated scams reported to date.
Other incidents show how common this tactic is becoming.
In the UK, scammers used a deepfake of WPP CEO Mark Read to set up a fake video meeting and tried to steal funds.
(03:44):
The plot failed only because an employee grew suspicious.
Even insiders have used AI fakes.
In 2021, an Aussie media executive impersonated a YouTube rep with an AI-generated voice to secure a $40 million investment, a scam that was eventually exposed.
(04:05):
These real cases show how scammers exploit trust in voices and videos.
They remind us that AI scams are happening now, to families and businesses alike.
(04:25):
AI gives scammers powerful tools for a variety of frauds.
Here are a few that are making headlines.
You already heard about voice cloning, where criminals clone the voices of loved ones or executives to demand money urgently.
This includes fake hostage and grandparent scams, where victims
(04:45):
hear a realistic voice begging for help.
And of course, we have video deepfakes, where AI-generated videos place someone's face in a clip saying or doing things they never did.
Scammers use deepfake videos to impersonate CEOs or celebrities, and these fake videos can also authorize fraudulent wire
(05:08):
transfers, exploiting our trust in video evidence.
Here's one you might not have come across as much: AI-enhanced phishing and chatbots.
AI writes flawless phishing emails or chats that read like legitimate business messages.
These easily avoid the old grammatical red flags.
(05:30):
Reports show a 1,200%-plus surge in malicious phishing emails since generative AI tools went mainstream.
AI chatbots can also convincingly pose as customer service agents or romantic interests, adapting responses in real time to keep
(05:51):
victims hooked.
Speaking of romantic interests, here are some scams from AI lovers.
Romance scams already cost Americans $1.3 billion in 2022.
These scams now leverage AI-generated profiles, deepfake photos, or chatbots to build fake relationships.
(06:15):
Some scammers use deepfake videos on calls to prove they're real, making it easier to ask for emergency funds, as I alluded to a minute ago.
There's also fake customer service and tech support.
This is where AI-powered imposter hotlines or website chatbots mimic real company reps, tricking victims into giving
(06:39):
up sensitive info.
For instance, an AI voice answering a fake support line can capture customer numbers directly.
Always confirm you're on official websites or calling official numbers.
And then, finally, there's AI-generated misinformation.
(06:59):
Beyond direct theft, AI creates fake documents, videos, or announcements.
It can manipulate stock prices, ruin reputations, or spread political disinformation.
A deepfake of a public figure can cause chaos, and criminals
(07:20):
can use AI-made fake IDs or bank statements for fraud or money laundering.
Each of these scams attacks our ability to trust what we see, hear, or read, showing how AI is transforming fraud into something faster, more scalable, and alarmingly believable.
(07:43):
Knowing these tactics is the first step in spotting something off and staying ahead.
AI scams don't just harm individual victims.
They threaten trust across society.
We've long relied on voices and videos to confirm reality, but with AI-generated fakes, even familiar calls and videos can be
(08:06):
deceptive.
This fuels what experts call the liar's dividend: real evidence can be dismissed as fake, and fakes can be mistaken for truth.
Just take a look at X.
(08:27):
Imagine a politician caught on video claiming that's a deepfake, or doubting your boss's urgent call because you can't tell if it's real.
Even when victims don't lose money, the trauma can leave lasting scars and make people
(08:50):
more guarded or paranoid, straining personal and professional relationships.
For businesses, AI scams bring serious reputational and financial risks.
A deepfake of a CEO making outrageous statements could tank a company's stock overnight.
Even attempted scams can force costly investigations.
Worse,
(09:11):
success stories inspire copycats, and law enforcement warns that AI tools let scammers scale globally, overwhelming their resources.
Beyond fraud, AI fakes can spread disinformation and defamation, undermining public trust in information itself.
Some on the internet call this the info-apocalypse, or infocalypse, where
(09:36):
we can't tell real from fake.
This confusion helps scammers thrive.
The good news? Awareness is growing, and researchers are developing detection tools and authentication systems, but technology and legislation are still catching up.
As regulators push for stronger penalties and anti-scam tech,
(10:01):
we all need to stay alert, because, for now, the bad actors have a head start.
Alright, so how can we stay ahead of AI-powered fraud?
First, verify.
Verify through multiple channels.
Don't just trust urgent requests via a single call, email, or chat.
Always confirm unexpected money or info requests by calling
(10:23):
back on a known number.
Scammers rely on panic and secrecy.
Break that by double-checking.
You can also use code words for your family or your workplace, where only they know what the code word is.
A quick code check can expose imposters and prevent rushed
(10:47):
mistakes.
And be wary of unusual payments.
If someone demands wire transfers, crypto, or gift cards under pressure, it's almost certainly a scam.
Legitimate businesses don't ask for secret, untraceable payments.
Slow down and think.
Scammers want you to act before thinking.
Take a moment to ask: does this make sense?
(11:12):
Odd requests late at night or out of character should raise red flags.
Protect your data and your voice.
Limit what you share online.
Public voice recordings (uh-oh, that's mine, I'm in trouble) or oversharing personal details can feed scammers'
(11:33):
AI tools.
Adjust privacy settings and avoid giving voice samples to unknown callers.
Educate others.
Share what you learn with family and coworkers.
Training sessions or conversations about AI scams can prepare others and reduce risks.
(11:53):
And use the tech defenses that are out there.
Turn on spam call filters, keep software updated, and use multi-factor authentication for transactions.
These extra layers make it much harder for scammers to succeed.
Remember: awareness, caution, and verification are your best
(12:16):
defenses in the age of AI scams.
Last piece of advice here: trust your instincts.
If something feels off, the timing, the tone, the request, don't ignore that feeling.
It's better to take an extra minute to verify than to rush into a costly mistake.
Law enforcement agencies encourage people to report
(12:39):
attempted scams, even if you didn't fall for them.
This helps them track trends and warn others.
So stay alert and remember: in an age of AI magic, a healthy dose of skepticism is not cynicism, it's just savvy.
All right, today we uncovered how AI has transformed fraud
(13:02):
into something faster, more convincing, and harder to detect.
From cloned voices to deepfake meetings, these scams exploit our trust in what we see and hear.
But knowing the signs gives you power.
Remember: stay alert, question the unexpected, and verify before
(13:27):
you act, and share what you've learned with others so they don't fall victim.
Informed communities are the strongest defense.
Last thought: as the old saying goes, on the internet, nobody knows you're a dog.
In 2025, nobody knows if you're a
(13:49):
deepfake.
So treat every unexpected call and message with healthy skepticism.
Awareness is your superpower.
So thanks for listening to Inspire AI.
Until next time, stay curious, stay cautious, and stay inspired.