
July 7, 2025 14 mins


Those three seconds of your voice on social media? That's all scammers need to clone it perfectly. That routine video call with your CFO requesting an urgent transfer? It might be entirely fabricated by AI.

Artificial intelligence has supercharged fraud, creating a frightening new reality where familiar voices and faces can no longer be trusted. In this eye-opening episode, we explore the alarming rise of AI-powered scams that exploit our most basic human instinct – trust.

Through chilling real-world examples, we reveal how these attacks unfold. You'll hear about Jennifer DeStefano, the mother who received a terrifying call from her "kidnapped" daughter begging for help, only to discover it was an AI voice clone. We examine the sophisticated $25 million corporate heist where an entire video meeting – participants and all – was completely fabricated. These aren't futuristic scenarios; they're happening right now.

The scope of these deceptions extends far beyond financial damage. They create what experts call the "liar's dividend" – a world where real evidence can be dismissed as fake while fabrications gain credibility. This erosion of trust threatens relationships, businesses, and even our grasp on shared reality.

But knowledge is power. We provide practical, actionable strategies to protect yourself and your organization: verification techniques, technological safeguards, and the critical importance of slowing down when faced with urgent requests. As we navigate this new landscape, remember that awareness and healthy skepticism aren't cynicism – they're your best defense in a world where seeing is no longer believing.

Want to join a community of AI learners and enthusiasts? AI Ready RVA is leading the conversation and is rapidly rising as a hub for AI in the Richmond Region. Become a member and support our AI literacy initiatives.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome back to Inspire AI, where we explore how
artificial intelligence is transforming our world for
better and sometimes for worse.
I'm your host, Jason McGinty, and today we're tackling a
serious but crucial topic: how AI is powering a new era of scams.
Imagine getting a late-night call from a loved one, their

(00:24):
voice trembling with fear, only to learn it wasn't them but an
AI-generated clone crafted by scammers.
Welcome to the new reality of fraud, where AI gives criminals
shockingly convincing tools to deceive.

(00:49):
In this episode, we'll reveal how AI scams have evolved far
beyond old-school phishing emails.
You'll hear stats showing the explosive rise of these crimes,
expert insights into why they're so effective, and chilling case
studies, from cloned voices demanding ransom to deepfake
videos authorizing million-dollar transfers.
But don't worry, we'll also share some practical tips to

(01:14):
help you protect yourself and your organization as we dive in.
Just remember, in the age of AI, even familiar voices deserve a
second check.
Here's a case study from 2023.

(01:37):
Arizona mom Jennifer DeStefano got a call that made her heart
stop: her 15-year-old daughter's voice sobbing for help, claiming
she'd been kidnapped.
A man then demanded a $1 million ransom, threatening horrific
harm if police were contacted.

(01:58):
For several agonizing minutes, Jennifer believed it was real,
until she reached her daughter, who was safe on a ski trip.
It was all an AI-generated voice clone built from a short
recording of her daughter.
Jennifer later learned some software can create a realistic

(02:19):
voice with just three seconds of audio.
Just three seconds of audio.
A McAfee survey found 70% of people can't reliably tell a
fake voice from a real one, and law enforcement warns
similar scams have targeted grandparents with urgent calls
from grandkids needing bail money.

(02:39):
As AI expert Hany Farid explains, the bad guy can fail
99% of the time and still get rich, because just a few
successes pay off.
Even tech-savvy victims can be tricked when they hear a loved
one's voice in crisis.

(03:00):
In early 2024, a Hong Kong finance employee joined what
looked like a normal video call with several colleagues,
including the CFO, who urgently instructed a $25 million
transfer for an acquisition.
But the entire meeting was a deepfake.
The CFO and all the participants were AI-generated

(03:21):
videos.
By the time the company realized the deception, the
money was gone, making it one of the most sophisticated scams
reported to date.
Other incidents show how common this tactic is becoming.
In the UK, scammers used a deepfake of WPP CEO Mark Read to
set up a fake video meeting and tried to steal funds.

(03:44):
The plot failed only because an employee grew suspicious.
Even insiders have used AI fakes.
In 2021, an Ozy Media executive impersonated a YouTube
rep with an AI-generated voice to secure a $40 million investment,
a scam that was eventually exposed.

(04:05):
These real cases show how scammers exploit trust in voices
and videos.
They remind us that AI scams are happening now,
to families and businesses alike.

(04:25):
AI gives scammers powerful tools for a variety of frauds.
Here are a few that are making headlines.
You already heard about voice cloning, where criminals
clone the voices of loved ones or executives to demand money
urgently.
This includes fake hostage and grandparent scams, where victims

(04:45):
hear a realistic voice begging for help.
And of course, we have video deepfakes, where
AI-generated videos place someone's face in a clip saying
or doing things they never did.
Scammers use deepfake videos to impersonate CEOs or celebrities,
and these fake videos can also authorize fraudulent wire

(05:08):
transfers, exploiting our trust in video evidence.
Here's one you might not have come across as much: AI-enhanced
phishing and chatbots.
AI writes flawless phishing emails or chats that read like
legitimate business messages.
These easily avoid the old grammatical red flags.

(05:30):
Reports show a 1,200%-plus surge in malicious phishing emails
since generative AI tools went mainstream.
AI chatbots can also convincingly pose as customer service agents
or romantic interests, adapting responses in real time to keep

(05:51):
victims hooked.
Speaking of romantic interests, here are some scams from AI
lovers.
Romance scams already cost Americans $1.3 billion in 2022.
These scams now leverage AI-generated profiles, deepfake
photos, or chatbots to build fake relationships.

(06:15):
Some scammers use deepfake videos on calls to prove they're
real, making it easier to ask for emergency funds, as I
alluded to a minute ago.
There's also fake customer service and tech support.
This is where AI-powered imposter hotlines or website
chatbots mimic real company reps, tricking victims into giving

(06:39):
up sensitive info.
For instance, an AI voice answering a fake support line can
capture customer numbers directly.
Always confirm you're on official websites or calling official
numbers.
And then, finally, there's AI-generated misinformation.

(06:59):
Beyond direct theft, AI-created fake documents, videos, or
announcements can manipulate stock prices, ruin reputations,
or spread political disinformation.
A deepfake of a public figure can cause chaos, and criminals

(07:20):
can use AI-made fake IDs or bank statements for fraud or money
laundering.
Each of these scams attacks our ability to trust what we see,
hear, or read, showing how AI is transforming fraud into
something faster, more scalable, and alarmingly believable.

(07:43):
Knowing these tactics is the first step in spotting something
off and staying ahead.
AI scams don't just harm individual victims.
They threaten trust across society.
We've long relied on voices and videos to confirm reality, but
with AI-generated fakes, even familiar calls and videos can be

(08:06):
deceptive.
This fuels what experts call the liar's dividend:
real evidence can be dismissed as fake, and fakes can be
mistaken for truth.
Just take a look at X.
Imagine a politician caught on video claiming that's a deep

(08:27):
fake, or doubting your boss's urgent call because you can't
tell if it's real.
Even when victims don't lose money, the trauma can leave
lasting scars and make people more guarded or paranoid,

(08:50):
straining personal and professional relationships.
For businesses, AI scams bring serious reputational and
financial risks.
A deepfake of a CEO making outrageous statements could tank
a company's stock overnight.
Even attempted scams can force costly investigations. Worse,

(09:11):
successful scams inspire copycats, and law enforcement warns
that AI tools let scammers scale globally, overwhelming their
resources.
Beyond fraud, AI fakes can spread disinformation and
defamation, undermining public trust in information itself.
This feeds what some on the internet call the info-apocalypse, or infocalypse, where

(09:36):
we can't tell real from fake.
This confusion helps scammers thrive.
The good news? Awareness is growing, and researchers are
developing detection tools and authentication systems, but
technology and legislation are still catching up.
As regulators push for stronger penalties and anti-scam tech,

(10:01):
we all need to stay alert, because, for now, the bad actors
have a head start.
Alright, so how can we stay ahead of AI-powered fraud?
First, verify.
Verify through multiple channels.
Don't just trust urgent requests via a single call, email,
or chat.
Always confirm unexpected money or info requests by calling

(10:23):
back on a known number.
Scammers rely on panic and secrecy.
Break that by double-checking.
You can also use code words with your family or your workplace,
where only they know what the code word is.
A quick code check can expose imposters and prevent rushed

(10:47):
mistakes.
And be wary of unusual payments.
If someone demands wire transfers, crypto, or gift cards
under pressure, it's almost certainly a scam.
Legitimate businesses don't ask for secret, untraceable
payments.
Slow down and think. Scammers want you to act
before thinking. Take a moment to ask: does this make sense?

(11:12):
Odd requests late at night or out of character should raise
red flags.
Protect your data and your voice.
Limit what you share online. Public voice recordings? Uh-oh,
that's mine, I'm in trouble.
Or oversharing personal details can feed scammers'

(11:33):
AI tools.
Adjust privacy settings and avoid giving voice samples
to unknown callers.
Educate others.
Share what you learn with family and coworkers.
Training sessions or conversations about AI scams can
prepare others and reduce risks.

(11:53):
And use the tech defenses that are there.
Turn on spam call filters, keep software updated, and use
multi-factor authentication for transactions.
These extra layers make it much harder for scammers to succeed.
Remember, awareness, caution, and verification are your best

(12:16):
defenses in the age of AI scams.
Last piece of advice here: trust your instincts.
If something feels off, the timing, the tone, the request,
don't ignore that feeling.
It's better to take an extra minute to verify than to rush
into a costly mistake.
Law enforcement agencies encourage people to report
(12:39):
attempted scams, even if you didn't fall for them.
This helps them track trends and warn others.
So stay alert and remember, in an age of AI magic, a healthy
dose of skepticism is not cynicism, it's just savvy.
All right, today we uncovered how AI has transformed fraud

(13:02):
into something faster, more convincing, and harder to detect.
From cloned voices to deepfake meetings, these scams exploit
our trust in what we see and hear.
But knowing the signs gives you power.
Remember, stay alert, question the unexpected, and verify before

(13:27):
you act, and share what you've learned with others so they
don't fall victim.
Informed communities are the strongest defense.
Last thought: as the old saying goes, on the internet, nobody
knows you're a dog. In 2025, nobody knows if you're a

(13:49):
deepfake.
So treat every unexpected call and message with healthy
skepticism.
Awareness is your superpower.
So thanks for listening to Inspire AI.
Until next time, stay curious, stay cautious, and stay inspired.
© 2025 iHeartMedia, Inc.