All Episodes

April 30, 2025 42 mins

Can deepfakes go from dangerous to delightful? In this episode of the AI Proving Ground Podcast, Adam Dumey and Chris Roberts explore one of the more surprising — and controversial — topics in generative AI: the use of synthetic media and deepfakes in personalized customer experiences. You'll learn: what deepfakes really are and why they're no longer just a security threat; how AI-powered personalization is reshaping the customer experience; and the ethical and technical challenges of using AI-generated content responsibly.

Learn more about this week's guests: 

Adam Dumey is a tech executive with 20+ years of experience leading AI, Autonomous Retail, and Cloud initiatives across sectors. He advises Boards and C-suites on digital transformation, driving major efficiency and revenue gains. At WWT, he leads Global Retail Sales, helping clients grow revenue and optimize operations by aligning strategy, tech, and partners like NVIDIA, Dell, CrowdStrike, and Cisco.

Adam's top pick: Navigating the Future: Three Emerging Trends in the QSR Industry

Chris Roberts is a technology expert at World Wide Technology with extensive experience across aerospace, AI, adversarial AI, deepfakes, cryptography, and deception technologies. He has founded or collaborated with multiple organizations in human research, data intelligence, and security. Today, he focuses on advancing risk management, maturity models, and industry-wide collaboration and communication.

Chris's top pick: Deepfake Deception: Can You Trust What You See and Hear?

The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.

Learn more about WWT's AI Proving Ground.

The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.

Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Today on the AI Proving Ground Podcast: deepfakes, deception and user experience. You've heard the horror stories: synthetic voices stealing identities, videos that rewrite reality. But what if this same technology could be used not to trick us, but to delight us? On today's show, I'm joined by AI veteran and Global VP of

(00:21):
Retail for WWT, Adam Dumey, as well as AI and deepfake expert, Chris Roberts, to explore the double-edged sword of deepfakes in the age of personalization, from AI's early days, locked behind proprietary walls, to today's mass-market accessibility. Adam takes us through the evolution, and Chris, well, he's here to remind us just how real the risks still are.

(00:43):
But here's the twist: what if seeing your own face in online ads, or even in a dressing room mirror, wasn't creepy but compelling? Could synthetic media actually become your most powerful marketing tool? Stick around, because what you hear today might just reshape how you think about identity, influence and the future of AI-driven commerce.

(01:04):
Chris, our first repeat guest. Congratulations. You and your beard made it back for round two.

Speaker 2 (01:20):
Yeah, I appreciate it. I got let back in the studio again, and this time obviously thanks to Adam. Much appreciated, sir.

Speaker 1 (01:25):
Yeah, and Adam, first time, welcome to the show.
Thank you, thank you.
We're going to be talking about deepfakes and AI in retail today. We'll get into the meat of the conversation here in a bit. But, Adam, I'm just curious from your perspective. You've been doing AI for more than a decade, from what I can tell, back to the IBM Watson days in a prior career of yours. What have you seen from that point to today in how AI has

(01:48):
shifted, not only in the technological advancements, but in how the industry and the public look to consume it?

Speaker 3 (01:54):
Yeah, it's interesting. So back when I started, everything in AI was proprietary. Forget the software; the hardware, it had to run on this exact hardware, and the cost of that was so high it limited it to just a certain number of buyers across the world, so the adoption wasn't fantastic.
So the great thing about the past two years in particular is

(02:14):
all those challenges have been mitigated. And also, AI when I started was really driven from an enterprise level, and now it's at a consumer level, so the accessibility is much, much greater, and that unlocks a lot of different use cases and creativity. Just the presence and availability and awareness of what AI can do every day continues to increase and deepen.

Speaker 1 (02:35):
Yeah, absolutely. You talk about some of those challenges being mitigated. One of the challenges not mitigated, perhaps even increasing: Chris, deepfakes. Level-set with us before we get into the retail components here. What are deepfakes, by your definition, and where are they at right now in terms of the landscape?

Speaker 2 (02:52):
So I think you've got a couple of different areas to look at. If you look at the stuff that we see a lot more of, it's the video deepfakes. So it's me taking a video of Adam giving maybe a keynote speech somewhere and changing the context of it completely. So instead of him giving a keynote, like I've done with a couple of folks inside here, he resigns from WWT and goes off to raise

(03:14):
koalas in Tasmania, and it looks real, it sniffs real, it smells real and all this other good stuff. So that's one. The audio deepfakes are what we're seeing a lot of in the attacks against corporations, which is, again, I change my voice characteristics. I can use my own words, but my voice characteristics mask, say, maybe the CFO's. I get you to transfer money. And then obviously we have text, we have social media

(03:37):
and all the other stuff that goes with it.

Speaker 1 (03:38):
Yeah, so that's the state of deepfakes. Deepfakes can get that scary rap. We hear about all the bad things that happen. But, Adam, you see opportunity here, specifically as it relates to retail.

Speaker 3 (03:54):
Yeah. If you think of retail, the holy grail is personalization: understanding what Brian wants, when Brian wants it, how you could serve it up to him. And to do that you need a lot of data, right? And so once you have that data, the question is, what do you do with it? Back 20 years ago, it was, OK, here's Brian's zip code, here are the attributes associated with it. Now we're moving to a world where we're not too far off of a deepfake being part of a campaign. So you can imagine.

(04:14):
What's your favorite airline, let's say?

Speaker 1 (04:17):
I'm a Southwest frequent flyer.

Speaker 3 (04:18):
Perfect. Imagine you got served a Southwest ad that had your face and likeness in the preferred seat on a Southwest plane, leaving you at a certain destination which you particularly enjoy, with the excursions and the activities. That creates a very compelling emotional connection that should drive conversion.
And so, to what Chris mentioned earlier, that ability of taking

(04:39):
a synthetic image of someonewith high fidelity but using it
more for good to have themvisualize an experience.
We're having conversation ofthat with customers right now
and so we're not quite there.
There's still kind of a datafoundation that across the
retail lens isn't quite checked,but we're not very far away.

Speaker 1 (04:57):
Chris, what are you seeing? You know, deepfake for good might be a little bit of a 180 view for some. Are you seeing that starting to take place?

Speaker 2 (05:03):
Yeah, absolutely.
I mean, it's the same thing Adam and I have talked about. It's the same thing walking into a store. If you think about it, nowadays, when you walk into a store, typically you're going in for reasons, whys, wherefores or logic; you've had an influence online through media or something else. But again, taking the idea from Adam: I walk into a 5.11 store, I walk into an REI or somewhere like that, and I see an image
(05:25):
of me. We were working on something for RSA and a couple of other conferences where you walk up to literally a dark mirror and you see an image of yourself. And what we were going to do, and have some fun with it, was change it to be a male version, a female version, put some ethnicity in there, have some fun with it, but also show all of their social media.

(05:46):
So how much of this data makes this person up? Because obviously we've put so much out there, and, to Adam's point, so much of it's being used now to basically profile you.

Speaker 3 (05:55):
Yeah, and just think of the data presence. So in retail there's a physical and there's a digital component. Chris hit on the digital; on the physical side, imagine you go into a mall, right? The second you park, the second you go in a store, the second you go in a hallway, the second you interact with a product or hover in an aisle, that data is being captured. And so now you layer that on top of the digital footprint

(06:24):
which traces everything from the second you turn on your computer: the website, the sequence, the information. So now there's this really rich, rich picture, and that data, with some of the technology we're seeing in the deepfake space, is being used to extend the reach of what AI and data can do. So I think it's fascinating. But it's also interesting as you think about new technology: everyone's fearful, right? If you go back to the car, it was called the devil wagon for a while, right? You think of the printing press and the issues with the Catholic Church.

(06:44):
There's an issue here about the concern. I think it originates from just a lack of control, right?

Speaker 2 (06:51):
Yeah, it's lack of knowledge, lack of control. But I think the other part of it comes in as well: to your point with the car and all the other stuff, it was tangible, that's right, whereas all of our data is so intangible.

Speaker 3 (07:01):
And at the core of a deepfake, it is you, right? It is Chris Roberts, and so losing the sense of ownership of that is a dangerous idea. So I worked at an intelligence agency years ago, earlier in my career, and I received a call four years ago from someone, and he told me: Adam, take that voicemail down on your mobile

(07:22):
phone, because it had just 18 seconds. That was enough to take that and create a synthetic version, and it started getting me thinking: what's mine, right? Back then, four years ago, I lost my voice. Now I could lose everything else, right? So, fundamentally, I think there's something intrinsic about deepfakes that concerns and scares people, and I get it.

Speaker 2 (07:41):
But I think, like any technology, there's another side. The thing that adds to that, when you think about it: this is why we can't get rid of passwords. You know, no matter when we talk about passwordless and everything else, we can't get rid of them, because if we lose them we can easily replace them. Just change the thing again,

(08:13):
put different two factors in and everything else.

Speaker 3 (08:16):
But if we lose our identity, it's gone. The fingerprint goes, it's gone. We can't easily and readily replicate these things and get new ones. So I think, when that was breached, everyone was upset, but to your point, it was just a swab that's someplace in California. But seeing your image and your likeness, combined with a high-fidelity voice characteristic, that is threatening.

Speaker 1 (08:37):
Yeah. Well, Chris, I think that probably speaks to the need for securing the entire solution, to make sure those important data points are safe and sound from a consumer's perspective, to foster trust.

Speaker 2 (08:49):
Yes, but. And that's the "yes, but" where I think part of the challenge comes in. Because on one hand you want to secure that data, but on the other hand, if I'm that retailer, obviously I'm going to maximize my investment by using it. A perfect example: I put a LinkedIn post out a couple of days ago. I'd gone onto a website and it said: hi, sign

(09:12):
here to share your information with 924 of our partners. No more, no less. It literally had the number there, in little yellow. Whether I had said yes to that, or even if I said no to that stuff, they're still going to share that data with so many different people. So I think that's part of the problem: I don't mind sharing something if I know that everybody is going to treat it

(09:33):
the same way I would share it, or if it's going to get used for a good reason. But if it starts, and this is where it is really interesting with the retail sector: I don't mind somebody telling me about something that maybe I like. Like, you know, these hoodies: I like them, I enjoy them. But if I go from a size large hoodie to a size extra-large hoodie, I don't want that information going to my healthcare provider

(09:55):
going: hey, Chris has put on a couple of extra pounds, you maybe want to do something about that. So that's where I think you have to be really, really careful with that data.

Speaker 3 (10:03):
Yeah, and to your question: obviously the data has to be secure, but we're having conversations with retailers now, asking: do you need that data? We do a lot of work at Worldwide in the QSR space, and we're having a conversation with one QSR asking: do you really need all of this data? Some of it is old, some is outdated, some of it's third party. What's the data strategy you should invoke to get more

(10:25):
relevant first-party data, and just the right amount? Because, again, to the idea of personalization, you want to get the outcome, and historically it's just been getting as much data as possible. But when you have that data, your attack surface increases. You now have to store that data, where there's a cost, and you have to secure it. So we're having conversations to start pivoting a little bit on the basis of what you need for that personalization.

Speaker 1 (10:57):
Well, you're talking a lot about a physical environment here in terms of personalization. And earlier you mentioned malls and AI in the same sentence; pairing that with the state of malls these days was impressive, so kudos there. But are these retail deepfakes only for the environment of a physical store, or do they extend,

(11:17):
you know, omnichannel, across everywhere a retailer wants to be?

Speaker 3 (11:20):
So it's everything, everywhere now. I mean, deepfakes go back to 2018, when everyone's favorite supermodel, Cara Delevingne, was, to my knowledge, part of the first-ever deepfake campaign. It was a European retailer that was looking to get penetration across certain remote regions,

(11:41):
and so they leveraged her image and likeness. Back then there was a voice component too, to Chris's point about the emergence of audio. That campaign generated 100 million impressions, 180 million total, 40% of those viral and free, and a 54% increase in orders. So this thing has been around, and this is really more of a digital manifestation nowadays.

(12:03):
Again, the prevalence of data means that wherever that deepfake is articulated, it just has higher fidelity, right? And we've done deepfakes before; Chris has. It takes 30 minutes and $12, right? It's just super cheap to do, and so it doesn't require an expert. That accessibility we talked about earlier, combined with the data and all these different channels, and, candidly,

(12:24):
people's impressions of data now, and people's trust in certain venues, are so low that the consumption and acceptance of this is at a level that I think is dangerous.

Speaker 2 (12:36):
To your point on the acceptance: think about it. For the last several years we've trained people, regular, normal people, with our phones. We've trained them to add extra filters. We've trained them to change voices. You go back to the days when we had little GPS systems; my old GPS system used to have Monty Python on it. You know, we can do that with Alexa, we can do that with all

(12:57):
the Googles and the voices and all the other stuff. We can have them speak different languages, different phrases; we can put different faces on. So we've trained people to do this, and now, on one hand it's used, I wouldn't say for good, but for betterment. But unfortunately we're also seeing a huge amount of undertones of it being used for bad, and people aren't able to discern the good from the bad these days.

Speaker 1 (13:18):
Yeah. Well, what can retailers, or any organization for that matter, do to better protect against the good versus the bad?

Speaker 2 (13:27):
There's an element of education, but I think it's only a small element. You can only tell certain people so many times: hey, guess what, for the last 20, 30, 40 years we've been telling you this stuff, and now you can't believe anything. That doesn't work. And that's, I think, where a lot of what we're trying to do inside WWT is work with a lot of the organizations to help them better understand the signals coming in, help them better

(13:48):
process them, put the sentiment analysis behind them, and a whole bunch of other things, to try to help them and the consumers understand, or even maybe make it so the consumers don't have to know that it's happening behind the scenes. We're basically protecting them.

Speaker 3 (14:02):
And going back to your question, or your intro about my history in AI: there's going to be a long runway before deepfakes are at a scalable position. I started in AI in 2013; it's 2025, and enterprises are still implementing chatbots. So AI has come a long way. At the same time, from a scalable, global enterprise

(14:24):
perspective, that pace doesn't match, I think, what the consumer believes. And so, as we think about deepfakes, to me it's a progression, and the progression of deepfakes from a retail lens is about hyper-personalization. To get there, there are other elements of personalization that will drive more of an acceptance. So, seeing on your phone, at the right time, when you're ready,

(14:45):
an offer that's compelling. Eventually, once that starts to take on more of a prevalence, all of a sudden you're accepting of using your data in a way that's permissible, and eventually that's going to lend itself to more intrusive, more personalized manifestations of that.

Speaker 2 (15:02):
Another good example: I came out of the aviation side of things before I came here. We were looking at using AI modeling not just inside the safety of the avionics, but also for the passenger experience. Everything from, as the passenger walks up to the airport, recognizing it's that passenger: how do I personalize that experience for the passenger?

(15:22):
As the passenger gets onto the plane, how do I make sure that I'm greeting them effectively? How do I make sure, as the screen comes down, they've got their favorites on, they've pulled their favorites from either Apple or any of the other systems out there, and it says: hey, this is your environment, you're here for an extended period of time, we've got you covered? But then you can also use it for the healthcare side of things as well.

(15:43):
Not everybody likes flying. Not everybody's used to flying. Maybe somebody's not feeling well. So if you've got a camera that's able to sense temperature or moisture or liveliness, can that system then talk to somebody inside the cabin crew and say: hey, the passenger in this area is looking a little bit too hot, go check on them?

Speaker 1 (16:01):
Yeah. Well, we've mentioned QSRs, quick-serve restaurants. We've mentioned outfitters. Where else might this apply across the retail landscape? Could everybody benefit from doing this once it's implemented at scale? Can we even think beyond that, broader than retail?

Speaker 3 (16:19):
Sure, and to answer your question, the good thing about it: any segment. Within retail, though, we're seeing some customers, some retailers, look to leverage deepfakes more as a new revenue-generation tool. So you can imagine going to a movie theater, for example, and instead of ordering popcorn from Brian, now there's a digital avatar that would take that order. And so the benefit, from a new

(16:39):
revenue-generation perspective: that avatar could be a local NBA player, it could be someone that's starring in the movie that you're about to go watch, right? And so that will drive sponsorship funds that they don't have right now. And so, as we look across the landscape of retail, it doesn't have to be on a person level. It could be a character, right? Think of all the cartoons and animated films. It could be

(17:00):
a local presence. And so all of these provide more compelling and localized experiences that, we think, again, don't just cut across a QSR or a fitness chain or a movie theater; they cut across the broader retail umbrella, and then financial services and healthcare in the same way he described. I think about it as well.

Speaker 2 (17:17):
I mean, we were down south a couple of weeks ago, talking to one of the box, what do you call it, yeah, home improvement retailers. We were talking to a bunch of the folks down there. Imagine walking into those places with your own home plans or a couple of pictures. You're already able to see some of this. But imagine actually putting an AI architecture in place that

(17:38):
builds it out for you, that makes those suggestions for you. You augment that with some of the reality glasses, and all of a sudden you can go: hey, I need A, B, C and D. Here's the list, here's everything I want, and here's how it looks. That ability, that's huge.

Speaker 1 (17:54):
So you're talking about how that's there. Where are we at from a readiness standpoint? We've talked a lot about data. Are retailers, or any of the clients that we engage with, ready to put these things into action? And what's the progression for getting to the point where they can offer these to their clients?

Speaker 3 (18:12):
They're still struggling with stovepiped systems, kind of disparate data localized around, and so really aggregating that and creating the data foundation that's required, that's underway. And, like I mentioned earlier, personalization on the digital side is that first step, and eventually, from a deepfake perspective, it follows. But I do also think the consumer isn't ready, for the

(18:34):
reasons we mentioned earlier. Right, there's a high level of distrust and unfamiliarity, and also just confusion as to why I'm giving you my data to use, right? And all it takes is one bad actor, one inadvertent misstep. So right now retailers are using deepfakes in very sporadic ways, but more for fun. So you think of the Super Bowl, right? Gronk?

(18:56):
Right, Avocados From Mexico. Did you guys see this?

Speaker 1 (18:59):
Yeah, yeah, yeah.

Speaker 3 (19:02):
So he was a deepfake, and it was really well done, very controlled. But you'd call a number and Gronk would come up and give you a script about how to make guacamole from avocados. So that's how they're doing it now: more for fun, more for brands, instead of a true one-on-one engagement. So we're quite a ways away from that level.

Speaker 1 (19:25):
What other technical aspects might an organization have to think about to start putting these in place? We've talked about data already, but how do they integrate these systems into their current IT stack? What about where they're running these workloads, whether it's on...

Speaker 3 (19:36):
Cloud? Yeah, and that's one of the things we do amazingly well here: that whole assessment, the kind of architectural assessment of what your current state is. Oftentimes our customers in retail don't know that. They don't know the legacy systems that are still active; they don't know the data stores that are still not interconnected. And so, from an assessment perspective, that is how we approach it, and usually there are some pretty interesting,

(19:58):
sometimes awkward findings. But it gets them to a position of knowledge where we can start identifying: one, is this the appropriate architecture, whether it's on-prem or on cloud or some hybrid? And then also, getting back to my earlier comment: is this the right data assortment that you need to achieve your objectives? Is this the optimal set?

Speaker 2 (20:14):
I think the other thing to add on to that, one we're definitely seeing, especially within some of the telcos and some of the carriers, is they're willing to meet the consumer where the consumer is. That's right. There have been some fantastic conversations around that one, specifically around protection. You know, so many of them already have their own application stores and everything else. What they're looking to do in many cases is: how can they

(20:35):
elevate and enhance that? So we're looking a lot of the time at solutions for how we can twin capabilities in on those systems but also have, you know, cloud wherever necessary. So yeah, some good conversations, very good conversations.

Speaker 3 (20:50):
This episode is supported by Akamai Guardicore. Akamai Guardicore offers advanced microsegmentation solutions to protect critical assets from cyber threats. Secure your enterprise with Akamai's innovative security platform.

Speaker 1 (21:05):
What about ROI? Everything with AI seems to always come back to ROI. Adam, do we have an understanding yet of what type of ROI these types of solutions could lead to?

Speaker 3 (21:15):
So we talked about our favorite supermodel earlier; that ROI was amazing. Your favorite supermodel, my favorite supermodel. I'm out, it happened. And with regard to deepfakes, it's too nascent; there's no ROI for that yet. The ultimate ROI, again, is going to be that one-on-one conversion, and to get there we still have some of those prerequisites we need to check.

Speaker 2 (21:36):
Yeah.

(22:01):
So what we've now seen is, if we start looking at bolting in protections earlier in the call sequence, even before it gets to the call handler, we're starting to see huge gains on that one. So there's some fantastic stuff in there.

Speaker 3 (22:15):
And that's just the cost of time. Oh yeah, and then if it actually gets to Chris and you fall for it, it grows exponentially. And so, on the retail side, we're working with several retailers on this idea of threat intelligence, and the ROI there is kind of like insurance. Why do you buy insurance? What's the ROI every year? It's nothing, but when it hits, it hits. And so, as you think about threat intelligence: understanding what people are saying about your brand across

(22:38):
social media, across the dark web, and understanding how quickly those messages are starting to come up and rise to the surface. Is it a bot? Is it human? Is it text-oriented? Is it a video? Is it a deepfake? If it's a deepfake, how long does it take you to identify it before it's out there, and what damage is your brand suffering during that period?

(23:02):
And so very much view this asan insurance-like model, but
from a pure ROI, from a retailperspective, like a
personalization, still not quitethere.

Speaker 1 (23:10):
So, understanding that a lot of organizations want to have that pathway to ROI: is it just start small, build momentum, keep going, be iterative, and then eventually you'll get to a state where you're ready to deploy this with more scalability?

Speaker 3 (23:25):
Yeah, I think it's all about setting the foundations and getting those right. Like we talked about, it's about getting a strong data foundation to propel you to do other things, and so that's the first piece. And again, the on-ramp to more of a deepfake is more the digital: something that's not as tangible, something that you can just look at and consume and see the value of. And so you move from that, eventually, into some targeted deepfake and then

(23:48):
more of a kind of broad-scale approach. Because once the deepfake engine is turned on, it's not going to be cheap, right? And so you really want to make sure, from a retailer perspective, that you're implementing it at the right time for the right use case. And that, like all this new technology, is going to come with a little bit of a learning curve, a little bit of trial and error.

Speaker 2 (24:05):
Yeah, and the nice thing about it is we've been able to sit down with a number of clients and literally go through workshops. You know, I just came back from one up in Canada. We had 80 people, I think, within one organization; 40 or 50 attended the workshop, and it was fantastic, it was collaborative. We did three or four hours.

(24:26):
We were up there collaborating on where they are, where they're going, where they want to be, how they're going to get there. And we talked about everything from the data side of it, the identity side of it, the protection side of it, all sorts. And I think that's the nice thing about it: the conversations in that space are just so collaborative.

Speaker 3 (24:42):
And to your point about the idea of this being tangible: deepfakes really give retailers, and folks in other industries, something aspirational that they get. They understand it, right? They see an image on the screen, it's there, and they can appreciate what the end state is, versus something that's a bit more abstract.

(25:02):
And so, from an on-ramp and from a learning perspective, the benefit of this is it really provides this North Star that shines very bright.

Speaker 1 (25:08):
Yeah. Adam, help me understand this a little bit better. If it's hyper-personalization and there's a deepfake of my likeness, say, and I'm walking into an outfitter, and, Chris, to your example, I'm going from a large hoodie to an extra-large hoodie: how do I see that on the deepfake and, you know, really trust that it's going to be the right fit?

(25:29):
Is that like a latency thing, or is that just a data ingestion thing?

Speaker 3 (25:34):
Well, first, in your example, it would be conveyed usually through a mobile app, right? You would have the conversation with some entity that's representative of you. So it's capturing not just your size; it probably has other indicators: how often you've been to the gym, what's your step level, where are you eating, right? So it gives you some high fidelity in what it is saying. And then, once you actually move there, it can potentially

(25:55):
be your outfitter, or it can ask you questions about what the fit is. It could advise you to put your arms out and take some little measurements of how the cloth folds and hangs. So a lot of it is going to be this interactive element. It's not going to be the static impression of a thing. It's going to be this thing that knows you very, very well, based on all the data it's collected.

Speaker 1 (26:15):
And will that thing, you think, follow me around from brand to brand to airline to...

Speaker 3 (26:23):
It should, conceptually.
I mean, if you think of all the challenges of the omniverse, right, the big concerns about a successful omniverse are to do exactly what you're saying.
But what happened?
There are 50 omniverses, yeah, and so that interconnection is going to be a problem.
So eventually this deepfake thing is going to have to have wide, widespread acceptance on the consumer side versus the retailer side.

Speaker 1 (26:42):
Chris, you're laughing a little evil-ishly.

Speaker 2 (26:44):
Oh, 100%, because that's where my brain goes.
But I love the idea of that because, again, if you think about it, we all carry phones around with us.
We all carry, basically, our fingerprints, our digital fingerprints with us.
So, as you walk into store to store to store, the ability for my phone, as long as I give it permission, to then talk to the

(27:06):
system, to then go, hey, this is Chris, this is who Chris is, and whether that turns up on a board, whether I see myself in something, whatever it might be.
That's going to be the interesting part about it: how much of that am I willing to hand over, and all of that side of it.
But yeah, I mean, obviously there's room for all sorts of interesting areas to go through.

Speaker 1 (27:26):
Yeah, I think we can all understand the value of hyper-personalization for a retailer, Adam.
Are we doing any of this type of work right now and, if so, even if it's in nascent stages, what are the lessons learned?

Speaker 3 (27:39):
So we are doing personalization across multiple retailers, and what we're learning, again, is the importance and criticality of that data foundation.
Number one, we are increasingly having conversations again about that assortment of data and whether it's appropriate or not.
And the third piece we're having is these retailers are still struggling with how to message the utility of this data, right, in order to achieve a certain outcome.

(28:02):
On your behalf, I'm going to need X, Y and Z. So getting the trust, and although I think I love your take, I suspect that consumers' reluctance to give data through the years has kind of... oh, it's changed, it's changed, it's absolutely.

Speaker 2 (28:17):
I mean, you think about it, the breach world is a perfect example of this one.
You go back almost 20 years, when one of the first breaches occurred.
I mean, it happened, and the consumers wanted to tar and feather every single person that was there.
I mean, it's terrible, that's right.
Fast forward to now. Number one, I don't know what the latest breach was.
I have no flipping clue, that's right.
I'd have to go find the databases. Normalized, yeah, every day

(28:38):
now, yeah.
So at that point in time, I think people have also gone to the point where they know they have to hand things over.
They unknowingly do it.
We're trained. And I'll be honest, even on my side of things, skeptic that I am: if I could walk into somewhere and go, okay, I actually like that on me, rather than having to go through the flipping hassle of gathering it up, getting into

(29:00):
the changing room, faffing around in the changing room.
If I can literally walk in and go, how's that? Clicked.
Oh, you know what, I like that. I'm done, I'm sold.
Take it away. I'm willing to hand over.
And again it comes back to a risk probability thing.
If I trust that retailer, then... this is, again, back to the normal thing.
I value certain retailers over others.

(29:21):
I know who's going to sell my data.
Same thing with any of the apps we put on our phones.
Some apps sell every single thing they possibly can.
Some are like, hey, we're going to be very careful with the data.
So I think that's the other part of this: retailers are going to have to be very, very careful how they message out how they're going to use that data, who they're going to share it with, and how they'll do their best efforts to protect it.

Speaker 3 (29:53):
And that last part, around protection, is kind of the fourth bucket we're learning: the level of measures put in place may not be as rigorous as required.
And so, again, Worldwide we have a very strong cyber practice, and we also have relationships across not only the industry but also kind of within global intelligence organizations, and so it's been very obvious that over the past 18 months or so, the number of attacks on marquee US companies has shot through the roof.
And it's not necessarily to collect that data (it certainly is), but it's also to do brand damage.

(30:14):
You take a marquee US brand and you take it down.
You take a marquee AI product and you have another product supersede it.
There is something to be said about that on a political stage, and so the conversation we're having with our customers is about that security side, and we have had multiple conversations over the past three weeks where we found very concerning gaps,

(30:35):
extremely concerning gaps, on the retail side.
That would absolutely impact operations but also really damage a brand.
Where are the gaps at?

Speaker 2 (30:44):
It's the basics. Can't tell you that it's all of them, but it's some of the basics.
It's like anything. If you walk into an organization and you ask them where their assets are, be they physical assets or digital assets, most organizations have an idea where all their assets are, but very, very few of them, unfortunately, could put their, you know, put their hand on their heart, put their hand on

(31:05):
the table and go, we know where everything is.
That's just simply data sprawl, system sprawl, and the fact that maybe organizations haven't actually implemented everything as well as they should have done.
So, found a few holes in a few areas and given a few people some things to think about.

Speaker 1 (31:31):
So you mentioned back to the basics.
What else can organizations do, knowing that this is likely and probably inevitable to come down the line?
What are they doing to make sure that this is a solution that provides value but also is secure?

Speaker 2 (31:45):
I think the biggest part is, you know, the nice thing about working here is we have that team that can come in and go, hey, what do you want to build?
How do you want to build it?
Now, as you're building it, you need to trust that data.
You need to go, hey, I'm going to train the model to look at Adam and go, hey, this is Adam and this is Chris.
That data has to be as immutable as possible.
So now we get to data handling standards.

(32:07):
Now we get to data safety and security.
Now we get to, how is it immutable?
Who has access to it?
So we start talking about identity access management and control.
And once you have those conversations, organizations start to understand: unless they have good data governance, handling, management, sanitization, they're not going to get the results they really want to actually see.
So again, to Adam's point earlier, a lot of it comes down

(32:34):
to data, and then it's just best practices.
It's also the considerations for who's built the model, what model it is.
You know, we talked about this off camera, when we were talking about all the questions people should ask, and again, that's the nice thing about what we do: we can go in and be the bad guys that ask all the awkward questions, to go, hey, you've got this learning model.
How is it learning?
Where's it learning?
Where's it pulling from?
How often is it learning?
When it makes a mistake and it puts my hoodie on Adam, or vice

(32:58):
versa, how is it going to know it made a mistake, and how do I retrain it?

Speaker 3 (33:03):
And we've already established my favorite model is Cara Delevingne.
So my favorite boxer is Mike Tyson, and he's famous for the quote, everyone has a plan until they get punched in the face.
So no plan's foolproof.
So also talk about, in the event that a customer does have that vulnerability, that is attacked,

(33:23):
yeah, what are you seeing?
You know, what kind of capabilities are we showing here that we have for that recovery piece?

Speaker 2 (33:26):
We've got some.
I mean, there's some really cool stuff that we're building.
It's a ton of fun.
So we've got... I do and I don't like the digital twin word.
It's got some good connotations, but it's also got some challenging ones.
But I'm going to use it for what it is.
It is the digital equivalent of me, and if you think about it, as a human being, I like pressing buttons.
I want to click on the next email, I want to click on the

(33:48):
next button, I want to download the next thing.
But if I had my twin that did it for me, and it fell into the pit or it got its backside handed to it or whatever else it might be, I'm okay with that, because it hasn't impacted me.
So we're building out some fantastic stuff at the moment, some really, really cool tech, where it makes those decisions.
So you almost get into... you take it from a very reactive

(34:10):
situation and you start getting very proactive and predictive, and the nice thing about the AI world is you can build some really, really good predictive modeling.

Speaker 1 (34:19):
Yeah, it's interesting that you mentioned digital twin.
Are we talking digital twin and deepfakes in the same kind of ballpark here?
Should we come up with a new name for deepfake, so it's not as necessarily scary when it's deepfake for good?

Speaker 2 (34:32):
Oh, good luck on that one.
Yeah, it's like hackers.
I am a hacker, and yet I do the best I can for good, but we tend to get blamed for a lot of things.
You know, it's a tough one.
At some point, somebody somewhere is definitely going to come up with a better word for it, because it's an individual thing.
I think this is where it gets interesting: a lot of the learning models are across a population or an area and

(34:55):
organization, but for me, what we're trying to build, and what we're doing some really fun things with in a couple of the retail areas, is very much an entity that lives on our device, that represents us, and it pulls from all those different data sources, it pulls all the information, and it can make very predictive calls as to

(35:16):
basically what we're going to do next.

Speaker 3 (35:18):
And words do matter, and if you look at some of the legislation, that's popular.
So over half of the states have some legislation on deepfakes, and if you look at the wording, you're starting to see a consistent "deepfake", but now it's "artificially generated images" or "synthetically generated", so there's already, I'll say, some softening of that.
But to his point, someone's going to come up with a killer

(35:39):
word at some point.
I'm not that guy, and it's not now.

Speaker 1 (35:42):
Yeah, and I know we only have a few more minutes left.
But you mentioned, you know, policy.
What else can we expect from a policy and regulation standpoint, understanding that all of this is moving so quickly?

Speaker 3 (35:57):
Yeah, I can take a stab at that.
From a policy perspective, it really falls into three buckets.
One is election interference, and so really notifying folks that this image was generated, and we saw that in 2024, across both parties, at a state and local level, using deepfakes for their purposes. So that's a challenge.
The other one is with regard to revenge pornography and illicit images of children, and so that is the focus.

(36:18):
With regard to consumer protections, it's not really there yet.
It's likely covered under some other fraud language, but there are other measures being taken, right, from deepfake watermarking, which is kind of sort of iffy.

Speaker 2 (36:31):
I think there's 110 or 120 or so things going through legislative review at the moment, covering deepfakes in all sorts of different areas, so it'll be interesting to see what shakes out. Some consumer fakes, some data fakes.
Now, obviously, with a change in leadership, we'll see what actually goes through and what doesn't.
I think, no matter what legislation says, the best we can probably

(36:53):
do is not just educate the organizations, but definitely do what we can to educate the consumers and go, hey, here's what's coming down the line.
And this is what I love about doing things like this: we get to put this out and go, hey, listen to this.

Speaker 3 (37:08):
And there's also an element of the actual technology providers, right, and so listing out the responsibilities of those that house this information or distribute it, that there is responsibility there.
And so, just yesterday, I believe, the first lady put together a deepfake celebration in Washington, and it was passed

(37:29):
in the Senate, and President Trump indicated support. And so it is specifically around those three use cases: election, child pornography, and revenge pornography. The implementers or housers have a requirement within 48 hours to pull it off their sites.
And so now, similar to what we saw, you know, a couple of years ago with regard to content moderation, we're starting to see a similar practice rear up which, to Chris's point,

(37:52):
there are many folks and entities that are going to have a role in containing this.
Yeah, big time.

Speaker 1 (37:57):
Any lingering questions that you think organizations, whether it be retailers or anybody across the board from any industry, should be asking themselves now as we head into this future of deepfakes and using it for good?

Speaker 2 (38:13):
Ooh, man, that's a broad one.
That's a broad one. I think, for me, the first one is one of the ones we typically ask, which is why?
Why do you want to use it?
Are you using it because you're chasing everybody else, or what are you trying to gain from it?
Because we've seen this too often in technology: everybody jumps on the technology, but it doesn't really have a good business case, which is why it tends to fall by the wayside

(38:38):
again.
I think that's where it's fun sitting down and talking with the retailers: we get to do the workshops.
We get to sit down, we get to not just dig into the tech, but the business.
What's driving it?
Where are you going to be?
And I think it's nice because that drives the conversation to a success.

(38:58):
It drives it to deliverables and metrics, so that you can actually sit there and go, hey, we actually did make a difference.

Speaker 3 (39:06):
If you look at just capital allocation within an organization, take Gen AI: they didn't grow their IT budget exponentially.
They had to pull from other parts of the organization, so a massive reallocation and reprioritization had to happen.
It's the same thing with deepfakes or any other technologies.
The why? Because that money is likely going to come from somewhere else, and so you have to make the case, and so it's not always

(39:26):
necessary for every use case.

Speaker 2 (39:37):
I think the other one, thinking about this as well: this is where, again, IT and security are going to have to come out of their silos.
When you think about it, so much of our technology is behind the scenes, but when you look at this, this is interactive with everybody else.
So we have to go talk to the sales and marketing teams.
We have to go talk to legal and compliance.
We have to have those conversations with the business to understand where they're going with it, which is going to force a set of communication and collaboration functions that

(39:59):
we're not the best at doing, right?

Speaker 3 (40:01):
You add, like you mentioned: there's a technology, there's a business, there's a marketing, there's a crisis control, there's a product.
I mean, all of these, particularly for the deepfake thing, all have to work together to get to a point and then react should adverse events happen.

Speaker 2 (40:14):
Yeah, great point.

Speaker 1 (40:16):
Well, lots of work ahead of us. To the two of you, we're running out of time here, so thank you so much: Adam for his time, and Chris, look forward to having you on a third time for the Hatchery, maybe sometime soon.
Thanks again, we appreciate it.
Thank you, thanks for having us.
Okay, that's a wrap on this episode of the AI Proving Ground podcast.
Of course, big thanks to Adam and Chris for a conversation

(40:38):
that challenged how we think about synthetic personalization and AI's evolving role in our lives.
Here are three key takeaways from today's discussion.
First, AI isn't just accelerating, it's becoming ambient. From personalized ads to interactive store displays, we're entering a new era where artificial intelligence plays seamlessly into everyday life, shaping decisions in ways we may not even notice.

(40:59):
Second, that democratization of AI has opened new doors.
What was once enterprise-only is now consumer-grade, and that shift is unlocking creative and commercial potential across all sectors, just like retail.
And third, technology isn't inherently good or bad.
It's about how we use it.
Deepfakes for deception are dangerous, but deepfakes for

(41:22):
personalization? Well, that might be the next frontier in customer experience, giving brands the power to create moments that feel deeply individual and emotionally resonant.
Thanks again for tuning in.
If you found today's conversation insightful, don't forget to subscribe, leave a review, and share with someone exploring the intersection of AI and innovation.
This episode of the AI Proving Ground podcast was co-produced

(41:45):
by Naz Baker, Cara Kuhn, Mallory Schaffran, Stephanie Hammond and Marissa Reed.
Our audio and video engineer is John Knobloch, and my name is Brian Felt.
We'll see you next time.