
May 20, 2025 52 mins

In this episode, hosts Justin Shelley and Mario Zaki welcome Dr. Ilke Demir, a deepfake detection expert who developed FakeCatcher at Intel. Dr. Demir explains how deepfakes threaten businesses through financial scams, political misinformation, and reputation damage. She discusses her groundbreaking technology, which detects fake videos by analyzing blood flow signals in faces with 96% accuracy. The conversation covers practical protection strategies for business owners, including the C2PA (Coalition for Content Provenance and Authenticity) content credentials standard and protective technologies like "My Art My Choice" that prevent AI from successfully replicating content. Dr. Demir's message: we're not doomed. Businesses can protect their digital identity by being proactive and leveraging emerging authentication technologies.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Justin (00:15):
Welcome, everybody, to episode 52 of Unhacked. Guys, we are gonna talk about something that has had me terrified for a hot minute now: deepfakes. Deepfakes are coming for your business, and what to do about it. I am Justin Shelley, CEO of Phoenix IT Advisors, and I help my clients make lots of money

(00:37):
leveraging technology, and then I help them protect it from the evil Russian hackers. Also, the government fines and penalties that come after you if you don't comply. And if you do all that right or wrong and you screw something up, then the attorneys are gonna come and sue you and take everything that's left. Nobody wants that. What I do is prevent that stuff from happening. I am here as always

(01:00):
with my loyal, faithful, trusted cohost, Mario Zaki. Mario, tell everybody who you are, what you do, and who you do it for.

Mario (01:06):
Yeah. Mario Zaki, CEO of Mastech IT, located in New Jersey, right outside of Manhattan. Been in business over twenty-one years now, and we work with small to medium sized businesses to help with their IT needs and keep them protected. And we specialize in giving

(01:26):
the business owner the ability to sleep better at night, knowing that his business is safe.

Justin (01:30):
All right, guys. And I am also doubling for Brian LaChappelle today, who was with us and then had to jump off for some technical difficulties. The joke that never gets old: if only he knew a good IT company. So listen, if he comes back, we're gonna let him in. We will berate him and make fun of him, but hopefully he can jump back in here in a minute.

(01:52):
I don't know, we just got a note from him that there's a regional outage, so it's not his fault. That's good to know. Bad news is that might mean he's not going to join us today. So we're going to go on without him, but I am really excited to introduce our guest today. Like I said, we're diving into the world of deepfakes and digital deception. Could not think of a better guest to have on the show, to guide

(02:15):
us through this journey than Doctor Ilke Demir. Doctor... Jesus. Doctor Demir, say hi and thank you for being here.

Ilke (02:24):
Hello. Hello. Thank you for the invitation, and thank you for all your efforts in saying my name correctly.

Justin (02:30):
I do my best. Names are important, and I slaughter them regularly. So that's why I don't have very many friends. Doctor Demir, I was reading through your bio. I'm not going to lie, I get intimidated. That's been happening more and more on the show as we bring in high-caliber guests like yourself. So I'm just gonna read a little bit about you, and feel free to correct me if I mess anything

(02:53):
up. But you built a trusted media program at Intel. Is this like the chip manufacturing company we're talking about? The Intel?

Ilke (03:02):
Yes. The Intel. The trusted media in the Intel.

Justin (03:07):
Okay. And this included research and productization teams for building a trusted digital future. Can you just tell me briefly what that all means?

Ilke (03:18):
Yeah. You know, AI is coming very fast and very strong and very furious, baby. So we are trying to build this digital future where the actual human values, like trust and ownership and ethical considerations, all of this is not lost. It's still, like, core to whatever we do.

(03:40):
It's still core to the content, and how we can actually ensure technologically that what we see is correct, what we see is trustable, what we create is protected. We know who created it, how it was created, etcetera. So all this realm of digital content trust is within trusted media.

Justin (03:57):
Which is huge, because honestly, I don't believe jack shit anymore. If I read it online, it's probably not true. That's just kind of the feeling I've had for a while. We know that social media feeds us what they want us to see, Google results, all of this stuff. We don't know what we can trust anymore. So I really applaud your efforts, and thank you for this work. Now, it doesn't stop there. You're the mastermind

(04:19):
behind FakeCatcher, a groundbreaking tool that detects deepfakes in real time by analyzing subtle biological signals. And we're talking, like, you can detect blood flow by analyzing pixels in video. Is that right? Yes. Oh my god. Tell me more about that.

Ilke (04:36):
Yeah. When your heart pumps blood, it goes to your veins, and your veins change color. That color change is not visible to your eyes, so I cannot just, like, look at your video and try to understand your heart rate. But it is computationally visible, and it is called photoplethysmography, PPG for short. So we actually

(04:57):
collect those signals from everywhere on your face, try to correlate them, try to see whether they are like those heart rate monitors, you know, the beep-beep stuff. We look at those signals, and we actually build a deep learning model on top of that, to understand whether they are authentic signals that represent your authenticity, or they are fake and everywhere and not like a real heart.
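To make that concrete, here is a minimal sketch of the remote-PPG extraction idea Dr. Demir describes, assuming you already have aligned face crops per frame. The region list, normalization, and the simple correlation check are illustrative stand-ins, not FakeCatcher's actual implementation.

```python
import numpy as np

def rppg_traces(frames, regions):
    """frames: (T, H, W, 3) uint8 video of an aligned face.
    regions: list of (y0, y1, x0, x1) face patches (cheeks, forehead, ...).
    Returns one green-channel PPG trace per region, shape (n_regions, T)."""
    traces = []
    for y0, y1, x0, x1 in regions:
        # Mean green intensity per frame approximates blood-volume change.
        g = frames[:, y0:y1, x0:x1, 1].mean(axis=(1, 2)).astype(np.float64)
        g = (g - g.mean()) / (g.std() + 1e-8)   # normalize per region
        traces.append(g)
    return np.stack(traces)

def coherence_score(traces):
    """Real faces: one heart drives every region, so traces correlate highly.
    Deepfakes tend to lose this cross-region consistency."""
    corr = np.corrcoef(traces)                  # pairwise Pearson correlations
    off_diag = corr[~np.eye(len(traces), dtype=bool)]
    return float(off_diag.mean())
```

A FakeCatcher-style system feeds much richer spatial, temporal, and spectral PPG representations into a trained classifier rather than thresholding a single correlation score.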

Justin (05:19):
And we're a little off topic, because I'm gonna dive into this before we even get into the meat of the episode today. But how long will it be before AI can then create the stuff that you're using to detect AI?

Ilke (05:33):
So FakeCatcher is not new at all. Like, I think it has been more than five years since we actually first published FakeCatcher. And it's still not tricked, because the signals that we are using in FakeCatcher are, so I will go a little bit technical here, not differentiable. Not differentiable means that those are very nonlinear, complex

(05:54):
equations that you cannot just try to learn.
Because they are not differentiable, you cannot just try to learn those signals with generative AI, because you need to backprop those signals, to propagate the error that you learn. When you propagate the error back in the network, you actually change the weights of the network, and that's how the

(06:14):
learning happens. Deep learningone on one hello! So because
those relations that you aretrying to learn are super
complicated non linear relationsyou cannot try to exactly learn
the PPG signals that I mentionedthe blood flow signals that I
mentioned and because of thatFake Catcher is not yet. As a

(06:37):
scientist I cannot say that itwill never be tricked it's like
forever like that of course notbut yet.
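A tiny PyTorch sketch of the backprop point she is making: if a quantity in the loss path has no useful gradient, the error cannot flow back to update a generator's weights. The step function here is an illustrative stand-in for a non-differentiable measurement, not FakeCatcher's actual signal math.

```python
import torch

# Differentiable path: useful gradients flow, so a model could learn to match it.
x = torch.randn(8, requires_grad=True)
smooth = torch.sin(x).sum()
print(torch.autograd.grad(smooth, x)[0])   # nonzero gradients

# Non-differentiable path: torch.sign is flat almost everywhere,
# so its gradient is zero and backprop carries no learning signal.
y = torch.randn(8, requires_grad=True)
hard = torch.sign(y).sum()
print(torch.autograd.grad(hard, y)[0])     # all zeros: nothing to learn from
```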

Justin (06:43):
Right, right. But at this point, this is solid and reliable, and again, thank you. Because, I mean, honestly, in our industry, this is kind of what we combat all the time. And, you know, at some point, when do we just throw our hands up in the air and say, I don't know, they won? Right? So I love having people like you who are fighting the good fight.

(07:04):
We have to be more proactive. Wehave to be more on offense
instead of playing this world ofdefense that we do a lot of
times in in cybersecurity. SoDoctor,

Mario (07:14):
is there certain hardware that has to be involved with this, or can this be, you know, pretty universal?

Ilke (07:23):
It's pretty universal. It can run on everything. It can run on CPU. It can run on GPU. Based on the benchmarks, we have shown that we can run it in real time on CPU-GPU systems, etcetera. But even if you only have CPUs, we have shown that we can run two hundred and eight

(07:47):
video streams at the same time on one high-end server CPU, which is basically running real-time detection on 200 videos at the same time, which is very nice.
And the reason that it is that parallelizable and that efficient is that the network part, the deep learning part, the neural network part that we use, is a very

(08:08):
lightweight neural network andthe whole emphasis and the whole
strength of the algorithm comesfrom the biological signals,
which is not that very heavymemory and time consuming
network part. Because of that,we can actually, run it very
efficiently and in real time.And we actually release it as
the very first real time deepdetection platform.
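As a rough illustration of why a lightweight per-stream model parallelizes well across CPU cores, here is a minimal sketch with a process pool; the stream count matches her benchmark, but the `score_stream` cost model is hypothetical, not Intel's benchmark code.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def score_stream(stream_id: int, n_frames: int = 300) -> float:
    """Stand-in for cheap per-stream inference: most of the work is
    signal extraction and a small classifier, not a huge network."""
    rng = np.random.default_rng(stream_id)
    features = rng.random((n_frames, 64))   # fake per-frame features
    trace = features.mean(axis=1)           # lightweight 'signal' step
    return float(trace.std())               # stand-in authenticity score

if __name__ == "__main__":
    # One worker per core; 208 independent streams keep every core busy.
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(score_stream, range(208)))
    print(len(scores), "streams scored")
```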

Mario (08:28):
Oh, very nice.

Justin (08:30):
And I don't even need to say this at this point, but obviously you have a PhD in computer science from Purdue University. We're not even through the introduction yet, and I, like, can't stop asking questions. Impressive stints in your career at Facebook, Pixar. We've already mentioned Intel. Well, Intel Studios.

(08:51):
Is that the same thing or isthat different? That's the same
thing.

Ilke (08:53):
That's Intel, but that is a different part of Intel that was doing 3D movies, AI and VR productions that go to film festivals and stuff. So very, very cool things.

Justin (09:03):
That makes sense, and that is cool stuff. I mean, god, you bridge the gap between cutting-edge technology and human-centric novel algorithms. Tell me what that means. I have to, like, ask you what everything means.

Ilke (09:16):
Yeah. You know, like, everyone is talking right now, like, oh, the best generative AI model, the AI algorithms, they're all there, cutting-edge technology, etcetera. But our mindset is a little bit different, in the sense that we want everything to be human-centric. We don't want those, like, black-box algorithms, in the sense that you want to

(09:39):
create something and you givevery little input, which may be
a text input or some imagereference, etcetera, then you
don't have control for the restof the algorithm.
We don't want it to look likethat. We want to bridge the
traditional creation workflowswith the cutting edge technology
as much as possible. Justimagine you are doing, let's

(10:00):
say, 3D modeling, right? In 3D modeling, there have been so many tools that are empowering the creativity of the modelers, of the VFX artists, of the lighting teams, etcetera. But if you look at the novel AI algorithms, it is mostly about

(10:20):
how automated everything is, without much control, without much guidance, etcetera.
So we are trying to still create so much novel, cutting-edge technology, but we want the human part of it, not just for editing and control, but, you know, the human values of it, to still persist as it exists in the traditional systems.

Justin (10:42):
Gotcha. Amazing. So beyond this, beyond all this technical stuff, you're also an ACM Distinguished Speaker. Tell me a little bit about that.

Ilke (10:53):
Yeah. So that's a very cool program. ACM is the Association for Computing Machinery. That's one of the biggest, like, computer science industry associations, not just computer science, but, like, general engineering. Like, you know, most of the conferences that we go to, or the journals that we publish in, are actually ACM conferences and ACM

(11:13):
journals.
So ACM has this program called Distinguished Speakers, where if you want to have some lecture or keynote or panel or talk given on a particular topic, you can actually request that from ACM, and ACM sends us there. Like, ACM says, okay, this is a very impactful lecture, you can go and talk about deepfakes at

(11:36):
Montana University, which is a true story. I went to Montana. Yes, it was very cold, but it was a brilliant crowd of students there. Anyway, so yeah, whoever wants, not just me, there are many industry experts there, from academia, from industry, a mix of them. Just look at the topics that you are interested in and ask ACM about sending those experts to your institution,

(11:59):
your university, and they will be there. And the university or the institution only handles the local accommodations, the travel, etcetera. ACM handles all the rest. That's the beauty of

Justin (12:12):
it. Gotcha.

Mario (12:13):
How many of those have you done?

Justin (12:15):
I was just gonna

Ilke (12:15):
ask that. How many? I have been an ACM Distinguished Speaker for three years. I did more than 10. I think, if I count the ones that I did on Zoom during COVID, it's even more than 20.

Justin (12:29):
Wow.

Ilke (12:29):
Yeah. So it's a really, it's also a good way. Sometimes it is for very early researchers, very early students, and it's not only about the technical topic, but it's about your life journey, you know, like how you actually became who you are, or what are the current challenges

(12:52):
that I can guide them on, etc.And of course, like the wide
range of topics that we talkabout, like the three d movies,
etc, that's one of the lectures,like traditional generative
models, that's another lecture.Defects and defect detection,
that's another lecture.
So it's really a good universalit's not just US, by the way.

(13:12):
I even gave talks in Africa through ACM, etcetera. So it's everywhere around the globe. If you are looking for experts to share their journey, share their technical expertise, etcetera, you can just request them from ACM, basically. Nice.

Justin (13:30):
You know, this introduction, it's not a normal introduction, right? Because usually when I'm introducing a guest, I just read stuff. And that's because I usually understand it. I am in that perfect scenario where I'm in a room of people smarter than me, which is where I always want to be. So I've got one more line of your intro I'm going to read,

(13:51):
and it's about your passion.
You're passionate about advocating for the bright side of AI and content provenance, actively working to protect creators' rights in the digital age. This is huge. Right? So with everything I've got, thank you not only for being here and talking to us today, but thank you for what you do. This is

(14:11):
something that the worlddesperately needs and I greatly
appreciate it sincerely.
So with that, Doctor Demir, we are going to get started, as we're now fifteen minutes in. So we're gonna start by just understanding: what is a deepfake? I don't know, I'm sure

(14:33):
everybody's heard about it, but let's just for a second pretend this is brand new. I'm ignorant. I lived under a rock my whole life, and I have no idea what you're even talking about when you say deepfake. What does it mean?

Ilke (14:46):
So deepfakes are this fake content, which can be image, video, or audio, where the actor or the action of the actor is not real. So me saying things that I have never said, or taking Justin's face and putting it on my face, so that the things that I say

(15:06):
are coming from Justin, etcetera. All these different algorithms that are creating this not-real content are called deepfakes. Well, can't we just call them fakes? Yes. They're legit.

Justin (15:18):
I asked that actually. No. Thank you. But

Ilke (15:24):
those are mostly made with deep learning algorithms and complex neural networks. So that's why they are actually rebranded as deepfakes. So if you just smudge something in Photoshop, it's not a deepfake. Some people call it a shallowfake, which is also a new term, because now we have deepfakes. If we didn't have deepfakes, they would be just fakes.

(15:45):
Now they are shallowfakes. Anyway, but, like, deepfakes are usually made with deep learning algorithms. And there are several main networks that are creating deepfakes. So autoencoders, or variational autoencoders, is one of them. Generative adversarial networks, GANs, is another one. And right now, diffusion models are another one that is

(16:10):
coming, you know, like Stable Diffusion, Midjourney; all of them are using these diffusion models for the creation. And all of these are creating these highly believable, incorrect, like, unreal videos.

Justin (16:24):
So as a business owner, I mean, why do I care? And, more specifically, do you have any real-world examples of how this has impacted

Mario (16:36):
businesses?

Justin (16:38):
Small businesses, that's who we're normally talking about. But business at large, what's happening out there?

Ilke (16:45):
Yeah. So, small or large, there have been many instances where companies faced financial, reputational, political, employee-related, information-centric, etcetera, damage

(17:06):
due to deepfakes. One that I think many people heard about: there was a CEO or CFO in Hong Kong where, with a voice deepfake, someone said, okay, I'm an investor or employee or someone, you need to transfer this much money to that place.

(17:28):
And because the voice was super believable, they actually sent, like, millions of dollars to that bank account. And suddenly, they are done.

Justin (17:36):
Which, you can't get that back. In most cases, that money is gone forever. Right?

Ilke (17:40):
Yep. Yeah. Yeah. Because it's with your own, like, no one actually forced you. You did it yourself. It's basically scams and frauds. Right?

Justin (17:48):
Sure. Yeah.

Ilke (17:49):
Another way is political misinformation, like trying to change public opinion. Yep. I don't know whether I should say it, but, you know, right now even the credible sources, like the White House, are tweeting some photos, etcetera. They can be deepfaked, obviously, or not. So we actually

(18:11):
work with different human rights organizations. One of them that we work with, they are collecting all these high-risk, high-vulnerability cases around the world, from many journalists, many individual journalists or news organizations, and they are bringing them to us to find out whether they are real or fake. So there have been elections in

(18:33):
Georgia, there have been elections in some countries in South Africa, there have been some cases in India, and all of these deepfakes are actively being used either for defamation of the other candidates, or for making some public figures' accomplishments, like non-existing accomplishments,

(18:55):
appear as if they have happened, etc. So that climate is another one. And unfortunately, the biggest part, and it has been like that since the beginning, since 2019, I think that was the very first market research that people were doing about deepfakes, is adult content. So I think in 2019,

(19:20):
they found out that 97% of all deepfakes were adult content.
Now it is like that too. There are many legislations passing right now; Take It Down is one of them. I don't know whether you heard about it, but Take It Down basically punishes, and takes it down immediately, if there is non-

(19:40):
consensual adult imagery or videos done with deepfakes and stuff. And that's very new. I think they passed it last week or something.

Justin (19:49):
Oh, really?

Ilke (19:50):
Yeah. No. They passed it in the House, I think, last week, and now it's going to the president, something like

Justin (19:55):
that.

Ilke (19:56):
Yeah, so unfortunately, that's one of the biggest uses. And there is a lot of other research out there looking at the percentages, like how much of the deepfakes are in that domain, what is the gender balance of those, and of course it's mostly women. Then, unfortunately, there are

(20:18):
also children, which is very, very concerning. Anyway, so yeah, that's another industry. And specifically for small businesses, you know, your VPs, your executives, your, like, C-suites can be deepfaked very easily. Essentially, whenever we are facing a new customer, a new

(20:40):
company, our demo includes deepfaking their CEO or someone, and that's why...

Justin (20:48):
I should have had you do a deepfake on me that we could have used as an example. Cool. Honestly, I'm gonna make that request now. I could, because I can put stuff in, maybe at the end, or I'll splice it in. But if you could do that, if I could ask that of you, you can say no. Yes.

Ilke (21:06):
Send me just one photo and I will do it, and I will send it to you. Just one photo is enough.

Justin (21:11):
Absolutely. We'll splice that in right now, Mark. Okay. Back on track here. That's phenomenal information, and it scares the shit out of me. I'm not gonna lie.

Mario (21:23):
Now, Doctor Demir, I know you led the development of FakeCatcher. Can you tell us a little bit about how you were able to identify deepfakes with this?

Ilke (21:34):
Yeah. Of course. So, as I said in the beginning, FakeCatcher is looking at the blood flow, and that signal that we use, called photoplethysmography, or remote photoplethysmography, is very similar to what you have in your Apple Watch, actually. So the Apple Watch is physically looking at your skin and trying to understand the blood flow. Remote photoplethysmography, which is what FakeCatcher uses,

(21:56):
is looking at the video and trying to understand that. Now, if we were to just find out, okay, your heart rate is like 75, now 78, etcetera, that is a harder problem, and that is more noisy. But what we are trying to do is correlate those signals throughout your face, so that the blood flow is coming

(22:19):
from one heart. It's not like multiple hearts popping up everywhere

Justin (22:22):
on your face. Okay.

Ilke (22:24):
And we do it in spatial, temporal, and spectral domains, which is looking at all properties of those signals. And the neural network on top of that is to make it more robust. I think you can see there are some color or brightness changes when I move from the camera, so that actually affects those signals. Or if there are some occlusions, you know, the

(22:46):
heartbeat on my hand and my face may be slightly different because of the color change, etcetera. So, to be more robust, and there's compression, it's never 4K, videos can be very compressed and stuff.
So, to be robust against all of those, we are actually training a neural network on top of that, so we actually accommodate for those inconsistencies. And at the end, we find out whether it

(23:10):
is real or fake, with 96% accuracy on the FaceForensics++ dataset. And there are many other datasets and benchmarks that we have run with FakeCatcher to show that it is robust against different skin colors, it is robust against different compression levels, it is robust

(23:31):
against, like, many changes that can happen to a video, basically.
Wow.
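For a flavor of the spectral-domain check she mentions, here is a minimal sketch, assuming evenly sampled per-region PPG traces like those in the earlier snippet: a real face should show one shared dominant frequency in a plausible heart-rate band across regions. The band limits and the plain FFT are illustrative choices, not FakeCatcher's exact features.

```python
import numpy as np

def dominant_freq(trace, fps):
    """Strongest frequency (Hz) in the 0.7-4 Hz band,
    roughly 42-240 beats per minute."""
    spectrum = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return freqs[band][np.argmax(spectrum[band])]

def spectral_consistency(traces, fps=30):
    """Small spread of per-region dominant frequencies suggests one heart;
    a large spread is a hint that the video may be synthetic."""
    peaks = [dominant_freq(t, fps) for t in traces]
    return float(np.std(peaks))
```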

Mario (23:36):
And how did you guys come up with this idea, to use this to identify it? Like, that's genius. How did you come up

Justin (23:45):
with it?

Ilke (23:46):
That's my idea. Sorry. Yeah. Like, FakeCatcher is my baby, so I'm, like, super proud of that. So I was working on generative models before it was cool. I have been working on generative models for, like, fifteen years or so, which, at that time, was mostly traditional generative models, but still trying to understand

(24:07):
the priors within data, to fit context-free grammars or L-systems to the data so that they are language-like and procedural, so we can create more and more and more data, which is generative models, right? So anyway, I always had that eye to look at the priors in data, or, like, rules in the data, or systematic procedures that create that data. Now, when

(24:28):
deepfakes were first coming out, it is the output of a generative model. So there should be some authenticity, some priors, some hidden signals inside that data that we can still dig out and depend on. And at that time, I saw that paper from MIT, from Bill Freeman's group, about remote photoplethysmography,

(24:50):
basically, the PPG paper.
Up until that moment, it was being used for remote patient monitoring, medical applications. Like, the main video that they show is, there's a baby, and the baby is changing color due to its blood flow, and you can see that it's

(25:12):
actually breathing based on that color change, etc. So that MIT PPG paper was like an aha moment for me, saying that, well, yes, this is a signal that we can depend on, this is very hard to replicate, etc. Then I called my colleague, Doctor Umur Ciftci, who is an expert on PPG signals, and said, I have a few

(25:34):
experiments on this, I'm super curious, let me do that. And he was very collaborative, and he actually said, yes, yes, that's my domain, I can actually do that. Then even our very first experiments, which were not using deep learning algorithms, but just using, like, SVMs, support vector machines, etc., to understand the feature space of those signals, were very

(25:54):
promising. We have seen that if we use PPG signals in a correct way, we can actually get over 97% accuracy in saying whether it's real or fake, for the pairwise separation problem. Anyway, and then we said, this is great. Let's just evaluate the hell out of it. Let's create something

(26:15):
that is useful, not just right now for current deepfakes, but for the future too. Because at that point, deepfakes were just emerging, and everyone was looking at what is wrong with those videos. Like, are there boundary artifacts?
Are there symmetry artifacts? What artifacts exist so that we can find them? Looking from an authenticity perspective,

(26:37):
looking at what is real in that video, was really underexplored. It's still underexplored, in my opinion. And when you look for something that is not replicatable in the authentic video, you are actually making it much more generalizable for future generations, so that instead of, like, trying to fix what is fixable, trying to depend on

(26:57):
what is fixable, you are depending on something that is not replicatable, which, like, the next generative model, the next GAN, the next diffusion model still cannot use.
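Her description of those early pre-deep-learning experiments maps naturally onto a few lines of scikit-learn. This is a minimal sketch, assuming fixed-length PPG feature vectors per clip; the synthetic data and features are stand-ins, not the actual study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical dataset: one PPG-derived feature vector per video clip,
# labeled 1 for authentic video and 0 for deepfake.
rng = np.random.default_rng(0)
X_real = rng.normal(loc=0.8, scale=0.2, size=(200, 32))  # coherent signals
X_fake = rng.normal(loc=0.1, scale=0.4, size=(200, 32))  # incoherent signals
X = np.vstack([X_real, X_fake])
y = np.array([1] * 200 + [0] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)      # classic SVM baseline, no deep net
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```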

Mario (27:09):
Okay. And so, like, day to day, a business owner like myself, what could I do to somewhat protect myself? Like, you know, we do these podcasts all the time. So, like, my face, my video, my voice, you know, it's on there. It's on the web. You know? What can a business owner do to kinda

(27:33):
protect themselves, at least a little bit?

Ilke (27:35):
Yeah. I can easily take one of your podcasts and create you, like, saying things: oh, Ilke was the best guest we had ever had.

Mario (27:43):
I'll say that anyway. Simple.

Justin (27:45):
I have

Mario (27:45):
to say that anyway.

Justin (27:47):
You don't need to fake that. Right.

Ilke (27:51):
Yeah. So there are several protection mechanisms by which you can protect your content. And you can also always be aware of what is real, what is fake. Looking at the video, there are some things, if it is a live call, for example: you can make them, like, occlude their face, you can make them deform

(28:14):
their face, you can make them, like, change the lighting and stuff. Those are, like, three of the very first things that you can do on a live call.
If it's not a live call, not live visual feedback, you can always run one of those deepfake detection algorithms. FakeCatcher is not the only one. We have eye-gaze-based

(28:36):
detection, we have motion-based detection, we have some multimodal detection algorithms that are looking at my motion and my voice. When I'm moving in a certain way, my voice is changing in a certain way. So that's actually giving you a really nice correlation output for us to understand whether something is real or fake.
You can try to understand such things from your own human, manual

(29:00):
inspection, or use those detectors. To proactively protect, there are several algorithms that we developed. For example, My Face My Choice is one of them. If you don't want to appear in a photo, you can apply My Face My Choice, and then the expression, what you say, you know, like if you have your tongue out,

(29:23):
your tongue, everything will be there, but your identity will not be there. We have My Art My Choice, which I think we can talk about in detail, but that is protecting content from being stolen by generative models. We have My Voice My Choice: if anyone wants to replicate your voice using an audio clip, if

(29:45):
that audio clip is protected, the generative model output will be super noisy, super bad, like you can't even listen to it for more than a couple of seconds, etcetera.
So all these proactive algorithms also exist out there. And we are not the only people publishing those. The University of Chicago has several such algorithms that are protecting style or protecting concept. You know, like,

(30:08):
you say create a cat, and it creates a dog. So if you don't know what a cat is, you can assume the dog is a cat. Anyway, that's another topic to discuss.
But anyway, so these approaches exist. And in a specific domain, you can also do more specific things, or there are more inspection methods that I teach in my

(30:31):
trainings, which I did several times in different countries, for different demographics: in Vietnam, in Korea, in the US. I give such trainings too. So if you have a bad butt problem, sorry, a bad deepfake problem, here is the person.

Mario (30:51):
Okay. And then tell me, what is content authenticity and provenance? I guess it's also known as C2PA.

Justin (31:03):
Yes.

Mario (31:03):
Is that correct?

Ilke (31:04):
Yeah. So, C2PA is the Coalition for Content Provenance and Authenticity. And C2PA is where the brightest minds in this sector are coming together to create standards for provenance. So, when I say provenance, sometimes people are like, pro, pro, pro. So provenance, which I say very fast,

(31:26):
unfortunately, for some reason, is the origin and the source of any content. So let's say your favorite cousin is sending you a video, and you have no idea who created it, how it was created, what it was created with, which tools were used to

(31:47):
create it, is it a combination of my video and Mario's video, etc. So all of this is actually creating the provenance information. The moment a piece of content is created, maybe image, voice, or video, text, font, 3D model, you know, like

(32:07):
any kind of data. The moment it is created, there's a creator, there's a way it was created, there are ingredients, if they exist, or there are some pre- or post-processing approaches that are applied to it. The provenance manifest containing all that information is what the standards by C2PA are built for.

(32:29):
So it's a coalition with Microsoft, Google, Truepic, BBC, Sony, many steering committee members for C2PA. And very recently, actually, the new version of C2PA is out there, I believe, or about to be out there, or by the time this

(32:51):
podcast is out, it will be out. That timeline. And you can check it at c2pa.org. And if you are doing anything with content, it doesn't need to be like you are a content creation company; if there's any content that you put out or you ingest, or it's a platform with, like, an upload-

(33:14):
content functionality, then it's good that you check C2PA. Because it's not just some standard tool that is building this trust in online content, digital content, but it is also hopefully becoming a part of legislations and laws, etcetera, so that everyone will know how content is created, basically.
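For a concrete feel, here is roughly what attaching content credentials can look like with the open-source c2patool from the Content Authenticity Initiative. The manifest fields follow c2patool's sample manifests, and the file names are hypothetical; treat the exact keys and flags as a sketch and verify against c2pa.org and the current c2patool documentation.

```python
import json
import subprocess

# Illustrative C2PA-style manifest: who generated the claim and what was done.
manifest = {
    "claim_generator": "UnhackedPodcast/1.0",
    "title": "Episode 52 master",
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {"actions": [{"action": "c2pa.created"}]},
        }
    ],
}

with open("manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)

# Sign and embed the manifest with the c2patool CLI (hypothetical file names).
# c2patool can sign with bundled test credentials; production use requires
# a real signing certificate.
subprocess.run(
    ["c2patool", "episode52.mp4", "-m", "manifest.json",
     "-o", "episode52_signed.mp4"],
    check=True,
)
```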

Mario (33:38):
Okay. Now, one of the companies that you mentioned was Truepic. Mhmm. I have a very good friend of mine who actually works for Truepic, and, you know, every time I see him, we have a beer and discuss this stuff, and we just nerd out a little bit. Tell me what you know about them.

(33:58):
Like, I know they're coming out, or they've come out, with, like, a special chip that can be installed in cameras, like iPhone cameras or Android cameras, that can help with identifying, you know, if a video is authentic or not. Can you tell me a little more about that?

Ilke (34:17):
Yeah. So, they are one of the first adopters of C2PA for cameras and visual content manufacturing, in the sense that the moment photons from an object or from a scene hit the camera lens, that's where the content capture is starting. And from there on, maybe there is some post-

(34:41):
processing in the camera, you know, like denoising or auto-brightness or something like that; then the camera lens is maybe a special lens; and then the camera owner and the camera ID, you know, like the camera hardware ID, etc., all of those are a part of the provenance. So that, in the

(35:03):
very edge case that you want to use that photo as court evidence, all of this origin information will prove that it is authentic: because it's that camera, it's that lens, it's that preprocessing, etcetera. So Truepic is, like, one of the first or the very first company that actually implemented C2PA on-device, so that you can have that provenance information for any content that you capture

(35:26):
with cameras.

Mario (35:28):
And do you think this is eventually gonna be a standard on every smartphone and device that is produced, or do you think the industry is looking for something else to do this?

Ilke (35:40):
That's the motivation. That's the idea. Several governments are actually supporting C2PA, in the sense that, like, yes, this should be the proven standard, we should follow this way. Several companies already implemented C2PA. It's not just for cameras, it's for generative AI methods too. It's not probably the whole ownership information, the whole provenance

(36:05):
information; there are bits and pieces that are coming. So, for example, if you upload an image or video with provenance information, with content credentials, to Facebook, it will show the content credentials. Same for LinkedIn. So if you upload an image or video with content credentials,

(36:27):
you will see that. Adobe is one of the main contributors to C2PA. So most of the Adobe tools, the creator tools, the Creative Suite, fully support C2PA, in the sense that whatever you do with the content, whatever you do with the image, maybe you're a digital artist, what you create is completely within the tool, etc. It actually helps you write those update manifests

(36:51):
or create those C2PA manifests, so it is fully known what happened to the content. It's just like a, you know, tree of life for content, basically.

Mario (37:02):
Okay. And again, what could a small business owner do to kind of keep their content, like, authentic? Is there anything you... Yeah,

Ilke (37:12):
so I would say implementing C2PA is absolutely a part of it, especially if you are creating high-value content. Like, if you are a studio, if you are a Hollywood producer or something, have that C2PA manifest for every piece, not just for the end product, not just for the movie. But, you know,

(37:34):
there are voice artists who want to protect their voice with C2PA, so add the provenance information in C2PA; or if you have specific subtitles, something specific that is your work, you can have a C2PA manifest for it, you know, all pieces of those. If you are somehow creating content, owning content, just check C2PA, try to have content

(37:57):
credentials for your content, so that whoever is consuming your content will know that it comes from you.

Justin (38:04):
Can you just walk me through that? I browsed the C2PA website. If I wanted to go through this process, is it complicated? Is it difficult?

Mario (38:16):
What does that look like? Expensive?

Ilke (38:18):
Yeah. C2PA wants to be as inclusive and as accessible as possible. So there are some implementations that are baked into software. That doesn't mean that you must use that software to have content credentials; there are, like, free websites where you can create or read out the content credentials, and then you can

(38:40):
actually do that. There are also companies coming up that will be supporting C2PA creation and protections. Just be on the lookout for those. Also, C2PA wants to be as accessible as possible. So

(39:02):
the protection of those C2PA manifests is mostly, like, cryptographic: they are cryptographically signed, and the manifest itself lives in a blockchain. However, if you are in a part of the world that has no connectivity and you are mostly offline, you can still sign your content with C2PA, still have that signed

(39:24):
manifest embedded as a part of the content, so embedded inside the content. When you have connectivity, you can actually make it a part of the chain again; or, if you never have connectivity, you can still have it signed and embedded in the content, so that you have a C2PA manifest and you can still prove that it's your content, basically.

Justin (39:45):
And is this similar to, or is it connected to in some way, what you mentioned before, the My Art My Choice, My Face My Choice? Are these overlapping, or is that something completely different?

Ilke (39:56):
The motivation overlaps. So C2PA wants to enforce and make it super transparent that the piece of content belongs to you, which is provenance. And My Art My Choice, My Face My Choice, My Voice My Choice, these are protecting the content technologically, so that if someone, some generative

(40:18):
AI model, wants to steal it or recreate it, then they cannot, because it breaks down. So C2PA is the written, transparent way that says, okay, this video belongs to Justin; and My Art My Choice learns to create a different version of the video,

(40:39):
so that when I upload that video to, let's say, Stable Diffusion, saying, okay, create this video in a way that Justin says this and that, the output is very broken, very noisy, doesn't look like you at all, doesn't look like any plausible video at all. So one of them is protecting the provenance

(40:59):
information, through cryptographically signing it and storing it in very secure mechanisms. The other one is creating the content itself so it cannot be replicated.

Justin (41:12):
Okay. So I'm gonna maybe expose my ignorance on all of this stuff. But I have this wildly popular podcast called Unhacked. And I'm really worried about people taking my face, my video, my voice, and Mario's, and Brian's, and yours, and exploiting it somehow. What should I do? Like, what's step one? Right now, today, when we get off

(41:34):
of this, what do I need to go do to protect this podcast?

Ilke (41:39):
Take My Art My Choice. Apply My Art My Choice to the video. So whoever is crawling the web for that video, they cannot use it for creating a derivative, or getting one part and changing that part, or stealing my face or my voice from that video and creating something else, etc. So My Art

(42:00):
My Choice is an adversarial attack on generative models, which means taking the content. It's a generative model itself, by the way. My Art My Choice is a generative model.

Justin (42:11):
That's what it sounded like. Yeah.

Ilke (42:13):
Yeah. Yeah. So it takes the content. It learns a protected version of the content. Okay? Which looks very similar when we look at it, when we listen to it, when we, like, interact with it. It's almost the same, very negligible changes. But when the protected version is given to be replicated or deepfaked, or diffusion models go to work on it, etc., it

(42:36):
breaks those models. So it's actually an adversarial attack on generative models, such that the output doesn't look like us. The output is not replicating our style, it's not replicating our content, etcetera.

Justin (42:49):
I love the way you word that.

Mario (42:51):
Using this, can hackers go around this? I'm sorry, Justin, I'll cut you off.

Justin (42:55):
No. Go. Hackers.

Mario (42:55):
Hackers can't, like, I mean, the hackers don't abide by the law, by the rules. Right?

Justin (43:02):
But it breaks it, is what she's saying. That's what I love. I was gonna say it's almost like encryption, but I don't know that that really fits, kind of. But it, like, does something so that it knows how the generative model is going to try to fake it, and breaks it. Right? That's what you're saying?

Ilke (43:18):
Exactly. Exactly. It is embedding signals in the video so that when a generative model tries to learn to generate it, or edit it, or move things, etc., it actually pushes it away from the original content as much as possible. And that push-away is controlled by pushing away the style, pushing

(43:39):
away the content, and increasing the noise as much as possible. So the output, we want it to be super noisy. That's what I mean by breaking the model.
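A minimal PyTorch sketch of the adversarial-perturbation idea she describes: optimize a small, nearly invisible change to an image so a generative model's output on it degrades. The toy untrained autoencoder and the single reconstruction loss are illustrative stand-ins, not the My Art My Choice method itself, which is a generative model trained to produce protected versions.

```python
import torch
import torch.nn as nn

# Toy stand-in for an image-to-image generative model.
generator = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
for p in generator.parameters():
    p.requires_grad_(False)            # attack the input, not the weights

x = torch.rand(1, 3, 64, 64)           # original content
delta = torch.zeros_like(x, requires_grad=True)
eps = 0.03                             # keeps the change nearly invisible

opt = torch.optim.Adam([delta], lr=0.01)
for _ in range(100):
    protected = (x + delta).clamp(0, 1)
    # Maximize the model's reconstruction error on the protected input,
    # pushing its output away from the original content.
    loss = -nn.functional.mse_loss(generator(protected), x)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)        # epsilon ball: x' stays ≈ x to humans

x_protected = (x + delta).detach().clamp(0, 1)
```

The published My Art My Choice work additionally steers the perturbation with style and content terms, matching her "push away the style, push away the content, increase the noise" description.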

Justin (43:48):
Yeah. So what this would look like is: before I publish this to YouTube, I download it, I make all the edits, I do what I want, I get the finished product, and I feed it through their system. Is that right? Then it spits out

Ilke (44:00):
Yeah.

Justin (44:01):
And then it spits out a different video, not the one that I would normally download off of this platform. And I wanna say it's an encrypted video, but it's really not. Yeah. It's got land mines in it. Right? Yes. It's really, we're embedding land mines into the video that the generative models are just gonna hit and explode.

Ilke (44:22):
Yep. Exactly.

Justin (44:23):
Is that a good way to explain it? Okay. I have some homework, Doctor Demir. And I'm blown away. This is easily the most

Mario (44:34):
I kinda wanna see how it looks after it's broken. You
know?

Justin (44:37):
I'm gonna do it. I'm gonna do it. And I want you to, you know, if you will, do some deepfake stuff like we talked about before. I'm gonna try to go through this process, if I can figure it out, because like I said before, I am not the smartest person in the room. But I'm gonna try to run through this process and see if I can create a broken video to feed. Maybe on, like, a

(44:58):
B-side episode.

Ilke (45:03):
So the name of the algorithm is out, and the paper is published. So if you look for My Art My... not you, but everyone listening can look for My Art My Choice, and you will actually see the examples that are broken there already.

Justin (45:14):
Oh, perfect.

Ilke (45:15):
We have many examples in the papers.

Justin (45:18):
You just saved me some homework.

Mario (45:19):
Yeah. If we can actually have a couple links, or have the link...

Justin (45:24):
I will. Yes. Sure. Yeah.

Ilke (45:25):
Yeah. Of course.

Justin (45:26):
So, yeah. On that note, when listeners go to unhacked.live, our website, there will be a section that talks about you, Doctor Demir, and it has some of your links and stuff. But then I also embed it into the show notes. If you're listening on Spotify or Apple or wherever, the summary will have some links in there as well.

(45:48):
And I'll put as much of this stuff in as possible, because it's wildly fascinating stuff.

Ilke (45:53):
Thank you.

Justin (45:53):
It's rare that I'm speechless, but I'm speechless. I'm terrified. I'm excited.

Ilke (46:01):
We can embed some speeches with deepfakes. Oh,

Justin (46:05):
god. And she's a comedian on top of being brilliant. But so true. Wow. Yeah. Like, I legitimately am speechless, but we're gonna go ahead and wrap this up, because honestly, I could geek out on this all day long. I'm really just gonna kinda follow you around and watch you work for the next five or ten years

(46:26):
so that maybe I understand any of it. But I'll say again, thank you for what you do. I love that there are people that have this passion, this drive, this mission to protect us, because this world's crazy and it's only getting crazier. With that, guys, we're gonna wrap up like we do at the end of most of our shows.

(46:48):
I wanna kinda go around the room, and just tell me what your primary key takeaway is. If somebody listened to only this part of this episode, what would you want them to know? And Mario, I'm gonna start with you, and then Doctor Demir, if you'll go ahead and give me yours, then we'll wrap up.

Mario (47:04):
Yeah. I mean, for me, the key takeaway with this is, just like every other episode that we've talked about on Unhacked, you have to take the proper precautions to protect yourself, protect your company, protect your identity, and not just think that it will not happen to you. You have to be proactive in protecting

(47:29):
yourself, you know, using some of the sites that Doctor Demir mentioned. And if you have content, if you have information that you need to protect, be proactive and take a couple steps and do it, because you can't just think that it's never going to happen to you. Not just protecting your actual hardware

(47:52):
and network, but protecting your identity, protecting your online presence.
This wraps perfectly into what we've been saying week in and week out for a year plus now with our podcast. So, Doctor Demir, not only do I wanna thank you for the work that you are doing for us, but I wanna thank you for joining us and taking

(48:16):
time to really educate us and our audience on this stuff. Because prior to this, I thought we were just all doomed. Same. But yeah. So thank you very much for spending the time to educate us.

Ilke (48:33):
Yeah. Thank you for the invitation. And it was really nice to talk with you. And now that I know what you are doing, it really overlaps a lot: not just hardware or network or software protection, but also your identity, your face, your voice, your image, your likeness, everything should be protected. So if they

(48:54):
are only listening to this portion of it, the first thing that I would like to say is: we are not doomed. There is so much work on the good side of AI, as much as on the bad side of AI, or the irresponsible part of AI. The second thing is,
if you are doing anything related to content, check out C2PA,

(49:14):
check out how you can add the provenance information to your data, so that everyone consuming your data, your media, your content, will know that it came from you. And maybe the third part is: do not believe everything you see online or hear online. Try to use your own judgment of context, your own

(49:35):
judgment of visual inspection. If you have control over how it is being enacted, like, make them deform their faces, or change their lighting, or their pose or something. And if you don't have access to those interactive situations, you can always use technical helpers like FakeCatcher, like gaze-based
detection, like model detection,like all of other detection

(49:59):
models that we built. And if youare also a content creator, in
addition to CTPA, also protectyour content, as Mario said,
proactively using my art mychoice, my face my choice, my
voice my choice, my body mychoice.

Mario (50:13):
Yeah. No. Nobody wants my body. Nobody's gonna duplicate that.

Justin (50:19):
No comment. I'm still kinda speechless. I have so much that I could say. Talking directly to our audience, which is small business owners: the theme that I have week in and week out is, we just have to be aware of

(50:40):
what's out there, and it is changing all the time. Find a way, dear business owner, to stay involved. I would love you to listen to our podcast every week, because I do think we do a pretty good job of just showing you what's out there, giving you some key takeaways. Find smart people like Doctor Demir, follow them, understand them. LinkedIn.

(51:00):
She's on other social media platforms. But we cannot live with this head-in-the-sand, it's-not-gonna-happen-to-me mindset. The ones that do that absolutely will get hacked. And as I like to say, once you've been hacked, you cannot, in fact, get
unhacked.
On that note, guys, go to unhacked.live. You'll see all of

(51:22):
our episodes, links to Doctor Ilke Demir, all of her content, all of her social media contacts. Thank you for being here, Mario and Doctor Demir. Guys, we're gonna sign off; we'll see you next week.

Mario (51:36):
Thanks, guys. Take care.

Ilke (51:37):
Thank you.

Justin (51:38):
Bye bye.