Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Welcome to Mediascape: Insights From Digital Changemakers, a speaker series and podcast brought to you by USC Annenberg's Digital Media Management Program. Join us as we unlock the secrets to success in an increasingly digital world.
Speaker 2 (00:22):
Anatoly Vidnitsky has one of the most brilliant names I've seen for a company, AI or not. Literally, that is the name of your company, and it's so simplistic, but it tells us so much. So, first of all, thank you. Fight On, fellow Trojan, here.
(00:45):
I'm so grateful that your PR firm reached out to put you on the show. And really, before we get into the whole AI discussion, which is a huge one, of course, whether it's in the world of business with clients or with my students at USC, I'd love to hear about your background, because you've been at the forefront of some great companies: unicorn startups, Amex
(01:08):
Ventures. So let's talk about that. What took you into this world? Was that an area that you were always interested in?
Speaker 3 (01:18):
Yeah. One, thanks for having me on, and Fight On. My journey into tech started actually at USC; I went to grad school at the Marshall School of Business. I had an amazing introduction there from professors who were VCs, who were entrepreneurs, so it was the exact background I was looking for, and it's the exact career that I had
(01:38):
post-grad from Marshall. I've worked at the largest credit bureau in the world here in SoCal, ended up at a unicorn startup and then, like you mentioned, a VC at a leading fintech firm. But what I saw was this new world of AI, specifically generative AI, and given what I know about how fraudsters use
(01:59):
technology, and all the previous stops that I had, I really thought that something needed to be done to fight against the dark side of generative AI. So I started this company, AI or Not, simple in name, hard in what we do, to protect exactly that. And it's to keep the good of generative AI for people who want to use it, for people who want to enhance their creativity
(02:22):
, but then stop the bad actors from using it for the dark things that they want to do. So that's our mission at AI or Not.
Speaker 2 (02:29):
Yeah, that's such a big mission and a complicated issue, particularly with your background. One of the things we talk to our students a lot about is data privacy, the different restrictions and laws around the globe, and how secure our data is or isn't. We've seen a lot of breaches in the world of fintech. Even companies like Equifax, that are supposed to be
(02:52):
holding our data very securely and helping us understand our credit scores and things like that, have had breaches. And then this new topic with AI is something even more complex, perhaps another entry point for those bad actors that you speak of. What are some of the lessons that you learned along the way
(03:14):
that you're taking into AI or Not?
Speaker 3 (03:17):
Yeah.
So for the record, I was not employed at Equifax.
Speaker 2 (03:21):
That was just an example.
Speaker 3 (03:23):
Listeners, don't be mad at me. I was one of the half of the US population who was affected by this as well. I got all the letters. So the generative AI side of our data, I hate to say it, in a lot of ways we did it to ourselves. We've posted online all of the videos that we've made, all the
(03:47):
pictures that we've shared and all the things that we've written on forums. All of those things are being used to train these AI technologies. Recently, personal opinion, the best AI video generator that I've ever seen was created by TikTok, and they even say, hey, we used about 2 million TikToks to train this. So thank you, world, right?
So in that way, on the generative side, the training
(04:10):
data is in a lot of ways user-generated content that we've produced over the years as just active internet users, active social media users. So this one is on us. The credit bureau stuff, that's on them, for sure, but this one's on us. But data privacy still plays a big role. There's a difference between using it to train, but not using
(04:30):
it for harm, versus using it on a one-to-one basis of, here's the things that Toli did, or here's the things that Anika did, or here's the things that can identify a person on a one-to-one basis, just like a social security number does. That's where you should draw the line. Like DeepSeek, the new Chinese open-source model,
(04:54):
that's a lot of the concern, because they're tracking you on a one-to-one basis and sending it to foreign countries, versus the policies of a lot of other AI companies, where it's used for general training but not to track you on a one-to-one basis. And then at AI or Not, we actually don't store or use any of our consumer content. So for any of our hundreds of thousands of users that have run millions and millions of checks with us, we actually delete it right on
(05:16):
the spot.
So I treat everything as if it did have personally identifiable information, even though most of it is anime and AI art and stuff like that, but I treat it as if it did and delete all of it. So on the training side, we are a part of it, for better or for worse. Parts of us are in those neural networks in a lot of ways.
(05:39):
But on the data privacy side, there are things that AI companies could do, should do and hopefully are doing to make us all safe.
Speaker 2 (05:47):
Yeah, and what you just shared about the difference between DeepSeek and other chatbot-type companies, that's a perspective I hadn't considered yet. So I appreciate you sharing that, because they have their policy very clearly stated; it's really easy to read, where with others you have to maybe dig through a little bit. I do love things like Anthropic, right? Because they have their
(06:11):
constitution, and I know that they're not going to use my data, where other platforms do. And a lot of these platforms have been using our data before we even knew that they were using it, and, depending on what country you're in, you can opt in or opt out. So those are all things that I know cause concern for me and for a lot of students that are in a digital media program, who might
(06:33):
still be new to digital media from this perspective and aren't comfortable using AI tools yet. So can you walk us through the process that people use when they come to AI or Not?
Speaker 3 (06:47):
Yeah, sure. So I think everything you said is so accurate. There are so many perspectives on it, and a lot of the tools and forums, even the word processors that we write in, by default our data is used for AI training. So you should, you know, check. Even on social media, and I'm not just talking about the
(07:07):
Meta platforms; even LinkedIn has AI training on by default. I'm not saying anything controversial, just stating facts. And in the case of DeepSeek, I did go through their terms and conditions out of curiosity. I still use it. I would never put a tax question in there, but I still use it for research. I would never download it on my phone either, but they will
(07:29):
track you on a location basis, how you type, the exact things that you type, the prompts that you make, and they do track it down to your location. So you're right, there's a lot to that.
At AI or Not, we have a few hundred thousand customers, over 250,000, including businesses and even governments who use us, and the use cases range so broadly.
(07:51):
So a lot of people are just curious about what they see online. One of the slogans we have is question everything, and a lot of people have that. You know, "don't believe anything that you read" was kind of the slogan of the internet. Now it's don't believe anything that you read, hear or see on video, almost all the senses.
(08:13):
You really have to question everything. So it's users who want to make sure, hey, is this actually happening, is this real or is this a scam? So there's a lot of that. A lot of artists, actually a lot of people in the digital media and creative space, want to know: is a piece of art AI or not? And on the business side, it's businesses protecting themselves
(08:37):
against risks coming from generative AI, whether it's fake IDs being generated with artificial intelligence, whether it's deepfakes of their CEOs or bosses trying to get a wire from them or from an employee, and the list goes on and on. So those are some of the breadth of use cases that we see, and we'd love to hear a little bit more about how your students are
(08:59):
thinking about it, and I'm happy to share anecdotes I have for them.
Speaker 2 (09:04):
100%. Would love some anecdotes.
That's always, I think, the most fun, because it sounds like there's such a variety of clients that you work with. A lot of the questions are around security, right? How private is my data going to be? We walk through, in the class I'm teaching now, how to go through your LinkedIn settings. We go through websites and cookie tracking, and the
(09:25):
differences in websites in different countries, and how you really do need to pay attention instead of just clicking yes to everything and letting it track, unless you don't care. But most of us care at least a little bit, right?
Speaker 3 (09:37):
Yeah.
Speaker 2 (09:38):
And I think they have concerns around people misusing it, the difference between using it as a tool and AI potentially being a job replacement. What jobs are going to be available? And then I also talk to them about the ethics behind who the people are who are actually labeling the data from the internet, even the stuff that we don't want to see
(10:01):
or hear, so that we don't have to go onto a platform and see really disturbing images; how much or how little they're paid; and what implications that has for their own mental health, in many countries in Africa where the people doing a lot of that labeling are. Yeah, so I get deep into it, because I
(10:21):
want everybody to understand, from an ethical perspective, these are things that are being done. But what can we do better? How can we make the future of generative AI better?
Speaker 3 (10:31):
Right, yeah, I think all of that is really accurate. There are some concerns. I love that you take your class through some of the privacy terms that you sign up for. In most cases, it's fine, it doesn't matter. But, and I'll keep picking on the DeepSeek example, it's the number one app in the app store, it's not fine
(10:53):
if you start asking it personal things, about decisions that you want to make, or, like in the case I had, questions about taxes. It's that time of year for all of us, and I probably wouldn't put any of that into a model like that. The privacy, like you said, you compared and contrasted one versus another, that's where it
(11:13):
really, really matters. As far as the ethics, I think there's a lot to it on both sides: what training data is being used, but also how is it being spit back out?
For example, in the world of art, say you're an artist and you have a very distinct style. I'm a huge fan of
(11:33):
Shepard Fairey; I have his book behind me. Actually, right here I have one of his pieces of art behind me on my desk wall. He's in LA. And if you're using his art as training and then spitting back out that same type, the same exact art, that's not right. If it's maybe mixed in with others, just as concepts, I can see it being okay to try to produce something net
(11:56):
new. But if you're reproducing an artist that it was trained on, I think that's where the line kind of gets crossed a little bit. I'll use another example. In about a week, Christie's, one of the largest art dealers in the world, is doing an AI-only art auction.
Speaker 2 (12:16):
And.
Speaker 3 (12:16):
I think that's really cool. What a cool opportunity, right, for new emerging artists to create net new art using these amazing technologies, and all of it is unique. All of it is incredibly innovative, you know, using all different techniques of prompting, in some cases robots making the art; it's really quite amazing. But it's all net new.
(12:37):
Versus if an AI is just spitting back out, regurgitating in a lot of ways, art that was previously made by a specific artist. I think that's where we're really crossing a line on AI ethics, if that makes sense.
Speaker 2 (12:51):
Yeah, and not to get even more complicated, but is this where things like blockchain come in?
Speaker 3 (12:57):
I think there is a possibility. So, funny enough, with the Christie's AI art auction you mentioned, you're actually purchasing an NFT. That's the way that they're doing it, because there are some funny things with AI art: if you just use a prompt, it's not considered copyrightable. So what do you even own? So, as an artist, I think there's gonna be a lot of
(13:18):
changes. Personally, I do think it should be copyrightable, because I think a prompt is like code: it's your IP, it's how unique you are. But you're doing a blockchain NFT on this specific piece of content, because even an AI won't produce the same exact thing twice. You know, just like writing something handwritten, you're
(13:38):
not going to write it exactly the same every single time. Same thing with an AI: the same prompt will produce two different pieces of art, or whatever you're trying to produce.
So I do agree. When you think about it, there's something very unique and you need it trustworthy. It really does scream blockchain use case.
Speaker 2 (13:58):
Wonderful. So what are some of the ways that people new to AI, students at USC or not, right? What are some of the things that they should feel comfortable using AI for? Or do you have recommended tools that you love? Because I know I have, you know, some that I love.
Speaker 3 (14:15):
Yeah. So here's the good part: we're all new to it, because the whole world is new, right? This isn't something we're all playing catch-up on; it's something we're all learning as a whole. So if anyone tells you, you know, they're an expert with like five years... this world has been around for 12 months, essentially. This whole space has been around for so
(14:37):
little time. And when you get into a lot of the newer use cases, it's even less. So the best way, I think, is actually just to dig in and just to play with it. Use it as a partner. Don't use it as a crutch, but as a partner. If you're thinking through something, I have this funny experience where just the process of prompting the AI
(14:58):
gives me ideas, because you have to speak to the AI super clearly. So sometimes just writing my own thoughts clearly to a bot, it's almost like the process of writing as clear thinking actually helps itself, and then you have a partner to do it.
So the ones that I recommend, I call them the big three for
(15:21):
general AI use cases, and I'm happy to go into which ones I like for other ones. So the big three for me are: Claude, which is an incredible writer and thinker, but as a writer really, really great. Perplexity, which I love as a research tool; it's essentially AI-enabled Google, so incredible
(15:42):
for research. If I want to get smart on a topic, or some huge report, or something that happened in the news, Perplexity is my go-to. And then finally, the OG, ChatGPT. It is really incredible at thinking, and the reasoning models are best in class. So if you're really trying to think through problems, it is
(16:04):
the best way. And actually I think they have a very underrated tool, and it's their audio tool. We have an AI or Not podcast, and I did one with ChatGPT's audio tool as my co-host. It was actually a really fun experience. So those are the three big ones.
Some other use cases, depending on where your students or people
(16:25):
in general are in their process. For images, Midjourney is incredible. It's the most realistic and, I think, the best for art as well. If you're trying to be creative and trying to create really cool pieces, Midjourney is the best. Ideogram, I think, is quite an amazing image generator as well, very realistic, and it does do real people.
(16:46):
So it can create some funny things, but it can create some not-so-funny things as well. It is incredible at producing realistic images and people, and even text in an image form. So I recommend that one. Another use case that is really emerging, and something that even at AI or Not we are investing in heavily,
(17:08):
is AI in software development and coding. So we use Cursor at our company, which is an AI-powered, basically a text editor for code, an IDE as they call it. And funny enough, it's actually Claude in the background, Anthropic's Claude model, and there are a lot of other tools in that space. But I think that's one of the highest value adds that my
(17:29):
business has seen for AI. So sorry for the long answer, but hopefully that's helpful for some of your users and students.
Speaker 2 (17:35):
Absolutely. And when you were conceiving this idea, how different was it than being an investor at Amex Ventures, than being an employee at Trulioo, you know, and all the other things that you did before?
Speaker 3 (17:49):
Completely different, because it's my company, for better or for worse. So every problem is mine. You know, at Trulioo I was an early employee, but I had a very specific set of responsibilities; I covered a lot of the business side, the business development side. At Amex, my job was to find amazing founders to invest in. So, also completely different, a different line of work. Starting from
(18:15):
scratch is quite a different journey than showing up at, you know, a Fortune 100 company, you know, with HR and amazing dinners and all that, or at a startup where you have, you know, an incredible founding team and investors already backing the company. It's also completely different versus starting from zero, starting from scratch.
(18:36):
Quite a different journey, for sure, but one that I felt really compelled to do, because I was seeing the things that were happening with generative AI. We were still in the world of six fingers, seven fingers, really wonky things, when I started this company in late 2023, and I knew it was only a matter of time. If anything, the history of technology is that it changes so, so
(18:57):
fast. We overestimate what can be done in a month, but underestimate what happens in six months, in a year, and AI, I think, has accelerated all of those things. So I knew it was only a matter of time before these tools are so good you can't tell the difference, and because of that, it's going to be used for bad. And, frankly, I did not think that there was enough being done to
(19:20):
protect people and businesses from that. And given my experience, spending a decade trying to find and fight fraudsters and KYC crimes and things like that, I was like, this is the next wave of this, and I think I should do something about it.
Speaker 2 (19:34):
With your business case, was it hard to find the right people to work with you, the right people to invest, if you were taking early investment, the right clients to start onboarding?
Speaker 3 (19:46):
Yeah. So a lot of our employees are really mission-driven. We have a team of AI researchers, and they're really passionate about the problem. One, it's a super challenging one, so for them it's an incredibly interesting problem set that we have, and we do multi-modality. My co-founder is someone I've known since third grade, so it was amazing,
(20:09):
shout out Johnny, John Nelson, he's my co-founder. It's amazing to start a company with a good friend of mine. And then on the investor side, early on, the pre-seed round was definitely investors who knew me personally, like ex-founders and seed-stage VCs who've known me for a long, long time.
(20:29):
And then recently, it was just announced a couple of weeks ago, we raised a $5 million institutional round. By then we had shown progress, and it was also people who had known me for quite a number of years. We started to show really incredible progress: here are the things that we're working on and building, and the roadmap is even more exciting. So having the advantage of the network from
(20:53):
my background was definitely helpful, but you still have to do the work. There's no way around it, yep, to continue to show progress and continue to, you know, fight the good fight. There's no way around it; no amount of slides will save you from that, and that's the fun part to me.
Speaker 2 (21:10):
Did you have a particular customer avatar when you started? Did you know, okay, we think this kind of business, and this kind of business, and maybe this other one will really need our services?
Speaker 3 (21:21):
I did, and it was a lot of the same types of groups I used to sell into for the majority of my career, like risk, fraud and compliance teams. What surprised me the most is the scope of the problem. I actually underestimated the number of avatars, almost the personas, that I'd have to work with. For example, this was a week ago: someone in the medical
(21:47):
field, I can't mention the company, but they're like, we've been getting AI-generated X-rays for insurance fraud, and they tested it. I was like, well, I've never personally tested X-rays in my own product, but have at it. And he's like, oh, you guys detect it, which is a really cool feeling. You know, fake X-rays for insurance fraud, dental X-rays too. I'm like, okay. So that's just an example of the far reach of
(22:09):
the problems this has caused. Like the devastating fires in LA: the image of the Hollywood sign being on fire, which was completely AI-generated, and I can't imagine the amount of insurance fraud that's being created with AI.
So we've seen use cases like that. On the audio side, it's voice, like fake phone call scammers recreating someone's voice. It takes roughly 20 to 30 seconds of someone's voice. So
(22:34):
you pick up the phone: hey, who is this? I'm sorry, I can't hear you. And it's over; they already have enough, just from that, and they can recreate your voice to say anything, which is an incredibly terrifying thought. And then even AI music as well, protecting artists and
(22:55):
protecting people from that. There are quite a few YouTube channels and other content being posted on the streaming platforms of people claiming an artist put out a new song when it's 100% AI, and I don't think that's fair from a number of different angles: monetary, brand and just, you know, someone's likeness. It should not be impersonated. So the scope is everything from kind of funny, kind
(23:16):
of silly, to yeah, that's really serious and not okay, and everything in between.
Speaker 2 (23:22):
Yeah, wow. Well, I just think about how, for instance, we don't have enough radiologists, so computer vision helps scan X-rays, right? Or, you know, find things in scans, or say, oh, you're actually clear, and then of course the doctor is there to
(23:43):
confirm it. So that's one use case, but on the other side, fake X-rays for insurance fraud. The number of things people dream up using this technology is unlimited, and unfathomable really.
Speaker 3 (23:58):
I agree, I agree. I don't think it's specific to AI. With all of technology, there's always a good and a bad side. From the start of the internet, there's a good and bad side: the sharing of information, but also the sharing of misinformation. When you get to streaming, you can stream really cool, funny, entertaining content, but also the worst things that you can possibly imagine.
(24:21):
You know, blockchain and the cryptocurrencies that are on it have created new digital assets and new ways to move those digital assets, but also created a new way to launder money and for fraudsters to deal with it. So AI is no different. It's a technology where someone in a digital media program will be like, wow, this would be so cool for the work that I do, and a fraudster will be like, wow, this would be so cool for me
(24:55):
with the things that I do. So it's really in the eye of the beholder and, you know, in how the user sees it. So AI, no different than any other technology, is going to be a good and a bad. I think the difference is just how far-ranging the scope is and how impactful it can be for both.
Speaker 2 (25:06):
Yeah, thank you, Toli. You've given us some great examples and a couple of anecdotes. I'd love to hear: what has been the most surprising use case for your company?
Speaker 3 (25:18):
Most surprising use case. The X-ray one kind of took me by shock, but I have one. Actually, within a couple of months of me starting the business, I think it was like three, four months in, I had a very active user and she started emailing. And I encourage all my users to email me, free, paid, whatever the case; I love speaking to my users and seeing what they
(25:39):
use it for. And we got on a phone call, and this woman had bought a piece of art for six figures, and it turned out to be AI.
Speaker 2 (25:48):
Oh.
Speaker 3 (25:49):
Yep. And she was working with law enforcement, and we detected it: yes, it is AI; we have the proof right here for you. It is indeed AI. However, with AI art itself, I'm not sure if it's a scam; AI art is still art, but it's definitely a misrepresentation.
(26:11):
But, you know, the legality of it is a little weird, and I really felt for her, because there was really no good outcome for her; law enforcement isn't going to do much. We did confirm what her suspicions were, but this one just showed me something early on in starting this business. Because I'm like, oh, I'm going to help companies, I'm going to
(26:32):
help governments, and it's like, no, I'm actually going to help people too, just normal people, whose own lives can actually be affected by these technologies in a very negative way. And that's one of our users. It was actually a really eye-opening experience, and you keep seeing, and it really pains me, use cases like that.
(26:52):
Recently, there was an AI-generated Brad Pitt who scammed a woman out of $850,000 and even had her divorce her husband, so her life is in absolute shambles because of an AI-generated Brad Pitt in voice and video. So that, to me, was the biggest surprise, or biggest shock, just months into starting the business. Pursuing, you know,
(27:14):
businesses, I'm like, here's how we can protect you against this new tech, and then, oh, people can actually be very affected by this and feel very hurt by this as well. And that actually, you know, made me very passionate to help that group as well.
Speaker 2 (27:29):
Yeah, wow. I've been reading a lot lately about different fake profiles of celebrities, big celebrities, and how that has turned into a scam. And dare I even say that I know somebody who thought they were speaking to somebody famous and was scammed, and said, well, this person knows about this and this and this.
(27:51):
How would they know if they're not, you know, really this person? It just shows you the lengths that people can go to. Maybe that person's phone was being tracked without their knowledge, maybe they had an assistant, you know; there could be any number of situations. And, I don't know if this is something you can answer or not, but I'm curious: when you were able to identify
(28:13):
that piece of artwork as AI artwork, what were the markers that you looked for? Were there things that you could identify? Was it pieces of code?
Speaker 3 (28:23):
Yeah, so none of the reviews that we do are actually manual, so it's nothing like that. I couldn't tell you; if I look at a piece, if I look at a picture, sometimes, honestly, I can't tell anymore, and I look at this stuff all day. What our computer vision models do, and we have dozens by now, is look at pixel-level patterns. They really zoom
(28:44):
in to a level that a human eye cannot. Just like you mentioned, your radiologist is using computer vision for the same thing; we're just looking for different things. He's looking for anomalies in the X-rays, or things that look off that could indicate something. We're looking for patterns that indicate AI versus patterns that indicate,
(29:05):
no, this is an iPhone picture, or this is, you know, just a standard Canon picture. So it's a very similar process of really, you know, chopping up the image, and I don't mean us, I mean the computer vision models, and really zooming in and identifying the pixel-level patterns of what constitutes an AI-generated image or a real one
(29:25):
, or whether there are anomalies, whether there are changes made that show, hey, there are some things being flagged here. And this all happens in seconds, usually less than a second. So we do this all in near real time.
But, to be honest, when I look at it, I can't tell anymore. The tools I mentioned for AI image generation, I see
(29:46):
those tools and I'm like, wow, I can't tell, I can't tell. I made this incredible image recently of the Rock as Darth Vader. I thought it was really cool; it was for the dark side of AI. And I was like, wow, if you told me this was the next casting, I would have honestly believed you. So you really need AI to detect AI nowadays, as it is so
(30:09):
realistic. I feel for your friend, I really do, because you're reading something that might be playing on your emotions, you're seeing something that looks real, now you're hearing something that sounds real. You almost need, like, safe words for anything outside of in-real-life interaction.
Speaker 2 (30:28):
Wow. And as we're moving to a more digital society, whether it's digital currency, whether it's... I don't think I've had to show an ID at an airport the last several times I've gone; it's all biometrics. Is that going to be the next level of concern when it comes to fraud detection and people stealing your identities?
Speaker 3 (30:49):
Yeah, I think this is the new synthetic ID: recreating someone's likeness, you know, generating fake IDs with that same person. You can either try to impersonate someone who's already in existence or create a brand-new person that's not in existence. I think this is the new level of synthetic ID.
(31:11):
You're right, we've gone very digital of late, especially, you know, post-pandemic; everything is no person required. I think there'll be more requirements for in-person steps. One of the scariest examples I have is a publicly traded cybersecurity company that was fooled into hiring a North Korean
(31:32):
operative, because it was an entirely remote hiring process. They created a fake ID, an AI-generated resume, like, all the things, and the company never met the person in person, right? They even got a company-issued laptop and immediately tried to inject malware onto the company's network. So, you know, if you require some in-person things at times
(31:55):
, it could solve for some of this, but it is quite difficult.
So you know, for your averageviewing experience, you might
want to just be with a, you know, watchful eye, question
everything.
If someone's speaking to you,you should really be like you
know, maybe ask a friend like,does this sound crazy?
And then, if there's actualmoney on the line, you really
(32:18):
want to take extra precautionsbefore doing anything.
Speaker 2 (32:22):
Yeah, yeah. I'm going to have to bring you back in six months to a year to learn more about, okay, what's the next thing? What's going on right now? Because it's changing every day.
Speaker 3 (32:32):
Absolutely.
Speaker 2 (32:33):
Truly. Now, on your
website, which is, of course,
aiornot.com, you have a freemium version, you have a base version and then, of course, you have your enterprise clients. Well, I love this, first of all, because it means this is a tool I can share with students and other people I know who are interested in AI, and they can test it out and maybe take it back to
(32:53):
their companies. But how have you found, because you said you have, you know, a few hundred thousand users, have you seen a lot of transition from that freemium model to, let's do, the base model? And also, how the heck do you have such a low base model price?
Speaker 3 (33:11):
Yeah, I appreciate it. All good questions.
So, to be honest, I would love to give even more away for free for all consumers, because enterprises, when they use this, they really use this. So that's actually my goal, to continue to provide value for everyone. Whether it's your friend asking, hey, is this person real? Or someone going on a date and wondering, is this person's photo
(33:32):
real, you know? Do they even exist? Or is this a catfishing attempt? Or someone trying to buy a piece of art. While for businesses, there's actually a lot of money on the line, and there's no reason why I feel I should be helping one and not the other. So that's really the premise of it. The free one I created just, you know, for anyone, whether
(33:54):
you're a student, and I used to be a student too, I wasn't exactly signing up for SaaS products either, I completely get it, to someone a little bit more serious. And we found that, you know, about a hundred checks per month is what an individual user needs, while a company can end up doing tens of thousands to millions,
(34:16):
always testing with it. But the premise is still the same. Early on, just a few months into this business, I realized what an impact I can have on both people and businesses, and I was like, I'm never going to ignore either one. The majority of my business is B2B, but I'm still not going to ignore the consumer aspect. Long term, for business building and brand building, I
(34:39):
think it'll play out well for me.
Speaker 2 (34:42):
I have no doubt.
I have no doubt. And it also comes through that you are so mission-driven and that people can really trust you. I completely trust you, and I just met you, based on this conversation and the good that I see you putting into this world. There's not a world without AI anymore, especially generative AI. You know, eventually, a lot of AI experts
(35:05):
tell me, we're going to have agentic AI, we'll have our own agents helping do everything for us, and who knows what that's going to cause if agents are talking to agents. There are so many levels of complexity that I can see on the horizon, and we're going to need products like yours more than ever.
Speaker 3 (35:21):
That's why I started
this company in 2023, which sounds like ages ago, you know, before agentic AI and before photorealism. I was like, oh, it's coming. I don't know in what shape, form or timeline, but I know it's coming. And exactly that. Maybe because I've seen so many bad actors throughout my career, from the things that I
(35:43):
used to do, whether it was client side or vendor side or whatever the case, I was like, this is just going to be the next tool, and I think this might be the scariest one of all, because you can mimic and impersonate and do all of those things. So whether you're, you know, just an average user on social media seeing the next crazy news story and thinking, hey, maybe that's not the case, and we've actually had news stories
(36:05):
taken down because we told them, you reported on AI-generated content, and they actually did delete it. Because reporters are now quick to report, you know, on a picture, let alone an individual coming to conclusions too quickly. To a business saying, I really cannot let in a synthetic identity, I really cannot let in
(36:27):
an AI-generated person, because of the harm they're going to do to my business or my platform. So whatever the use case is, the underlying theme is there definitely needs to be a new layer of trust, and I hope to be that with AI or Not.
Speaker 2 (36:41):
Yeah, fantastic. Gosh, we've covered so many topics. I really appreciate you going on this path with me and being willing to speak about so many different things when it comes to the world of AI: which tools you like best, what privacy concerns we need to be thinking of, and how we can use your tool. Is there one last message you'd like to leave the audience with
(37:01):
today?
Speaker 3 (37:03):
Yeah, I do. One, for
any students at USC or any
former students at USC: take advantage of everything that this amazing program has to offer. I'm wearing my USC shirt now; I put it on right before this. It was a really eye-opening experience for me. Take advantage of being a student.
(37:23):
Take advantage of meeting the people that USC lets you meet. Pick the professors that you think have the most ability to open your eyes to new experiences, like I'm sure that you're doing with all these new tools within AI.
I think that's really important.
As far as everyone else not affiliated with USC, I think AI is actually a really amazing tool, and it's good.
(37:45):
I know I cover a lot of the bad, and that's kind of what I work on, but I think it is an amazing, amazing tool. I think we're in this incredible period where we're all beginners, so you might as well just get on board. The worst thing you can do is put your head in the sand and ignore it: oh, this isn't going to affect me. Because it is. I don't have to know what you do to know there's going to be
(38:06):
some kind of effect, and I think we're all deciders of whether that effect is going to be a positive or a negative one. Because if you're the one who figures out how to use these things in a positive way for what you do, and I mean that generally, whether it's personal or work-related or creativity or whatever the case, you will be ahead of many, many,
(38:26):
many people, and companies actually, and that's the opportunity we have ahead of us. So I do think, what I work on in my business aside, we're in this incredible period of innovation, and we all have a really great opportunity to be leaders and thought leaders and innovators as well, using these tools with the things that we know really closely.
(38:46):
So that's what I'd love to leave your listeners with.
Speaker 2 (38:49):
Fantastic! Well, Toli,
thank you so much for coming on
the podcast. It's been a pleasure. I always love speaking to AI experts so that I can learn more, try out new tools, and hopefully impart this wisdom on other business owners and professors and students.
Speaker 3 (39:07):
Thank you, Annika,
for having me. Fight on.
Speaker 2 (39:09):
Yes, fight on.
And to everybody who's watching this episode or listening to it, thank you. Leave us a rating and review, and I will be back again next week with another amazing guest.
Speaker 1 (39:19):
To learn more about
the Master of Science in Digital
Media Management program, visit us on the web at dmm.usc.edu.