
January 11, 2025 37 mins

Ever wondered how artificial intelligence could shape the future of security? Join us for an insightful episode with Aman Ibrahim, the innovative co-founder of DeepTrust, as we unravel the pressing challenges and incredible potential AI brings to the world of cybersecurity. Aman, inspired by his Eritrean roots and his family's dedication to community and technology, shares his mission to safeguard human authenticity against modern threats like voice phishing, deepfakes, and social engineering. With a rich background in machine learning and experience in both healthcare and tech startups, Aman's journey is both inspiring and informative.

Our conversation ventures deep into the risks of misinformation and the devastating impact of AI-generated scams. As we explore the ease with which voices and likenesses can be manipulated, we highlight the real-world consequences that have already cost Americans billions. Aman's personal stories reveal the urgency of these issues, underscoring how even the most cautious individuals can fall prey to these high-tech scams. The discussion extends to the misuse of deepfake technology in marketing, prompting a reflection on the challenges consumers face in distinguishing real endorsements from fabricated ones.

We also tackle the critical issue of cybersecurity in corporate communication, with platforms like Zoom becoming hotbeds for sophisticated cybercriminal activities. Aman illustrates how AI-driven attacks are evolving, emphasizing the need for proactive defense measures. Discover the innovative concept of co-pilot agents, designed to empower employees in navigating risky situations by providing real-time guidance and ensuring sensitive information is handled with care. Don't miss this eye-opening conversation with Aman Ibrahim, as we navigate the intricate landscape of AI security together.

ABOUT AMAN

Aman Ibrahim is a co-founder at DeepTrust, where he focuses on leveraging AI to help security teams defend against voice phishing, deepfakes, and social engineering attacks across voice and video communication channels. Prior to DeepTrust, Aman worked as an ML engineer at Cruise, where he scaled model and data distribution systems, and he also built models in healthcare research labs to address challenges in nutrition, palliative care, and disease detection. His work at DeepTrust is driven by the mission to protect human authenticity in an age of rapidly advancing AI technology.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Hey, what is up?
Welcome to this episode of the Wantrepreneur to Entrepreneur podcast.
As always, I'm your host, Brian LoFermento.
I'll tell you what: new year, new problems.
We are entering an entirely new world that is going to and is presenting so many incredible possibilities.
But with those possibilities come some threats, some dangers,

(00:20):
some considerations, some things we need to think about, and that's why today we've gone out and found an incredible entrepreneur who's bringing in a very important solution.
That's what I'm going to kick things off by saying: a very important solution to the planet that truly is addressing societal problems.
This is someone who is actively part of the solution to things that we're all going to face this year and beyond.

(00:43):
So let me tell you about today's guest.
His name is Aman Ibrahim.
Aman is a co-founder at DeepTrust, where he focuses on leveraging AI to help security teams defend against voice phishing, deepfakes and social engineering attacks across voice and video communication channels.
If you're thinking to yourself, I'm immune to that: you're not.
I'm not, Aman's not, nobody is, and Aman and I are going to get

(01:06):
really serious about talking about that in today's episode.
Prior to DeepTrust, Aman worked as an ML engineer at Cruise, where he scaled model and data distribution systems, and he also built models in healthcare research labs to address challenges in nutrition, palliative care and disease detection.

(01:27):
His work at DeepTrust is driven by the mission to protect human authenticity in an age of rapidly advancing AI technology.
It's big stuff we're talking about today.
I'm personally so excited to hear all of his thoughts.
We were talking off air and I said let's stop, let's hit record.
So let's dive straight into my interview with Aman Ibrahim.
All right, Aman, it's so hard for me to not just jump straight

(01:54):
into it with you, but first things first: welcome to the show.

Speaker 2 (01:55):
Thank you so much.
I've never been welcomed so warmly.
For a second I thought I was an audience member, and I was, like, excited to see who's coming on.
I was like, oh wait, it's me. I've never been introduced like that.
I appreciate that a lot.

Speaker 1 (02:06):
I love that.
No, honestly, I mean, you know, I said that to you off the air.
As a podcaster, this is important stuff for us, but then you and I obviously think about the world at large, far outside of just our industries, and it's big stuff we're talking about today.
So before we get there, Aman, I'm gonna put you on the spot, and then we're jumping straight into the fun stuff.
Who the heck is Aman?
How did you even start doing all these things?
Take us beyond the bio.

Speaker 2 (02:28):
Yeah, no, that's a great question.
Of course, as you said, my name is Aman Ibrahim. My background is in machine learning engineering, and what got me into this problem space and just solving problems like this is I've always had this desire to solve very difficult problems that have an opportunity to help people, you know, over the long

(02:51):
term.
And where that came from, and I like to say this, comes from a very personal side of myself: my father and my mom came to this country from a country in East Africa called Eritrea, and my father was fortunate enough to come here to study computer science.
He worked at, like, IBM and Cisco, so as I was growing up I was exposed to that.
My mother, she's always been very community driven, so for me

(03:14):
it was just by its nature that I had that love to solve technical problems while also finding ways to serve people.
It even began when I was, like, in kindergarten.
My dad would contribute to the refugee community by building computers for them, and when I was in kindergarten, six, seven, eight years old, I also built computers alongside him to

(03:35):
help people that way.
And, as you mentioned before, I spent some time in, like, health tech research, but I even built in healthcare startups, started two other startups of my own, and have over 40 different side projects that I built that were, like, serving students, teachers, athletes, doctors, patients. And even when I joined Cruise, the

(04:02):
very motivation to be a machine learning engineer there was: yes, solving the cutting edge of technology in that space was amazing.
But then, what are the returns that you get out of that when you solve that problem?
You're not only talking about the opportunity to make transportation accessible to people, albeit the news that came out of Cruise recently, but the ambition was making transportation as accessible to people as possible.
Removing that concept of traffic is possible with

(04:24):
self-driving cars. But then, most importantly, is the fact that, right after any disease, it's always car accidents that take the most lives, and removing that.
For me, if I could contribute to that just even a little bit, that was incredibly motivating.
So when I finally took myself into the industry, instead of

(04:45):
working at some FAANG company, obviously, like the Facebooks, Googles and things like that, I was very much more driven to come to an opportunity like that.
So naturally, when the opportunity and problem space of DeepTrust arose, which we'll get into, it was just very natural to who I am and why I'd solve that problem, you know, helping people trust their own eyes and ears again.

(05:06):
So yeah, that's basically it.
That's how I came into the space.

Speaker 1 (05:10):
Yeah, Aman, I love that overview.
I appreciate so many parts of what you just shared with us, and now it's clear to me why we were able to click so quickly today.
We're both from immigrant families.
As the son of an immigrant mom, I feel like it's factored into the way that we see the world.
We see endless possibilities.
Our families came to this country in the pursuit of freedom, in the pursuit of possibilities, in the pursuit of

(05:31):
you and I having more doors open to us than what they had themselves, and I think that that is so deeply integrated into the way that we see the world, which makes perfect sense to me that you're part of very big solutions, Aman.
So let's talk about that solution that you wanna be a part of here and that you're actively working on, because this is big stuff, and I said it at the top of this

(05:51):
episode.
A lot of people will probably think, ah, this is either so far into the future that I don't have to worry about it, or they're probably thinking, I'm immune to it. I'm an agency owner, what the heck is all this stuff going to do for me? Or, I'm a web developer.
Aman, paint the picture of why the heck this is a cause that you care so much about.

Speaker 2 (06:10):
Yeah, absolutely.
So the problem space that we're focusing on today is the fact that, with only three seconds of audio and a single profile picture, you can steal someone's likeness very easily, and if you've been on social media enough, there's been enough content where we saw that generative content used for entertainment, funny things as well.

(06:30):
But for every new technology, there's also the misuse of that very technology.
So, from our perspective, AI isn't inherently bad. It's just that there are people who have certain intentions.
So with the very little that you need, in terms of both skill set and sample content, you can easily mimic anyone.
And we're talking about, this is not, again, like you've said, and

(06:53):
that was a great point, I thank you for bringing it up, this is not an emerging threat.
It's already here.
There is already a report put out by the FTC where regular, everyday Americans have already lost over $3.3 billion to imitation scams.
Those are scams where either someone is imitating someone you recognize personally, or imitating a celebrity telling

(07:17):
you, oh yeah, there's this government check.
And these are the very things that me and my co-founder experienced ourselves.
Part of why my co-founder is very motivated to solve this problem is that one day his grandfather was sitting in the room and got a phone call saying, hey, it's me, Noah. Noah's his name. They stole his voice, his very voice, and asked him to send money and these sorts of things.

(07:37):
And then another time, for myself, my mom was sitting in a room watching what she thought was an advertisement from Oprah, someone she trusts, telling her, hey, there's a government check that you need to sign up for. Just put in your social security number here and you'll get it.
So this is people taking this new technology, where you can again mimic voices and likenesses however you please,

(08:00):
with very little skill set, and using it for very malicious intent, and this goes from misinformation to imitation scams and phishing.
So just, in general, another example use case: if I can steal your voice in three seconds, imagine you're in a high-profile, high-stakes situation, a court case, where someone submits evidence of a recording of you saying this and that.

(08:23):
So there's a huge general problem of how to even authenticate what I'm seeing and hearing.
My own biology is fooling me. So yeah.

Speaker 1 (08:34):
Yeah, Aman, these are big, important things, and it's funny, we could talk about it in the business context.
Dude, you and I actually didn't talk about this yet, but I see it every time I log on to Instagram now. I see some marketers, who obviously are not the most ethical marketers in the world, using Joe Rogan clips.
Because, Joe Rogan, we could find hours, hundreds, if not thousands of hours of his voice out there, and they're having him

(08:55):
endorse products that they're advertising.
And so us, as the consumers, we can't tell.
And Joe Rogan has a large and loyal audience, and they're probably sitting there thinking, oh, he endorses this supplement, or this, whatever it is.
People are mimicking that, and you made a very important point to me before we hit record today, which is that we've always relied on,

(09:16):
well, I can tell a scam.
Okay, if I get an email from a Nigerian prince who's promising $50 million, I know that that's fake.
But if I can see something with my own eyes, if I can see it, then I can believe it.
And so, seeing Joe Rogan, it's a video of him, it's his voice, it's him talking about this, but it's not real.
Talk about how this has become the first time in history that what we see is not what we might be able to believe anymore.

Speaker 2 (09:42):
Yeah, exactly. This comes down to the fact that trust is a very sensitive thing.
When you have a Nigerian prince reaching out to you, you've never met this person, there's no context between you and them engaging with one another, so there's a huge barrier for, say, that Nigerian prince to get what he wants out of you.
But when you bring familiarity into something, whether it's, you

(10:04):
know, your grandson or your mom or your dad or your sibling, or, say, a trusted figure, someone whose advice you've taken before, whether it might be a Joe Rogan or your doctor or whatever it may be, you quickly lower your guard, and now anything can happen.
And the sort of perspective that we put into place is the

(10:26):
question.
This is what I'll state: there's this concept called the liar's paradox, and, for your viewers, take a moment to Google the liar's paradox.
It's a concept in philosophy where they have a straw-man figure, and I'm paraphrasing here, but basically, anytime this person speaks, it's always a lie.
It's just this made-up figure, whatever his history.

(10:47):
And there's a moment where this liar says, I am a liar, and that becomes a paradox in the sense that, yes, they are a liar, but in that moment they're telling the truth.
So what we actually perceive the problem space to be is that generative AI actually causes a liar's paradox in and of itself, where, if you see enough content, right, whether you're seeing or hearing something that looks and sounds real but then ends up

(11:10):
being generated or fake, the question no longer becomes what is fake, but what is actually even real at that point.
So that is essentially the problem that we're really looking at from this misuse of this technology.

Speaker 1 (11:24):
So, yeah, I'm going to ask you a big, broad question, then, and it's inevitably going to lead into us talking about DeepTrust and the work that you're doing.
But how do we, in today's world, here we are in 2025, how the heck do we even begin to tell what's real and what's fake?

Speaker 2 (11:41):
Yeah, I mean, listen, at this stage, of course, it's very much a lot of theories, and nothing is truly proven until you actually, literally, execute upon it.
But for us, our perspective is this: you should bring in concepts that are already recognizable, you know, and let me take it to very basic forms of trust.

(12:01):
You know, I told you what lowers your guard: when you find something that's familiar and you trust the source of it, and that's the concept that we're trying to bring into what we do today.
In the long term, how do you build essentially the trust layer for the internet?
And I want to bring the ability for us to be able to put provenance into content: where is something coming from?

(12:23):
And if you can establish a standard that does that, like we already have done for, you know, end-to-end encryption and things like this, this is how you can eventually trust where things come from, and this is not a new concept whatsoever.
When you, for example, receive an academic paper and someone just hands it to you, you don't just say, oh yeah, I trust it. No, you have references, and then those

(12:45):
references have references, and then you eventually have essentially a chain of where that knowledge came from, and this is where that sort of knowledge and content has that demand for that sort of authenticity.
Now you need to have some sort of way to validate where something is coming from, and there's finally a demand to build that sort of standard.
So, at any point content is created, generated or modified,

(13:08):
our perspective is, in the long term, we need to have some sort of standard set in place where you put a signature into the content that doesn't disturb the human experience of it, but then that very cryptographic, immutable key that's signed into the content, that you can't edit, you can't remove, you can't manipulate, can not only tell you, hey, this came from this source, but it came from this chain of sources, and then it's

(13:31):
up to you to decide whether you trust that chain of sources.
A very easy example that I would love to see happen in the future is, let's say, a video clip is produced and ends up on your Twitter feed.
You can now look at it and you can say, okay, someone clipped this on their iPhone.
Or, actually, it starts from, say, a Sony movie that was produced, you know, and that was signed by Sony.
Someone, you know, edited it in Photoshop, so then it got signed

(13:54):
again individually, and then someone trimmed it on their iPhone, signed again, and then it finally ends up on your feed.
So now you have a true source, or a true chain, of where things came from, and that's how you truly trust where things come from.
Now, until we achieve that standard, that's obviously a big, ambitious goal, to again build what I'm saying is the trust layer for the internet.
There are steps along the way, and we'll get into those details,

(14:17):
and it's part of what we're building today as part of our product.
We're not jumping straight into, obviously, the dream goal here, but in order to get to the dream goal, there is a journey, both a technical one and one that's very focused on solving people's real, current problems.
So, yeah, that's how we truly see, like, the golden rule or the golden path to solving this problem altogether.
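The chain-of-signatures flow Aman describes is similar in spirit to content-provenance standards like C2PA. A rough toy sketch of the idea is below; the actor names and keys are purely illustrative, and HMAC with shared secret keys stands in for the real public-key signatures a production standard would use:

```python
import hashlib
import hmac
import json

def sign_step(content: bytes, actor: str, parent_sig, key: bytes) -> dict:
    # Bind the content hash to the previous link's signature, so earlier
    # steps in the chain can't be silently swapped out or reordered.
    content_hash = hashlib.sha256(content).hexdigest()
    record = json.dumps(
        {"actor": actor, "content": content_hash, "parent": parent_sig},
        sort_keys=True,
    )
    sig = hmac.new(key, record.encode(), hashlib.sha256).hexdigest()
    return {"actor": actor, "content": content_hash,
            "parent": parent_sig, "sig": sig}

def verify_chain(chain: list, keys: dict) -> bool:
    # Recompute every signature; tampering with any link invalidates
    # that link and every link after it.
    parent = None
    for link in chain:
        record = json.dumps(
            {"actor": link["actor"], "content": link["content"],
             "parent": parent},
            sort_keys=True,
        )
        expected = hmac.new(keys[link["actor"]], record.encode(),
                            hashlib.sha256).hexdigest()
        if link["sig"] != expected or link["parent"] != parent:
            return False
        parent = link["sig"]
    return True

# Illustrative actors from the example: studio -> editor -> phone.
keys = {"Sony": b"studio-key", "Photoshop": b"editor-key", "iPhone": b"phone-key"}

link1 = sign_step(b"original movie", "Sony", None, keys["Sony"])
link2 = sign_step(b"edited clip", "Photoshop", link1["sig"], keys["Photoshop"])
link3 = sign_step(b"trimmed clip", "iPhone", link2["sig"], keys["iPhone"])
chain = [link1, link2, link3]

print(verify_chain(chain, keys))   # True: intact chain of sources
chain[0]["content"] = "tampered"
print(verify_chain(chain, keys))   # False: editing any link breaks the chain
```

The key design point, matching the conversation: each signature covers both the content and the previous signature, so a viewer can walk the whole chain back to the original source and decide whether they trust it.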

Speaker 1 (14:38):
Yeah, I love how big you're thinking about this, Aman, because, truth be told, it's people like you, it's innovators, it's pioneers, who have that grand vision.
The only way we're going to get there is to first dream big and then bring it back to the actual actionable building blocks, which is why I love the work that you guys are doing at DeepTrust, because I just think about all of my businesses

(14:59):
and businesses, large and small, across the entire United States, across the world.
We're digital, more digital than ever before since the pandemic.
So many more people are working remotely.
We live on Zoom, we live on Slack, we live on Google Meet, we live on Microsoft Teams.
With that in mind, where have you identified, what is that gap, the immediate and actionable gap, that DeepTrust

(15:19):
is plugging?

Speaker 2 (15:21):
Yeah, so you've already gotten the hint there.
We've become a very much digitally driven society where we don't have to do things, for example, in person.
Like, believe it or not, those who are listening, I'm not in the same room as Brian.
I'm probably a couple thousand miles away, you know, and that's the very reality that we live in.

(15:41):
We are post-COVID, where we've, you know, gotten established in this distributed and remote workforce, and then, on top of that, we're now post-gen-AI, where the very likeness of people can be manipulated and you basically have, like, a man-in-the-middle attack.
When you're trying to communicate with someone, you think you're speaking to Brian or you're speaking to Aman, but it's someone in the middle that's impersonating that very

(16:01):
individual.
And the problem we're focusing on today, just to get to what we're doing today: we're helping security teams at enterprises, especially those in the regulated and sensitive-data space.
We're helping those security teams protect their employees and organizations against social engineering, whether or not deepfakes are involved, on the voice and video communication

(16:23):
channel.
So that's what we're doing today, and this came from, like, a year of just sitting down, talking to people, truly understanding: where are those problems today?
We saw so many opportunities. Like, we defined 30 different customer profiles, some of which included intellectual property.
So, in the IP space, people who make content, and their entire

(16:46):
revenue stream, or their likeness, is their business, their bread and butter: how do you protect their likeness in the wild?
So, someone like yourself, and obviously we can talk about the Drakes and the Weeknds, as we remember from a year and a half ago.
And then, again, I briefly mentioned it, there's the digital forensics space.
How do you, when you're in the court of law?

(17:07):
The stakes are incredibly high.
How do you actually understand that the evidence that you're putting in front of such an important decision is bona fide, it's genuine, or it's manipulated? And the list goes on and on, such as, again, misinformation, trust and safety.
It's again the very core of how we operate as a society.

(17:27):
We again, biologically, are very dependent on our eyes and ears, but now, as a society, we're very digitally dependent as well.
So there's so much of the foundations of our day-to-day life that's going to be shaken by this very thing.
And today, again, we're focusing on where the pull and demand is the highest, which is again in these enterprise security spaces.
So yeah.

Speaker 1 (17:43):
Yeah, Aman, I love the way you tackle that.
I think about one of my favorite entrepreneurs in the world, Dante Jackson.
He's a cybersecurity expert based out of Georgia, and when Dante and I talk, I love how much he thinks like a hacker.
He thinks like the people who are looking to do wrong, and so, having this conversation with you today, I'm picturing the gap

(18:05):
that your business plugs, and I'm thinking, well, who the heck would want to join a corporate Zoom call, and what damage could they do there?
And then I immediately jumped straight to, man, if I could impersonate a company CFO and I could penetrate a finance department meeting and authorize them to write the Wantrepreneur to Entrepreneur podcast a massive check. Well, that's coming from the CFO, so of course people

(18:27):
are going to listen to that.
Paint that picture for us, because I would imagine that you've thought about this at a way deeper level than I did.
I'm just going to write a check to this podcast.
But what are those real-life threats and concerns that probably even enterprise-level employees aren't thinking about when they enter a Zoom meeting?

Speaker 2 (18:46):
Yeah, I don't even have to paint a picture.
Those malicious Mozarts are already out there.
There have already been multiple cases, especially in the financial services space, of that particular situation, where again you have all types of businesses dependent on this type of communication channel, from a tiny startup to, literally, the Department of Defense, where again you can impersonate anyone, and obviously your first thought was

(19:08):
like, hey, a CFO.
But we've seen incidents where it was an equal-level colleague reaching out to their IT help desk, and the reality was the colleague was around the corner, and through that they were able to get credentials for the business and thus steal customer data.
And then we've even seen huge events, such as the incident that happened in Hong Kong, where an accountant was actually

(19:32):
following their very training.
They received an email from the CFO saying, hey, we have this really important deal happening. You need to act quick.
And naturally the person was a skeptic.
And again, this person's not some sort of idiot. This is a trained accountant who works for a multinational firm all the way in Hong Kong.
And they're like, I'm going to follow my training, jump on a

(19:55):
Zoom call first to gauge the situation.
They jumped on the call.
They not only saw the CFO, but they recognized and saw people from their immediate group in Hong Kong, and because of that they're like, oh, clearly, why would this be, you know, made up or disingenuous? So they wired the money away, and that was a $25 million scam.

(20:16):
So this is not even an oh-what-could-happen sort of situation.
There have been multiple repeated attacks that we've already seen, both in the public domain and in private, where we've gotten to learn about those incidents by talking to chief security officers directly, CISOs, and this is in just about every single cybersecurity report that we see.

(20:37):
They clearly not only see this as the fastest-growing attack vector, but there is very much no reason for a malicious actor not to utilize this technology over the next year. And I almost want to say, in some ways, I say this sometimes: this is a perfect time to be a villain, in the sense that it's not even the fact that you can create individual, incredibly convincing attacks, but that you can now automate them.

(20:59):
You can have, say, a language model sitting behind it, responding to a person in real time, and you can just send that call to a thousand people. And you're talking about, email phishing campaigns have, like, a 5% success rate.
We've not only seen internal success rates above 50% to 70%,

(21:19):
but we've already seen reports of people not even using exact voice clones.
They're using, like, a general voice and a ChatGPT-style language model behind it, and they were able to do bank transfer scams at, like, a 50% success rate, and it only cost them about five to ten cents.
So, I mean, if you're not a good person, why wouldn't you do

(21:40):
this?
And that's actually something we briefly mentioned in your question.
There's that cybersecurity expert; you say he thinks like a malicious actor, and that's even what we do at DeepTrust.
We've built this AI bot called Terrify, Terrify.ai, and what it is, is an AI conversational bot.
It speaks to you, it's very friendly, it's trying to be your friend.
It's like, hey, how are you?

(22:00):
What's your name?
And then, within 15 seconds, it not only memorizes things about you, but starts mimicking your style of speech.
So, I don't know, if you're from the Valley or from the South, you've got a particular twang to your voice, it'll start repeating that back to you. But then, within 15 seconds, it not only does that, it has your entire voice, and it's speaking back to you in your own voice.
So we showcase that just to help people understand through

(22:22):
the experience, because it's one thing for me to be on a podcast or, like, an interview or whatever, to tell you, oh my God, the danger is here, it does this or that.
It's another thing to literally experience it yourself within 10 to 15 seconds.
So this is what we're talking about in terms of the threat landscape.
It's not an oh, maybe it's coming. It's already here, and growing.

Speaker 1 (22:43):
Whoa.
Come on.
First things first, I want to say I'm grateful that you are one of the good guys who's pioneering these innovations, because you can so effortlessly show us, and obviously this is just the tip of the iceberg of your expertise here in a short podcast interview, and so I can only imagine the depths to which you are well versed in all of these things.

(23:04):
And when I say all of these things, I want to call it out for listeners: we're not talking hypotheticals.
In today's episode, Aman is coming with real-life business examples, real-life case studies, societal ones; this transcends business.
And so we are here, if you tuned into this episode thinking, oh, this is going to be a fun year ahead in AI: we're already at a

(23:25):
point where these things are happening.
Yeah, so, Aman, I want to ask you this, because when you talk about DeepTrust, and even more than just you talking about DeepTrust, it's so clear to me that you have that long-term vision as well as the actionable short-term solution, which I love as part of your messaging.
It seems to me like you guys view DeepTrust as an agent, as

(23:45):
someone who's on your side.
I mean, when I saw your messaging, a security co-pilot for employees, it is someone, or, in this case, something, that is of service to the employees, to the organization that it's deployed in.
Talk to us about that form factor and that delivery mechanism that you've built at DeepTrust to make it a reality for enterprises.

Speaker 2 (24:06):
Yeah, absolutely.
The reason we went with a co-pilot is, listen, every
company that has a security team or an IT department, they enforce new processes and training upon you, and the reality is, if you're just, you know, say, another engineer or salesperson or marketer, you just do them for the sake of doing them, if you even do them.
And the reality is, the true risk for a business is the human

(24:29):
risk, in the sense that, hey, listen, I'm a human being.
I can't memorize and remember every single piece and aspect of this particular guideline or this policy or that one, and when I engage in a risky situation with the business, I'm not going to naturally be as well equipped.
So let people stay good at what they're good at and empower them, and let them not have to be as worried about the security

(24:52):
risks, and have a co-pilot that's there to assist you.
So, like, our very product, it comes in the form factor today as, like, an agent that joins your calls, and eventually we can move off of being, like, a physical presence in the call, but regardless, it's there to analyze what's being said and who's saying it in the context of the business, and then, if you need any assistance, it'll give you just-in-time

(25:14):
training.
So, for example, it could be, obviously, like, a malicious attacker where we have doubts about their identity.
They might be pushing or adding urgency to their requests.
We can then become a source basically saying, hey, slow down a little bit.
We have doubt about their identity.
According to your training, just ask these questions before you move forward.
And what that also does is it empowers the person, because, for

(25:36):
example, if you're that accountant in Hong Kong and you have that CFO talking to you, that power dynamic is very hard to refuse.
But imagine the enablement that has when you're like, hey, the agent is requiring me to ask you these questions.
It's not me, right?
And then, even when you're talking about something that's more mundane and might even be naive, where you might have two

(25:57):
employees who are not malicious at all, but they may be mishandling sensitive information. You can just have the agent in there say, hey, just remember, by the way, this and that, you're meant to do this and this.
For example, like, Aman is an engineer, and I'm exaggerating here, he's coming in asking for a wire.
It could be like, hey, slow down.
He doesn't have the access to take this action. Loop in this person

(26:20):
or that person to actually engage in this opportunity.
And what this does, again, is remove the responsibility, the heavy burden of, hey, you're the last line of security for our entire business, make sure you do the right thing. And you actually have something that's basically the most educated security employee in the call, every time, with you.
So, again, allowing people to just stay good at what they're

(26:42):
good at, and not having them necessarily put all that pressure on themselves.
And I think one other thing to throw in there, that I meant to throw in there, was, when it comes to breaches, 90% of them, you know.
So this is, again, meant to empower the people to not necessarily have to, you know, become the sole reason why, you

(27:12):
know, your company falls apart. Oh yeah, that's basically the value that we look to provide.

Speaker 1 (27:17):
Yeah, I love that, Aman. I love the way you think about it, I love the way that you guys have built out DeepTrust, and I love the way you articulate it. It's fascinating to me, because here we are talking about some high-level tech stuff, and obviously you've been a large language model engineer and you've worked in so many different capacities within this field. But what's fascinating to me, and it's just a huge kudos to you and the entire team behind the scenes, is that you're also

(27:39):
thinking about a company's internal controls, the fact that you're thinking about the hierarchy and the question of how am I going to tell the CFO no: let that onus fall on the technology that's supporting us. So I think the work that you're doing is absolutely brilliant. I want to put you on the spot here, because you are so well versed in all of these things, and you totally weren't prepped for this. So it's a bit of an unfair question, but I know that

(28:00):
listeners will get a lot of benefit from this, and that is: what's some of your advice for entrepreneurs, or even, to extrapolate societally, just for individuals these days? I'll share with you one of the more brilliant innovations I've seen: a fellow entrepreneur who's a very public figure has a page on his website that says these are my only verified accounts, and he lists them right on his website: unless you get

(28:23):
an email from this email address, it's not me. If you see someone on TikTok or Instagram or Facebook, unless it's these three profiles, it is not me. So I think that's a really low-tech solution there. What are some of your best pieces of advice and strategies that we can use to protect ourselves, whether low-tech or high-tech?

Speaker 2 (28:42):
Yeah, yeah, yeah. The first layer of defense... and maybe what I should actually say first: the people who are the most exposed are the people who are least informed. They're not even aware this problem exists. They're the ones who are the most exposed. So, always, always, always, step one. Obviously, I wish I could say, hey, just buy a

(29:03):
DeepTrust agent.
No, step one is actually get yourself informed. Understand the stakes that we're talking about. Be aware of the fact: hey, listen, as an average American, I'm likely to have already uploaded enough of my likeness onto the internet, so it's very possible someone can steal my likeness, and I should be aware of that. Now, two, what can you do?

(29:24):
Obviously, the next step is, you know, make sure the people around you are also informed, especially those who are close to you, your loved ones. And, you know, they're not perfect solutions, but make sure you and your family have, you know, private passcodes or secret phrases for situations where it seems like someone's putting on pressure and urgency and it sounds like someone you recognize.

(29:46):
Then you can at least have some sort of true proof point to be like: okay, listen, this is how I know who you are or not. And in fact, there's a couple of stories that have actually happened like this, where the CEO of Ferrari thought he was talking to someone else, or, excuse me, another executive thought he was talking to the CEO of Ferrari, and the only reason why the

(30:06):
person didn't fall for their asks was, right towards the end of the call, the guy remembered: oh, I recommended a book to the Ferrari CEO. And he proceeded to just ask him that personal question. So part of the safety that we have today is making sure your personal relationships are aware. Don't be taken advantage of. Maybe have these one-on-one

(30:28):
passphrases or things like that, so they can help you in the moment. And then, uh, and then that's, I think...

Speaker 1 (30:34):
I think that's the main thing that you can really do today. Yeah, really well said. I think it's important advice, Aman. Truth be told, as a podcaster, I tell that to everybody that I know personally, and I tell it to my fellow podcasters.
You've actually increased myurgency about this, because when
you talk about even justuploading pictures and you talk

(30:55):
about three seconds of audio, probably every single person I know is already exposed on their own Facebook. So, Aman, huge kudos to you. I want to reiterate again: I'm so glad that you're one of the good ones and that you are a positive force in this industry. I want to ask you this question. It's super broad. I ask it at the end of every interview, and it's entrepreneur

(31:15):
to entrepreneur. But, of course, you also have your hat on with regards to the work that you do. And that is: what's your one best piece of advice? You're not only a subject matter expert doing the very important work that you're doing with DeepTrust, but you're also a fellow entrepreneur growing a business, serving your clients, being part of a solution. What's that one thing that you've picked up along the way that you want to share with our audience, at all different stages

(31:36):
of their business growth journey?

Speaker 2 (31:38):
Yeah, first I want to preface by saying, like, I'm not an expert; I'm always still learning. But for me personally, what I've found to be the driving force, or the driving thing that I keep in mind day to day, is: make sure you're solving actual people's problems. And what I mean by that is, especially us engineering founders, we love to just build cool things, and I've put myself in that

(32:01):
situation enough times to know the pain. So I'm not coming with this advice in the sense of, like, yeah, I'm holier than thou. No, I've gone through what it looks like to, you know, build something cool but then not have anyone who actually wants it. And the way you figure out what people want is you sit down and you talk to them. There isn't really a better way to figure out

(32:22):
what problems people need to have solved. And there are good problems to solve and there are, like, bad problems to solve. What I mean by that is, for example, people could have problems in their life that weren't necessarily something that they would, I guess, pay to make go away. Like, bluntly speaking, you know, my hands are a little cold. You know, it'd be really cool if something could automatically

(32:43):
put gloves on my hands, right? But I'm not going to pay for a robot that, you know, does that for me the moment I think about it. You know, at least not right now. So that's a problem that I have, but it isn't one that I'm motivated to actually get solved. So talk to people and really figure out what your day-to-day pains are, the ones that you would love to go away,

(33:04):
and I think a really good book to reference is The Mom Test. This is something I love to always recommend to very technical founders and entrepreneurs: at least take a look at that book and understand, like, who am I serving? You know, what problems do they have, and, you know, how do I make their day-to-day life a little bit better, step by step?

Speaker 1 (33:24):
Yes, gosh, Aman, so much good advice and valuable insights from you here today. You've successfully scared us. You've successfully impressed us. You've successfully over-delivered in all the ways, Aman, and I'm so impressed with the work that you're doing. I want to toss it your way to drop those links so that listeners know where to go. But before I do so, I want to share with listeners so many of

(33:45):
the real things that Aman shared with us today. Go to his website. He's going to tell you his website in just a second. Spoiler alert: it's already in the show notes; you can click right on through. But, with that said, when you go to his website, so much of Aman's messaging is not just to scare people. It is purely to showcase: this is the reality. Whether you want to walk around with your eyes closed or not, this is a real current problem, not a future problem.

(34:07):
You'll see. What really struck me the first time I went to DeepTrust's website is there's a video. They call it "voice theft in action." You're going to see the stuff that Aman's been talking about. Hearing it is one thing, seeing it is another, but actually experiencing it is obviously a whole different level. So the brand that they're building behind the scenes at DeepTrust is right in line with the importance of the work that

(34:30):
they're doing. So, Aman, that's just me tooting your horn a little bit, but, Aman, take it from here. Drop those links on listeners. Where should they go from there?

Speaker 2 (34:36):
I think if there's one thing, one place you should go, just go to our website, deeptrust.ai. Deeptrust.ai. And then from there, there's two things I'd check out. One, our blog.
I know sometimes some of us might just want to watch content, but I'm telling you, my co-founder puts out a lot of interesting information every single week on not only the

(34:58):
problem space, but how you should go about it. Again, the way you solve this problem first is by keeping yourself informed. And then, if you want to try something a little cool and scary (Brian did mention the video), click the video and check it out; the website link is there. It's called Terrify, terrify.ai. Just take 10 to 20 seconds to have a conversation with an AI bot that'll steal your voice.

(35:21):
It could be fun, but at least it's very educational. It's definitely an educational experience. Voices are deleted once you're done using it, of course, and I guess you could have a conversation with yourself, technically. But yeah, that's what I would say. That's what I'd say.

Speaker 1 (35:37):
Aman, in over a thousand episodes, it's the first time this has been the call to action: hey, listeners, go get your voices stolen. But, Aman, it fits perfectly, honestly, because there's no way to experience it other than experiencing it. So, listeners, all those links are down below in the show notes, no matter where it is that you're tuning into today's episode.
Aman, I personally am so grateful for the work that

(35:58):
you're doing, and also the fact that you came on here and shared so transparently and in a really important way. You've opened our eyes to the realities of the current situation. We've got a fun and exciting year ahead of us, but with that comes a lot of responsibility to do things right and make sure that we're protected in all the ways. So, Aman, on behalf of myself and all the listeners worldwide, thanks so much for coming on the show today.

Speaker 2 (36:21):
Absolutely. And can I say one thing as well? Thank you for having me, Brian, and also thank you for the work you're doing. I'm going to come out of this episode and this recording feeling a little bit more energized. You know, being a founder and entrepreneur, you sometimes need as much energy as you can get, and this was honestly a very enjoyable conversation. Thank you for the opportunity.

Speaker 1 (36:40):
Thanks, Aman. Thanks, Aman. And thank you to our amazing guests.

(37:00):
There's a reason why we are ad-free and have produced so many incredible episodes five days a week for you, and it's because our guests step up to the plate. These are not sponsored episodes; these are not infomercials. Our guests help us cover the costs of our productions. They so deeply believe in the power of getting their message out in front of you, awesome wantrepreneurs and entrepreneurs,

(37:23):
that they contribute to help us make these productions possible. So thank you to not only today's guests, but all of our guests in general. And I just want to invite you to check out our website, because you can send us a voicemail there. We also have live chat. If you want to interact directly with me, go to thewantrepreneurshow.com. Initiate a live chat.

(37:44):
It's for real me, and I'm excited, because I'll see you, as always, every Monday, Wednesday, Friday, Saturday, and Sunday here on the Wantrepreneur to Entrepreneur podcast.