Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_00 (00:53):
How's it going, Giddy? It's great to get you on the podcast. We've been working on getting this thing going for a while, and I'm actually really excited to hear about your background today and to dive into the problem that you're solving, because it's pretty relevant to some of the work that I'm encountering right now,
(01:14):
which is just a huge ocean of a problem, I'm finding out.
SPEAKER_01 (01:20):
I can imagine.
So, first of all, Joey, thanks for having me here. It's my pleasure. Yeah, looking forward to a great discussion.
SPEAKER_00 (01:28):
Yeah, yeah, absolutely. Well, why don't we start with your background? You know, how you got into IT, how you got into security, what made you want to go down that path? Was it something that interested you? What does that look like?
SPEAKER_01 (01:43):
Yeah, so I would say it started when I was in high school, actually. I always loved programming, math, solving complex problems. I started to deal with cryptology almost as a hobby. And a couple of years after that, I joined the Israeli military, like every Israeli, and joined the 8200 unit, right?
(02:05):
Like the Israeli NSA. And it happened, which I didn't know when I was recruited, that I actually got to the same place and dealt with the same problems as when I was a kid, right? On the cryptology side. I served for X amount of years and then moved on to the industry. But I would say that was what cemented my love for data analytics, complex algorithmic problems, and of course dealing
(02:28):
with cybersecurity, which was not the terminology used then. It evolved along the way, but cybersecurity types of problems and finding the right solutions for them. So I always loved it, as I said, since I was a kid.
SPEAKER_00 (02:41):
So talk to me, okay, talk to me about the crypto side in the 8200 unit, right? I've had a lot of 8200 people on. And I know you can't tell me any specifics or anything like that, obviously. I don't want you to; don't tell me anything you can't, right? But what is that like for crypto? Can you talk to me a little bit about what that
(03:03):
looked like? Were you creating crypto algorithms? Were you deploying it in, you know, harsh environments? Just what is that?
SPEAKER_01 (03:10):
Yeah, so as you said, I cannot share much, but I can tell you that we dealt with code cracking at the end of the day, right? Similar to what the NSA does. So at the end of the day, there's encrypted communication of some sort, and the job is to find systematic ways, at scale, to crack it. So that's what we were dealing with.
SPEAKER_00 (03:29):
Hmm.
SPEAKER_01 (03:29):
Many years ago. Probably the information is very, very outdated by now, but that's how my career started, actually.
SPEAKER_00 (03:37):
Yeah, it's fascinating to me, because you always think anything digital, the NSA, the 8200 unit, Russia, China, they could just break into it, you know? You just kind of assume that as an outsider, right? And then you read a story about how the NSA and the CIA came together and created a company in Germany that was, you
(03:59):
know, creating these devices to sell to our adversaries, devices that had a backdoor built into them, right? Because we couldn't figure out how to break the crypto that everyone is using. It's not necessarily that proprietary. It's out there; everyone's using AES-256 and stuff like that, you know.
(04:21):
So to go to that extent, to go to that length, to me, that's pretty extravagant, right? It kind of just shows you the difficulty that they're facing.
SPEAKER_01 (04:32):
Yeah. No, I agree. At the end of the day, right, the NSA, CIA, 8200, GCHQ, all of them are intelligence agencies, right? So they go a long way to find the best sources of information, to do whatever their country is asking them to do. So it's complex, it's tough, many things are
(04:53):
impossible, so not everything can be done. But there is a lot of creativity, which I think is one of the reasons, not just for myself, right? As you know, there are a lot of Israeli founders coming from 8200, because it's a unique place where on one hand you are dealing with very complex problems, many of them not possible to solve, or you don't know ahead of time whether they will ever be solved, which leads to a
(05:13):
lot of creativity, a lot of opportunity to experiment with technologies. And that's, I think, one of the reasons why there are so many startups started by founders that actually served there, because there's so much, I would say, training and experience in dealing with the uncertainty of technology and its ability to actually provide a solution at the end of the day.
SPEAKER_00 (05:34):
How often would you say you would be handed a problem where you would come back and say, like, yeah, we can't do X, Y, and Z? We have to go another route with it?
SPEAKER_01 (05:46):
If you're talking about my service, then a lot. I mean, I would say it's not just about getting to a dead end. It's more about taking projects where you don't have a clue whether you'll ever be able to be successful with them at all. Because you just don't know, right? Yeah, and sometimes it takes a while, months, years, to figure that out.
SPEAKER_00 (06:27):
How's it going, everyone?
Before we continue on with this episode: this episode is sponsored by Bonfi AI, as you probably guessed. But as always, that doesn't mean that they told me what to say or anything like that. They simply believe in the podcast, they believe in the product that I'm putting together, that I'm putting out,
(06:48):
and they wanted to support the podcast. So Giddy came on, and, you know, all of the questions are unscripted, as always. And they have a fantastic product that I think will help a whole lot of companies out there, because I know firsthand how big of an issue data security and data governance can be for companies.
(07:08):
So, with that, please enjoy the episode. This was a fantastic conversation, and please check out Bonfi AI down in the links in the description of this episode on whatever platform you find it on. Thanks, everyone.
SPEAKER_00 (07:52):
How did you develop the skill set to actually figure that out? And I ask because, you know, when people are getting started in cybersecurity, I always recommend that they start on help desk, right? Because you get a lot of different experience on help desk.
Yeah.
And one of those factors with help desk is you learn very
(08:13):
quickly how to identify a problem that you're not able to solve, that someone else on your team can probably solve, or that you have to go to Google for, or whatever it might be, right? But you learn immediately: I don't know this. I haven't encountered it, I haven't seen it, I don't know what it is, right? And that's a really critical point, because
(08:36):
you're using your time and resources efficiently, which is what you need in a help desk environment, to turn over these problems as quickly as possible, really no matter what the environment is, right? How long did it take you to develop that kind of skill? And then what tools did you implement to factor
(09:00):
it in, to be like, okay, this is definitively something I can't do, right? Does that make sense?
SPEAKER_01 (09:06):
Yes. But I would say that there's a lot of difference in the upbringing, as you said, of security professionals, security analysts that are starting with the help desk and actually learning it from the ground up. A lot of what you need to do is to supplement it with research, right? So a lot of the way to handle the unknown is research of different sorts, right? That's when you're developing new techniques, new concepts,
(09:28):
new ideas, and some of them, almost by definition, are not the ones that are going to work at the end. So I would say it's a mix of a lot of different skills, but also an environment that actually supports you in doing something and spending months sometimes on it, without knowing whether you're taking the right approach or whether the problem is solvable at all.
(09:49):
Which, again, rolling forward in my career in high-tech, in startups, I think was a great background in how to tackle complex problems where it's not clear whether they're solvable in a practical way. And when you do solve them, you create a big technological moat, right? You solve some big problems for a lot of organizations, and it's
(10:10):
super exciting, but in many cases it's just not clear upfront, despite the experience and despite maybe the supporting environment and resources, whether you'll get to the right solution. But that's part of what excites me. It's part of just dealing with the unknown.
SPEAKER_00 (10:25):
Yeah, it kind of gives you a bit of a rush, you know, when you're going into something that's unknown.
SPEAKER_01 (10:31):
It does.
SPEAKER_00 (10:32):
Yeah, you know, I'm getting my PhD right now, and not that my field is completely unknown, but there isn't a whole lot of material out there on what I'm trying to research. And I'm kind of pulling three critical areas together and trying to find that overlap that works, that meets the
(10:52):
requirements and whatnot, which is something completely different from every other level of study that we've been taught. All the way through getting my master's degree, I was always taught, like, hey, you have a paper, it's due on this date, it needs to be on this topic, whatever, right?
(11:12):
You have a project.
Right.
And now I'm creating the tasks. I'm figuring out the topic. I am then creating the tasks and creating goals for the end of each class, of what I need to deliver. And the university is just letting me do it. And it took me, man, probably a semester, a full
(11:36):
semester, maybe a semester and a half, to actually figure that out. Because I'm just sitting here like, well, what do we do? You know, like, I don't know how to figure any of this out. And then my chair finally broke it down for me. He's like, you're the one that's deciding. You're researching. That's what researchers do.
Exactly.
SPEAKER_01 (11:55):
Yeah, so I would say that, going back to my early career in the military service, that was a lot of what we did, right? We had the high-level missions, but what has to be done on a day-to-day basis, how to deal with them, what to develop, how to develop it, how to test it: there was a lot of freedom to operate, which again was a great learning experience for startups, right?
(12:17):
Because many of them are starting that way, right? You have ideas for something you want to solve, at least on my end, ideas or problems that people did not solve before. You have some concept, you are trying to invent something from nothing and make it happen and turn it into a business. So I think it's very, very similar to that. That's why I said earlier, right,
(12:37):
that I think the analogy is better to what you just mentioned, which is about research. It's a mix of research and practical problem solving, and putting that together, if you're looking at a startup in a business context. So, a multidimensional type of innovation.
SPEAKER_00 (12:53):
Yeah, yeah, absolutely. So, where did you go after 8200? Where did you go to actually scratch that itch for research?
SPEAKER_01 (13:03):
So I worked a few years just after my military service. It wasn't very long, it was about five years, so longer than the minimum, but I was not a military career person, right? So I got out as a young captain, I think, if I remember correctly. It was a while back. Yeah, so I worked a few years in the industry, in a kind of locally held company, but I always had the urge to start my own startup. Since I was a kid, as I said,
(13:26):
I started to work on developing a lot of code, and I just had to do that, even before I knew the term startup, right? It was clear to me that's what I want to do. And that's when I started trying to embark on it, right? So I had a startup in what would be considered today the open source intelligence space, then started Skybox Security, which I was CEO of and ran for many, many years.
(13:49):
Got it to a pretty nice size, sold it to private equity, stayed a bit more, and left it a few years ago. And my last gig, let's call it that way, is bonfi.ai, which I started in '24, along with my co-founder and CTO Danny Kippin.
SPEAKER_00 (14:07):
So, after 8200, you go and work for a normal company, right? I'm trying to compare and contrast, right? Because in America, let's just assume someone from the NSA stops working at the NSA and they go work for a normal company, and somehow, you know, people there find out that they
(14:28):
used to work for the NSA. That person is like gold. I mean, everything that that guy or girl says is like gospel. They can't do any wrong or anything. You would think that they literally walk on water. But in Israel, I feel like that would be completely different, because
(14:48):
it's like, you know, you walk into a place and you're like, yeah, I used to work for 8200, and they're like, oh, good, we have a whole hall of 150 of you guys. You know, is that what it's like?
SPEAKER_01 (15:07):
Probably now more than when I started my career, because back then, a few things. One, I never told anyone. Not family, not my wife; no one knew where I served after my service. I'm not talking about during my service.
So now that I'm openly saying I dealt with code cracking: it took me probably 20 years even to say the words. No one knew, actually. I said kind of intelligence corps, and that's it. So no one knew exactly where I served. And I think the units were smaller then.
(15:29):
They were not small, but smaller than what they developed into over the last 20-plus years. So it was common, you know, to see veterans that came from those units, but not as many as you see today. And definitely the visibility into where you came from and what we did there was zero, right? We never talked about it.
SPEAKER_00 (15:48):
And is that pretty typical? You said that you did like the minimum service amount. Is that pretty typical, three plus two?
SPEAKER_01 (15:55):
So it wasn't the minimum, but it wasn't very long, right? It was five years. Typically in Israel, depending on the year, it was two and a half to three years. That was the minimal service, at least for boys.
SPEAKER_00 (16:03):
And like, do a lot of people typically go that route? I mean, I would assume so, right? Because there are probably only a select few that stay in for multiple terms.
SPEAKER_01 (16:18):
So there are a lot that are taking, let's say, another year, two, or three of extension to that, like I did, right? Especially, you know, intelligence corps, special forces, or air force and the like. But, you know, I really enjoyed the service there, and I didn't want to stay a single moment more than that.
Nothing bad there. It was actually a lot of fun. We did a lot of great projects and had a lot of
(16:40):
opportunities to develop there. But I was just there, right, on my way to the industry, to develop products and start my own companies. So after five years, I said, okay, it was a great experience, but enough for me.
SPEAKER_00 (16:48):
Yeah, yeah. No, that makes sense. I could imagine myself going one of two ways, right? Either I'm career military, you know, for the entire career, or I'm in there maybe one or two contracts, learning as much as I possibly can and then moving on, right? And I feel like that would be pretty typical,
(17:10):
especially for this industry and this skill set, you know. It just makes sense. Yeah. People in this industry need to constantly be learning and doing something new and everything else, you know. It wouldn't be like them to stay in for so long.
So, afterwards, talk to me about how you identified the problem with, you know, large data sets that you're currently working toward solving. Because I'll tell you right now, right? I was on a call with a customer a couple weeks ago, and they were talking about implementing Microsoft Purview for terabytes of data that they had. It was on-prem, it was in the cloud, and everything.
(17:52):
And they told me that they were starting from literal zero. I didn't think it was literal zero; I figured you have to have something turned on, you know?
Sure.
I get into their environment, and it is literally zero. There aren't even permissions to use the services, right? And I'm looking at it, and it's the first time I've ever looked
(18:13):
at Purview. And, you know, maybe an hour in, I immediately thought to myself, this isn't my specialty. There are things on the fringe of my specialty that I can get into, learn relatively quickly, make progress on, and move forward, right? But this is beyond that, where it's like you don't just
(18:35):
jump into the deep end with this thing, you know. You kind of need some guidance. And just looking at it, I mean, it's a minimum eight-month project, right? Minimum. There's no going around it, right? So the problem is massive. And I didn't even realize that it was that big of a problem.
SPEAKER_01 (18:53):
Yeah. No, it's a great point that you're making. And maybe let's generalize it a bit, and then we can get specifically to Microsoft Purview and the like, because I agree with you that it leaves a lot to hope for, let's call it that way, from a data security perspective. But if you're looking at data security, it's one of the segments in the cybersecurity space that was the
(19:14):
least served, in my view, over the last 15 to 20 years. There are hundreds of data security companies, hundreds of products, and I would say generally almost none of them help organizations address some basic needs: knowing where they have sensitive data, being able to detect and prevent the right type of leakage or misuse of
(19:36):
information, being able to quantify it from a risk perspective. That's what is lacking.
Now, when we started to think about working on Bonfi, which stands for bona fide AI, it was when Gen AI started to take off. And we said, wow, there's a whole domain here, a data security domain, that is completely unready for AI.
(19:58):
And when we dug even more, we said, okay, it's unready for itself. I mean, data security solutions are not even a fit for the world predating Gen AI. The ability to classify and accurately analyze content, the ability to have a single platform that can be activated, let's call it that way, in a multi-channel way, in a consistent way, and at scale:
(20:20):
they don't have it. They are not adaptive. They don't have context they can actually analyze content with. They're just not a fit. And that's before Gen AI.
And then we started to see what's happening with the adoption of Gen AI, regardless of whether the adoption is "I'm going to use Microsoft 365 Copilot," or "I'm going to use ChatGPT," or "I'm going to use some embedded Gen AI in some other custom application," or whatever it might be.
(20:40):
The problems are the same and very similar to each other. The solutions are just not providing the visibility or the accurate analysis, and therefore no practical way to detect and prevent those types of risks. Data in motion, data at rest, data in use: a super complex problem. And that's why we decided to start the company, right? Bonfi.ai.
We launched a few months ago, already with customers in
(21:01):
production, and we're making our inroads into the market. The more we get into it, the more we see exactly where users are: organizations that you would expect to have some controls, maybe not great, maybe an initial implementation, maybe not the most sophisticated processes, have very, very little going on. And I think the reality is that they are using tools and
(21:24):
methodologies that are completely outdated. I mean, the concept, for example, of discovering everything, classifying everything, and asking humans to review the classifications on terabytes of data doesn't make any sense. No one will ever finish those projects. It can take eight months or five years. And by the time you finish them, the classification is probably already wrong, because the context around understanding
(21:47):
each piece of content, a document, an email, whatever it might be, is completely off. So that's one issue. The second issue: with the adoption of Gen AI, more and more of the content is generated and used on the fly. Think about it. Let's say I'm writing an email assisted by Gemini or Copilot or anything else. I'm not saving the email, letting some machine classify
(22:10):
it, maybe having a human review it, and then doing something later on. It's created and shared instantaneously. Think about web use of, let's say, ChatGPT or similar sites, regardless of whether they are internal applications or commercially available applications, etc. So the world is shifting from data at rest to data in motion. And even for data at rest, the world, and the vendors
(22:32):
associated with it, adopted methodologies that do not make sense for organizations. They look very theoretical: of course, let's discover everything, we'll get great visibility, and with the visibility, now we can control. Really? Show me one organization that actually, after scanning 50 or 100 terabytes, understood what's there and took any meaningful action. Now think about your bank, and you are now scanning
(22:54):
your 50 or 100 terabytes. And God knows you're going to find a lot of banking information in your S3 buckets and Azure data stores, etc. What will you do then with that? Of course you are a bank, and you are going to have a lot of banking information. What happens then? Probably not a lot, right? So the world is changing, right? Much more generated content, shifting from data at rest to
(23:16):
data in motion; organizations need actual risk mitigation, visibility and risk mitigation, not just discovery. And a lot of the solutions, both old and new, just don't hit the mark there. And that's why we started Bonfi.
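The scale point made here can be made concrete with a back-of-envelope calculation. This is an illustrative sketch only; the file density per terabyte and the seconds-per-review figure are assumptions, not numbers from the conversation:

```python
# Back-of-envelope: why human review of classifications doesn't scale.
# All inputs are illustrative assumptions.

TERABYTES = 50
FILES_PER_TB = 1_000_000              # assume ~1M documents/emails per terabyte
SECONDS_PER_REVIEW = 30               # assume 30 seconds of human review per item
WORK_SECONDS_PER_YEAR = 2000 * 3600   # one reviewer, ~2000 working hours/year

total_files = TERABYTES * FILES_PER_TB
person_years = total_files * SECONDS_PER_REVIEW / WORK_SECONDS_PER_YEAR
print(f"{total_files:,} files -> {person_years:,.0f} person-years of review")
# -> 50,000,000 files -> 208 person-years of review
```

Even with these modest assumptions, a 50-terabyte estate works out to centuries of single-reviewer effort, which is the "no one ever finishes" argument in numbers.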
SPEAKER_01 (23:30):
Makes sense?
SPEAKER_00 (23:30):
Yeah.
No, that makes a lot of sense, because, you know, I've been in the industry for, I don't know, maybe 10 or 12 years at this point, right? I've spent a large amount of it in the healthcare and financial industries. And I can't name a single time where we knew
(23:52):
where the data was, we knew how it was classified, we knew all the different tags on it, we knew the policies around it. We kind of bought these tools, and, you know, it was a multi-year project that just never seemed to end, right? And I've never seen an environment be that knowledgeable about its data. Most of the time, what they do is they throw it into a file
(24:14):
share, they throw it into a database, they encrypt it, put least privilege on it, really enforce it hard, and that's it, because they don't know how else to scale with their data. Because, like you said, there's a manual process involved with these legacy technologies, where you need someone that comes in and actually says, yeah, that's
(24:36):
tagged right, that's classified right, let's write this policy around it. And there are millions of pieces of data in just a few terabytes, you know. It's not like it's an insane size that we're even talking about.
SPEAKER_01 (24:51):
Like, I agree.
SPEAKER_00 (24:52):
Yeah, for modern computing, it's a manageable size, but then you look at the actual data, and it could be millions of pieces of data that you now have to classify somehow. And it's like, well, that person is going to do nothing else other than that.
SPEAKER_01 (25:06):
Yeah, and it'll be wrong by the time it's done. I would say it's almost inhumane to ask humans to classify it. No, seriously, so much data, and all of that without the use context, right? Part of the move from data at rest to data in motion is not just because of the generation and use of Gen AI, of course, the right technologies in web traffic, Copilot-type use, and a lot of other ChatGPT-like tools.
(25:29):
That's part of the issue. But the second issue is that classifying a piece of content without understanding the context of the business, and the context in which it's being used, is almost meaningless. Again, take the example I mentioned earlier, but let's shift it to healthcare. Let's say you want to find where you have PHI. You'll run discovery, and after eight months you'll see that you have PHI everywhere, okay?
(25:49):
Because that's what you are doing as a healthcare provider. So what would you do then with the information? The fact that you have sensitive data is not the issue, right? Every business has sensitive data; otherwise they probably should not exist. If they have no proprietary information and they're not a custodian of customer data, why are they even there? So the question is not whether there is sensitive data.
(26:11):
The question is: can sensitive data leak to the wrong place? Can it leak by the wrong sharing? So let's take an example now; let's get to the Purview and Copilot and Microsoft world. Let's say I'm using Copilot, getting assistance writing some document, which is great. It uses the Microsoft Graph to be exposed to everything I have access to, whether I should have it or not.
(26:32):
Copilot is writing something nice, it looks good to me, so I say great, I'm going to share it with a customer. Fine. I may leak a lot of sensitive data by that. Let's say I share it a bit recklessly: I open the share for the entire folder or the entire SharePoint site, and not just for this file specifically.
(26:53):
Let's say I do that because I want to make sure the customer has the right access. I didn't consider the fact that maybe some colleague of mine, or I myself, put some other customer's data in the same place, which was relevant only for that other customer. Who would know that? So the point is that the risk is not the fact that there is sensitive data, right?
(27:14):
It's the fact that it's being shared mistakenly, maliciously or non-maliciously, recklessly or not, with the wrong party. And it doesn't matter whether the sharing is opening a share on data at rest, opening, let's say, permissions for someone to get to it, or sending it over email or over chat, or taking a piece and putting it in some unsanctioned ChatGPT-like site. That's the problem. And I think that's what the industry missed, right?
(27:35):
Data security is not different from other security disciplines. It should be risk-oriented, it should have context. Without that, the problems will never be solved, in my view at least.
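The folder-oversharing scenario described in this exchange can be sketched as a simple check. This is a minimal illustration, not any real Purview or Microsoft Graph API; the folder contents, customer names, and the file-to-customer mapping are all hypothetical:

```python
# Sketch: flag a share grant whose scope exposes files belonging to
# customers other than the intended recipient. All names are invented.

def overshared_files(folder, intended_customer):
    """Return files in the shared folder tagged to a different customer."""
    return [name for name, owner in folder.items() if owner != intended_customer]

# Hypothetical folder contents: file name -> customer the content belongs to.
folder = {
    "proposal_acme.docx": "Acme",
    "pricing_acme.xlsx": "Acme",
    "contract_globex.pdf": "Globex",   # a colleague parked this here
}

# Sharing the whole folder with Acme silently exposes Globex's contract.
print(overshared_files(folder, "Acme"))  # -> ['contract_globex.pdf']
```

The point of the sketch is that the risk lives in the mismatch between the share's scope and the content's ownership, which is invisible without some business-context mapping.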
SPEAKER_00 (27:45):
So you have an AI in the background that is doing what? And I'm asking because you're a pretty young company, and from my limited knowledge of AI models, right, they get better over time because they're seeing more data, more use cases and situations and
(28:06):
whatnot. And so they're able to make more intelligent decisions over time. Did you find that to be a limitation, being so young, so early on, because you have to train the model to some extent more? Or, you know, how did you overcome that difficulty? Maybe it wasn't a difficulty, and I'm wrong.
SPEAKER_01 (28:24):
Yeah, for sure.
It's a it's a great question.
So not because of that, butcoming to think about it, that
actually we can answer thatquestion as well.
But the reason what we did, weor what we're doing, we
developed technology that doesself-learning of the business
context.
So we're doing it uh by again,unique technology that can get
to structured andsemi-structured data source in
(28:45):
the organization, understandentities, basically create kind
of big knowledge graph thathelps us to identify entities in
context and other techniques.
But I would say that just let'sfocus on that for a second.
It's basically an adaptivelearning technology that uh
we're using, which can learnextremely fast based on the
customer environment.
Of course, solely for them.
(29:05):
We never take the data, we don'tcreate models out of that, not
for us and not for othercustomers.
It's solely for the purpose ofthe specific uh customer of
ours.
So it's very fast, it can takehours to a day or two, get the
full business context, it'supdated automatically, so it's
adaptive to the changinglandscape of customers,
employees, groups, stacks,whatever you have in the
(29:27):
organization that are itprovides the right context for
the use of our solution there.
So adaptive, fast learning,customer-specific learning, so
it has the business context ofthe customer, and that was our
approach uh that um that makesour solutions significantly more
accurate on one hand, falsepositive-wise, significantly
less because it's allcontextual, but at least as
(29:48):
important, and I know a lot oforganizations did not get to
that, is that a lot of peopleare thinking about the false
alarms is the biggest issue indata security, right?
Which is true, it's a big issue,and we're doing a significantly
better job due to the context inin showcasing or escalating only
the right uh let's say alerts.
But I think there is at leastprobably at least equal, equal
sized issue, if not bigger,which is the false negatives.
(30:11):
I think it's 80, 90% of the datasecurity incidents are
completely because they are notbecause the solution today
cannot even you cannot evenexpress the type of things you
need to protect you need toprotect against with those
solutions, not talking aboutwhether they are accurate or
not.
Just impossible.
Again, think about customertrust.
(30:31):
Let's say you're a healthcare provider, and because you have a lot of PHI and a lot of interaction with your patients and members, you want to make sure that whatever you communicate with them contains only their data and no one else's: not because of some training of a model, not copy and paste, not hallucination, et cetera. How do you make sure? How do you even express that in a traditional or modern DLP?
(30:53):
By saying something like: make sure that if I'm sending information to this patient, it does not contain information about another patient. Who can do that? We can, because of our technology. So the point is that we reduce a lot of false positives, and we reduce false negatives significantly as well, right? The blind spots that every organization has in many of their data flows.
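The cross-patient check described above can be sketched in code. This is a hypothetical illustration only, not Bonfy's actual implementation: the idea is that a policy is grounded in the organization's own entity base (here, a made-up patient list), so an outbound message addressed to one patient can be checked for any other patient's identifiers.

```python
# Hypothetical sketch: contextual leak detection grounded in a known
# entity base, rather than generic pattern matching.
from dataclasses import dataclass

@dataclass(frozen=True)
class Patient:
    patient_id: str
    name: str
    email: str

# The "business context": the organization's own patient base (fictional).
PATIENTS = [
    Patient("P001", "Alice Gray", "alice@example.com"),
    Patient("P002", "Bob Stone", "bob@example.com"),
]

def cross_patient_leak(recipient_email: str, body: str) -> list:
    """Return IDs of patients other than the recipient whose
    identifiers appear in the outbound message body."""
    leaked = []
    for p in PATIENTS:
        if p.email == recipient_email:
            continue  # the recipient's own data is allowed
        if p.name in body or p.patient_id in body:
            leaked.append(p.patient_id)
    return leaked

# A message to Alice that accidentally references Bob's record:
print(cross_patient_leak("alice@example.com",
                         "Hi Alice, your results are ready. Ref P002."))
# -> ['P002']
```

The point of the sketch is the grounding: because the checker knows the customer base, "information for another patient" becomes an expressible rule rather than an unanswerable pattern-matching problem.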
SPEAKER_00 (31:14):
It's interesting, what you said. You know, probably 80 to 90 percent of the data issues or the data leaks, companies don't even know are taking place. Do you think they cannot even detect them? That's my point.
SPEAKER_01 (31:27):
It's not that they aren't using their capabilities to the fullest extent; it's that it's impossible to detect them.
SPEAKER_00 (31:33):
Right.
Do you think that if, let's assume, organizations were now able to detect 100% of it, their breach disclosures would increase significantly?
SPEAKER_01 (31:46):
It's a great
question.
Sure, if they have more breaches, they need to disclose them. But remember that part of what solutions like ours can do is help them avoid a breach. Right? Our solutions are not designed for detection only, though detection is a great starting point, because it provides visibility that can result in better controls, better training, and a lot of avoidance of at least non-malicious
(32:09):
acts, at the minimum, right? So that's one. But a solution like ours you can actually use for prevention or remediation. So you can actually avoid breaches altogether.
So think again about the example I mentioned earlier, but let's now take a mail example, right? Let's say you use Gemini, ChatGPT, or Microsoft Copilot to help you write an email. Mistakenly or not, it contains sensitive data you should not
(32:30):
have sent to whoever the receiver is. We can actually detect it in real time and block it along the way, or work in detection mode as well. So we can support both a detect-and-respond type of apparatus and active prevention for serious, let's say high-risk, situations.
So the point is that risk can be mitigated, not just detected, and mitigated in multiple ways: both by using the
(32:53):
right detection and visibility, which again result in secondary controls, secondary action items like training, tighter access, and other things, and by having real-time remediation for actual prevention of risk as well.
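The detect-versus-prevent distinction above can be sketched with a few lines of code. This is a hypothetical illustration, not the vendor's API: the same analysis result either raises an alert (detect mode) or blocks the send (prevent mode), so the choice of mode is a policy setting, not a different engine.

```python
# Hypothetical sketch: one analysis, two enforcement modes.
from enum import Enum

class Mode(Enum):
    DETECT = "detect"    # log and alert, let the message through
    PREVENT = "prevent"  # block high-risk messages in real time

def handle_outbound(message: str, sensitive_terms: set, mode: Mode) -> str:
    """Return the action taken for an outbound message."""
    hits = sorted(t for t in sensitive_terms if t in message)
    if not hits:
        return "delivered"
    if mode is Mode.PREVENT:
        return "blocked ({})".format(", ".join(hits))
    return "delivered, alert raised ({})".format(", ".join(hits))

msg = "Draft reply includes SSN 123-45-6789"
print(handle_outbound(msg, {"SSN"}, Mode.DETECT))   # delivered, alert raised (SSN)
print(handle_outbound(msg, {"SSN"}, Mode.PREVENT))  # blocked (SSN)
```

In practice an organization might run everything in detect mode first and switch only the highest-risk policies to prevent mode, which matches the gradual rollout described later in the conversation.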
SPEAKER_00 (33:07):
Yeah, this is a
whole world that I feel like
people try to avoid, honestly.
Is that something that you seeas well?
SPEAKER_01 (33:16):
Yeah, less and less. I know I'm saying something which is probably not going to be too politically correct, but people got comfortable with the lack of abilities of the data security tools. Because it is what it is, right? For compliance, we buy a tool, maybe we turn on some basic function. That's what you can do with it, and that's it, right? It's almost fine; that's what's available for us.
(33:38):
We cannot do much more than that. But I think that is changing. We're seeing a lot of interest in what we do because of AI, and not only because of problems resulting from AI adoption. I think a lot of organizations, or security professionals, feel very uncomfortable in their seats with their lack of visibility, not talking about controls, of
(34:00):
their information flows and information systems. As an industry, right, the industry got used to protecting the plumbing, not what goes through the pipes. And with the adoption of AI, everyone understands what they need to do at the minimum. They need to understand, visibility-wise, what's happening there, so they can actually have a strategy for how to mitigate the risk.
Now, we're seeing some organizations that are
(34:21):
implementing DSPMs, right, as a way to feel better. Now, let's discover all of our 100 terabytes and classify them. As I said earlier, it's fine; there's usefulness, right, in getting the high-level visibility that now, as a bank, you have 100 terabytes of banking information. Good job. But what do you do with that? How do you mitigate the risk for the bank or the healthcare
(34:41):
provider or technology company, or of IP leakage? You can't, right?
So I think that the awareness is growing very, very quickly, both for the old, let's call it pre-gen-AI, type of use cases, because people understand, yeah, maybe it's about time to do something about it, because solutions like ours, and I'm sure there will be other startups as well, make it
(35:01):
possible to solve some of those issues that were never solved properly. But definitely when you're looking at the AI use cases, regardless of whether it's a sanctioned application: let's use Microsoft 365 Copilot, but make sure that we have some decent protection upstream and downstream, where today there is none. Okay, and when organizations are looking at Purview, they look at it like you did in your story, and they say,
(35:23):
okay, great, maybe someone else can do the job. So that's that. Then there are the unsanctioned applications, right? Basically shadow AI; someone needs to inspect that as well, right? Those information flows can potentially lead to unsanctioned use of AI, right, shadow AI, and also to custom applications. So there are so many moving parts that I think security professionals and organizations all
(35:45):
understand they just need to take data security significantly more seriously. And, good news, there are now good solutions that can actually address it with very different concepts. If you try to do the same thing that the industry has done over the last 20 years, it will lead to the same results, even if you're using some smarter LLM along the way to classify. It's just not good enough.
SPEAKER_00 (36:04):
So for your solution, you said that it's learning the context of the business within the industry that they're in. Yeah. How long does that take?
SPEAKER_01 (36:13):
It takes hours to a day, or two days, and of course there's continuous learning, et cetera. But you can get great use out of our product quickly; it's designed for that, and that's not a coincidence, as you can imagine. Bringing visibility, providing useful visibility, starting to track information, understanding what's there and what kind of risk, and potentially turning on some
(36:34):
prevention can make it valuable for you in a couple of days.
SPEAKER_00 (36:38):
Yeah, it's interesting. And then just over time it becomes more and more accurate and more, I guess, in tune with your business, right?
SPEAKER_01 (36:46):
It does, but the accuracy is very high from the first moment you use it, because to ground our entity awareness, the business context, we're using the customers' corporate data for their benefit. So we're not guessing. For example, let's say we want to understand if you, as a bank, mix one customer's data with another's. We know the customer
(37:08):
base, right? That's how the system works. So we can identify them. It's not something that in the first week will be 70% correct, and maybe in two months, 85%. As long as we have the data and we digest the contextual data, it can be extremely accurate from almost day one.
SPEAKER_00 (37:23):
So, how do you ensure that you're getting all of the data? Because you also mentioned previously, you know, there are all these different, I just view them as different, I don't know, use cases or people or entities within the environment. Someone's using Copilot, someone's using ChatGPT, another person's using SharePoint to pull data from and
(37:45):
to, and whatnot. There are so many different data origination points. How do you ensure that you have them all? Like, how does that even work?
SPEAKER_01 (37:54):
No, it's a great question. So of course, we cannot force anyone, right, to have all of the information flows or systems available to Bonfy, but let's discuss our side of that and what a practical implementation would look like. The platform, which we call Bonfy ACS, the Adaptive Content Security platform, is a multi-channel-architecture data
(38:14):
security solution. It's designed with one core content analysis engine that uses the business context and business logic to analyze and take actions as you define them, regardless of the source of the data. It can be email, it can be file sharing for data at rest, it can be web traffic coming from your browser, and it can
(38:35):
work in the same way on different information flows with the same system, same logic. So you can apply the same customer trust policy to different information channels. You don't need to do it seven different times with different systems, et cetera.
Now, when we are onboarding customers, of course, they don't start on day one with everything. Boiling the ocean is typically a great recipe for a failing
(38:56):
project, right? So typically what happens is they start with one or two connectors, let's say, to the most important systems. Between us, most of them start with Microsoft 365 or the like, because 80, 90% of enterprises have it, and it has a lot of issues. Not to blame Microsoft, but because it's a core data store of so much information, collaboration tools and emails
(39:17):
and all of that stuff, naturally it has a lot of sensitive data, and the controls around it typically are very rudimentary. Even if customers bought Purview, as we said, many of them make very little use of it. If you put Copilot adoption on top of that, it's out of control. And we're seeing a lot of organizations urgently looking for solutions for that. So many organizations start there, though they don't have to.
(39:38):
They start with, let's say, a connector to Microsoft 365, maybe another connector like Salesforce or another system that they wish. They start to get visibility, and automatically get entity risk scoring, which is a byproduct of what we do: because we quantify the risk of every piece of data which is sent or shared, we can actually quantify the risk of the actors as well.
(39:58):
So it can support an insider risk management program if they wish. The organization doesn't have to use it, but it comes out of the box as a byproduct of what we do. Then, tune policies, right? We have out-of-the-box policies for a lot of different topics like privacy, IP leakage, customer trust, toxicity, and many others. But they can tune them, turn them on or off, add their own policies. So tune them, watch it for a
(40:22):
while, days, a week or two, see what's there, and then start to define some automation rules: under what conditions do we want to escalate to the SOC, let's say via SIEM integration, and when do we want to take active enforcement action, because something looks like a mass PHI leakage where we're not going to take the risk, and we just want to block that type of sharing, et cetera.
(40:43):
So you can turn it on along the way. And the second dimension is to say, okay, great, now it works great. I trust it, I know how to use it as an organization. Let's connect it to more information flows: maybe web traffic via a browser extension, maybe other SaaS applications like ServiceNow or Jira or Salesforce or the like, where you have a lot of sensitive information with very little
(41:05):
visibility as well. So it's typically a gradual implementation once you get it going. Again, part of the benefit of having a multi-channel architecture is that you have one policy plane, one business context, applied everywhere, so it saves a lot of effort and time for the organization on one end. Second, it provides a lot of visibility into the actual risk. Think about whenever you have a user, let's say,
(41:27):
a sales rep who is maliciously going to leave the company, and you suddenly see some strange web traffic, and maybe some information being sent to some unknown Gmail address, and maybe a file share being opened with an external party. Maybe all of it together looks like a pattern I need to notice. So the fact that the solution, along the way, will tap
(41:48):
into multiple channels can provide even better visibility for organizations to address, let's say, higher-level risk.
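The departing-sales-rep pattern above is essentially risk aggregation across channels. A minimal sketch, with made-up events and scores, of how per-channel signals could roll up into one entity-level score so that a pattern stands out even when each event alone looks minor:

```python
# Hypothetical sketch: aggregating per-channel risk events into an
# entity-level score. Events and scores are invented for illustration.
from collections import defaultdict

EVENTS = [
    {"user": "sales_rep", "channel": "web",   "risk": 0.4},  # odd upload
    {"user": "sales_rep", "channel": "email", "risk": 0.5},  # unknown Gmail
    {"user": "sales_rep", "channel": "files", "risk": 0.4},  # external share
    {"user": "analyst",   "channel": "email", "risk": 0.2},
]

def entity_scores(events, threshold=1.0):
    """Sum per-user risk across channels; flag users over threshold."""
    totals = defaultdict(float)
    channels = defaultdict(set)
    for e in events:
        totals[e["user"]] += e["risk"]
        channels[e["user"]].add(e["channel"])
    return {
        user: {"score": round(total, 2),
               "channels": sorted(channels[user]),
               "flagged": total >= threshold}
        for user, total in totals.items()
    }

print(entity_scores(EVENTS)["sales_rep"])
# -> {'score': 1.3, 'channels': ['email', 'files', 'web'], 'flagged': True}
```

No single 0.4 or 0.5 event crosses the threshold, but the multi-channel sum does, which is the point of having one context across all channels rather than a separate tool per channel.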
SPEAKER_00 (41:55):
That makes sense. So as an end user, am I onboarding my domains into your platform and then activating all of the different integration points that I have?
SPEAKER_01 (42:08):
It's actually pretty simple. It depends, of course, on the information flow. But for SaaS applications like Microsoft 365, Salesforce, Jira, and all of them, we're just using their APIs. So it's very, very simple. On our end, it takes a couple of minutes to put in the, let's say, service keys and the like, and then it's working. Seriously, as simple as that.
(42:30):
We also have an embedded SMTP server. So if people want to send emails via us, right, using us as a relay with all of the remediation actions, we can do that as well, as an alternative to the APIs. We can also support web traffic and custom applications, and we're actually going to introduce an MCP server interface soon. So basically, different interfaces can use the same engine, regardless of whether it stemmed from machine-human interaction
(42:53):
with the world, let's say another system or another agent, or from workspaces like Microsoft or Google workspace-type solutions, or SaaS applications: all of them can flow to the same engine for the same type of risk analysis. So it's pretty powerful.
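The multi-interface design just described (API connectors, an SMTP relay, browser traffic, an MCP interface, all feeding one engine) can be sketched as a set of ingress adapters that normalize events into one shape for a single analysis function. This is a hypothetical sketch of the design idea, with invented function names, not the vendor's actual code:

```python
# Hypothetical sketch: several ingress adapters, one analysis engine.
def analyze(content: str, channel: str) -> dict:
    """One engine, one policy, regardless of the ingress channel.
    The 'policy' here is a toy keyword check for illustration."""
    sensitive = "confidential" in content.lower()
    return {"channel": channel,
            "sensitive": sensitive,
            "action": "escalate" if sensitive else "allow"}

def from_saas_api(payload: dict) -> dict:        # e.g. a SaaS connector
    return analyze(payload["body"], "saas")

def from_smtp_relay(raw_message: str) -> dict:   # embedded mail relay
    return analyze(raw_message, "email")

def from_browser(event: dict) -> dict:           # web traffic via extension
    return analyze(event["text"], "web")

print(from_smtp_relay("Subject: plans\n\nConfidential roadmap attached"))
# -> {'channel': 'email', 'sensitive': True, 'action': 'escalate'}
```

The design choice this illustrates is that adding a new channel means writing one thin adapter, while policies and risk logic stay in one place.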
SPEAKER_00 (43:08):
Huh.
Wow.
I mean, legacy technologies are very rigid in this space. You know, they're very rigid, not really working with, you know, where the organization is, where they may have their data, you know, 100%, right? And like all of the other legacy solutions that I've seen, you know, it's kind of like, yeah, we can identify all
(43:30):
of this stuff in SharePoint, but we're not going to be able to see, you know, Copilot. We're not going to be able to see ChatGPT, right? Like all this other stuff. And even the ability, like you mentioned, to potentially build your own web interface for it, you know, for your internal team or whatever it might be. It's totally there, and it's powered by your solution.
(43:53):
I mean, that's incredibly powerful, incredibly useful to big and small companies, but I feel like bigger companies would use that feature a whole lot more, because they're more about: let's buy a product and then build it for the way that we use it and need it, right? And so you're turning your product into the
(44:16):
engine for someone else's, you know, web interface or whatnot. That's huge.
SPEAKER_01 (44:22):
Yeah.
No, we agree.
That's exactly why we do that. We believe that, you know, a vision is great, but it needs to be translated into a real product, a real platform, one designed to fulfill the vision. I think we're seeing even startups, right, in the DLP or DSPM spaces, that are doing better with the old
(44:44):
concepts, right? Taking the old concepts and implementing them again with, let's say, better technology, better filtering, better architecture. But we believe that won't solve anything. You just need to change the concepts of how data security is done, because it's just too complex for organizations, right? Having some data access governance in just a nicer
(45:04):
interface, when you have millions of files, doesn't change anything, right? It won't solve any problem, right? So the space has to be reinvented, and reinvented in a way which provides both the visibility and the control along all of those channels, like the example you just mentioned, custom applications for instance, with the same engine, without the need for the security teams
(45:25):
to be experts in everything, which is part of the issue, right? Think about today, let's say a large enterprise: how many different information systems or information flows do they have? And each one of them might have its own data loss filtering capability. It's crazy. And then you expect the security team to be experts in all of that stuff, with consistent implementation of the policies they have, and to provide visibility for the insider risk
(45:48):
or other entity risk in a holistic way. It's just impossible.
SPEAKER_00 (45:52):
Yeah.
Yeah, no, it's completely impossible, especially with the shrinking size of security teams. You know, organizations aren't spending on cybersecurity like they used to. And even the companies that had giant security teams, ones I was either a part of or knew about, those teams are shrinking as well.
(46:12):
And they're huge companies; everyone would know their name if I said it, right? It's becoming completely unfeasible, especially with DLP and just data protection overall. Like, as soon as I looked at Purview, I mean, I immediately thought, yeah, you're going to spend eight months on this thing minimum, and you're not going to see any value until then.
(46:35):
Yeah.
SPEAKER_01 (46:35):
And again, the issue is not the engineering side, right? Whether the developers are good or not, it's not about that, and I'm sure they are good and talented, like in a lot of other companies. The issue is that the concepts are just not feasible to implement, between us, even in a small organization, not talking about large organizations, right? And if it's not feasible and you shove so much work onto tight security teams, then it's not going to be used.
(46:57):
Or it's going to be used to the minimum that the organization can actually spend time on, which is going to be way less than what's expected, even as a minimal control for any compliance requirement, not talking about AI governance or any common-sense security apparatus.
SPEAKER_00 (47:12):
Yeah.
Yeah, that makes a lot of sense. Well, Giddy, you know, we're at the top of our time here, and I really enjoyed our conversation. This was a fantastic conversation, very, very educational, honestly, for myself, because this is an area of security, one of the very few, that I haven't spent that much time with, right? And as soon as I looked at it, I was like,
(47:35):
oh my God, this is a giant issue. Like, this is so much bigger than what I had assumed before going into it, right? And so it was really helpful to kind of build in the context that we did.
SPEAKER_01 (47:47):
Yeah, no, thank you. Thanks for the time on that. And maybe just to reiterate the last point: all of the cybersecurity industry, right, practitioners, everyone, is so used to focusing on the plumbing. While, between us, almost the only thing they need to protect is actually the information, right? At the end of the day, that's why we exist, right? If there were no information in the systems, no security, you
(48:08):
don't need the firewall, right? If everything is fine, right? You just don't need it. Okay. And that's true for a lot of other types of solutions. So it's all about the information, and the cybersecurity industry, let's say, just missed the mark, right? The information is not well protected. And everyone is hoping that if the plumbing is right and you put in the right filters or the right routing or whatever, as the different pipes
(48:31):
are connected to each other, maybe it will be good. But it's not.
SPEAKER_00 (48:34):
Yeah, yeah, no, that's a great point. Well, before I let you go, how about you tell my audience where they can find you if they want to connect with you, and where they can find your company if they want to learn more and figure out what they can do for their own environment.
SPEAKER_01 (48:48):
Yes. So, yeah, it's super, super simple. Go to our website, www.bonfy.ai, that's b-o-n-f-y dot ai, and you can learn a lot about the company. You can raise your hand, fill out the form, and we'll be happy to contact you, to talk to any one of you. If you want to talk with me, put it in the subject; I'd be happy to talk with you personally as well. But I think the website is probably going to be the best
(49:10):
place to look. Also look at LinkedIn, where we have the company page, on which we provide a lot of useful information. So either way works for us. We'd love to talk with anyone that has an interest in the data security domain, and especially in the context of AI adoption.
SPEAKER_00 (49:28):
Yeah, absolutely.
Well, thanks, Giddy.
It was a great conversation.
I really appreciate you comingon.
SPEAKER_01 (49:33):
Yeah, sure. Joey, thanks a lot for the time and for having me here.
SPEAKER_00 (49:37):
Yeah, absolutely.
Well, thanks, everyone. I really hope that you enjoyed this episode. I hope that you learned something about data security; I definitely did. There's a whole ocean of data out there that needs to be secured, and that is a huge issue. So, you know, if this is a problem in your environment, which it probably is, make sure that you go check out their
(49:57):
website. Feel free to contact either of us. I'll put all of their links in the description of this episode. Thanks, everyone.