
February 4, 2025 29 mins

In this conversation, I speak with Alastair Paterson, CEO and co-founder of Harmonic Security.

We talk about:

Harmonic Security’s Unique Approach to AI Data Protection:

How Harmonic Security’s Zero-Touch Data Protection uses small language models to identify and prevent sensitive data leaks, differentiating it from traditional DLP solutions.

Challenges of AI Adoption & Enterprise Security Risks:

Why enterprises are struggling to adopt generative AI safely as employees unknowingly expose sensitive data, the risks of shadow AI usage, and why visibility into AI applications is essential for organizations.

Harmonic’s Browser-Based Solution for Secure AI Adoption:

How Harmonic Security’s browser-based extension provides real-time monitoring and intervention, allowing enterprises to track AI adoption, prevent data leaks, and enforce security policies without disrupting productivity.

➡️ Get a DEMO and adopt GenAI securely with Harmonic:

ul.live/harmonic

➡️ Check out Harmonic's data leakage report, "From Payrolls to Patents":
ul.live/harmonic-data-leaked

00:00 Intro
00:12 Guest Introduction - Alastair and Harmonic Security
01:16 Background on Digital Shadows and Transition to Harmonic Security
02:50 The Impact of ChatGPT and Generative AI on Security
04:35 The Problem with AI Data Leakage and Enterprise Risks
06:20 The Evolution of Data Protection: From DLP to AI Readiness
08:45 The Challenge of Shadow AI in Enterprises
10:30 Understanding Harmonic Security's Zero-Touch Data Protection
12:15 How Harmonic Security Works - Browser Extension Overview
14:40 Detecting Sensitive Data in AI Prompts
16:50 Live Demo - Preventing Data Leaks in AI Chatbots
19:35 Visibility and Monitoring of AI Usage Across the Enterprise
22:10 Risk Classification and Training Data Considerations
24:05 Policy Enforcement and Customization Options
26:30 Future Developments - Expanding Coverage Beyond AI Apps
28:15 Final Thoughts and Where to Learn More

Become a Member: https://danielmiessler.com/upgrade


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
S1 (00:00):
Unsupervised Learning is a podcast about trends and ideas in cybersecurity,
national security, AI, technology and society, and how best to
upgrade ourselves to be ready for what's coming. All right. Well, Alastair,
welcome to Unsupervised Learning.

S2 (00:21):
Yeah, thanks for having me, Daniel. Long-time fan of yourself and the show. So good to be on here.

S1 (00:27):
Awesome. Yeah. So can you tell me about yourself and Harmonic Security?

S2 (00:34):
Yeah. Quick bit of background, as you can tell from the accent: originally from the UK. My previous company to Harmonic was Digital Shadows, which I set up in London in the threat intel space, and we were really good at spotting all the sensitive data that had already leaked out of businesses across the open, deep and dark web. I did the Series

(00:55):
A in Silicon Valley, moved here in 2015, so I'm now ten years into the US, a dual national and so on. So pretty bedded into the Bay Area. Digital Shadows was acquired in July of '22, and as you'll remember well, in November of '22 ChatGPT comes out and the world changes. I started just exploring that and talking to

(01:18):
a lot of smart people in the space and seeing what I could learn. And actually, you were one of the first people who was writing a lot about it. I started picking up your newsletter, I think in early '23, as you were really getting going on the topic. That ultimately led to me founding Harmonic Security in August of '23, so about

(01:40):
18 months old now. We're really building a new data protection technology. We're calling what we're doing zero-touch data protection, and we're harnessing the power of generative AI to build it: our own set of specially trained small language models that we're using for data protection. That allows us to do some very different things that I'll talk about. But use case one

(02:03):
for us is, ironically enough, generative AI adoption in the enterprise, and the challenges, particularly around sensitive data leaking into these different AI applications and models. We see that as the number one barrier to adoption in the enterprise, and we're helping them with that. So that's really what Harmonic's about today:

(02:24):
we're building out a pretty unique approach here to data protection.

S1 (02:29):
Okay, interesting. So there's this old space, I don't know if it's been renamed, but it used to be DLP, for outbound sensitive data going out. It sounds like it's something like that, but at the same time it's like AI readiness, and then the leakage issue. So how do you see those

(02:50):
and differentiate them?

S2 (02:52):
Yeah, it's a great point. As you mentioned, there are a few categories that we can collide with a little bit, and we're trying not to think so much about the existing categories, just focusing on the problem the enterprise is trying to solve today. That naturally leads us to overlap into some of these categories, for sure. I mean, if you think, as I'm doing, about

(03:15):
the problem space today around AI, it's kind of a top three for everybody we talk to. You're in a position of having to go and try and adopt this technology. The business is pushing for it in most cases, right? We don't want to be left behind; we need to go and get on this. But obviously there's a bunch of people worrying about the risks attached

(03:36):
to that. Principally, where does the sensitive data go? The start of that journey is visibility: you need to understand effectively what's going on today in the enterprise, because whether you like it or not, employees have already jumped in and started using a bunch of these tools and technologies. So you look at what enterprises have to deal with there,

(03:57):
and often the approach has been, well, let's just sit, block and wait, right? Try and block all this stuff, pretend it's not happening, put a policy in place. Maybe we'll buy an enterprise version of something, whether it's Copilot or ChatGPT, and we'll try to point everybody at that. But inevitably there's this

(04:18):
sort of Cambrian explosion of other AI applications and uses outside of the core apps that often get blocked, and you don't want to frustrate that either. You want to be adopting the majority of the technology where you can. So the question comes back to: how do you do that safely? I think getting visibility into it is the first part, and then the second part is, well, what about the controls?

(04:39):
And you mentioned DLP. I think that's the classic one, where organizations start to look at their existing options, whether it's DLP or trying to label all the data in a company. Those have been the two things we've tried to do for 20 years, and it hasn't really worked that well over the last 20 years. It's just such a challenge for most security teams, right?

S1 (04:59):
Yeah, it sounds great, but when you try to implement it,
it turns to garbage really quickly.

S2 (05:03):
Yeah, I mean, it really does. I'm yet to find somebody who loves their DLP or really enjoyed the process of trying to label all the data in the business. Usually it's just a compliance tick box, right? It's kind of there if the regulator asks, but is it really doing anything very useful for you? Probably not, right?

S1 (05:26):
So let's do this; this should be fun. I'm just going to give you a scenario that I think is happening quite a bit, and then let's talk about where Harmonic Security would fit into it. A big part of the use case is simply the business says, holy crap, we need to have AI immediately, right? We need marketing to be moving on this. We need

(05:48):
some sort of product that says something about AI. So we were thinking it would be something related to, like, our CRM lookup: you can ask questions about your current account or whatever. So obviously we want to hit that API; here's the internal API for that. Also we want to do some external lookups to combine

(06:09):
it with some web search or whatever, so I guess we'll need that API. And then the business kind of sends this hunk of garbage over to the product team, which has like three or four or ten internal APIs plus some other agent functionality or whatever. And then you have this combiner sort

(06:30):
of agent that actually formulates this stuff from the APIs and hands it back to the user through this chatbot. And that's like 20 different nightmares combined. So where is Harmonic Security on this?

S2 (06:44):
Yeah, I think for the reasons that you just outlined, we're not trying to solve that problem today, and I'll talk about why. November of '22, ChatGPT comes out. I think through 2023 and into

(07:05):
'24, companies were having to say that they were AI-forward; every CEO was talking about AI and telling the board and the market about it. And what happened in reality was, I think, that a lot of people spun up some sandboxes, usually OpenAI-related, and tested out some use cases, mostly around, as you say, sales or customer success activities. Very few of those have entered production. And

(07:26):
I think what's happened, and continues to happen, is that the percentage of companies really trying to build out their own AI to solve these problems, particularly the common business problems, is just going away. It's diminishing to be the really sophisticated orgs only, and what the vast majority of companies are doing is adopting third-party SaaS that has already thought about this.

(07:47):
It's what they do as their bread and butter. Are you going to build a better CRM than Salesforce and all the other players? Probably not, right? Salesforce is busy spending all day long figuring that scenario out. And I think that's true for most of the common business use cases, around stuff like customer success, technical support, the way you're going about your marketing. It's going to

(08:08):
be more that you're going to be using the latest tools, whether it's, let's say, Gamma for writing presentations or Granola for your note-taking, and all these types of things. And then some of those ultimately get built into the Microsoft and Google suites, like the big announcements this week from Microsoft and Google. So maybe you've got a bunch of employees on the Microsoft suite, sure, but

(08:29):
your employees want to use NotebookLM, right? There's that Project Mariner spinning up around how you get agents in the browser through Google. I think that's really the direction that most enterprises end up in. Then there's a bunch that have failed projects around trying to build their own, and a bunch that will have successful projects building their own. But for me, that's got a lot of

(08:50):
the attention right now. And obviously there's stuff like the OWASP Top Ten, which is smart; I think it's a good set if you're building your own. But I don't see these types of issues as the ones that the majority of CISOs we talk to are struggling with. It's more: the

(09:12):
business wants to use all these tools. We don't know what we're using today. Who knows what's going on in marketing right now? Because these people are adopting these tools, and they're firing our corporate data in because they want to get productivity out. I think that's going to be step one for the vast majority of companies, and that's our immediate focus.

S1 (09:29):
You're right. And that started immediately, basically in November of '22, because people were just dragging and dropping everything into their... okay. So what does that look like in terms of bringing over sensitive documentation, PRD documents, all sorts of stuff that should

(09:51):
not be shared? What does the interface look like for Harmonic Security to be able to see it? And is it monitoring, or monitoring and blocking, or some combination thereof?

S2 (10:02):
Yeah, great question. So Harmonic, at the core, is a browser extension that we can roll out in 30 minutes across all of the enterprise browsers. At that point, within the 30 minutes, we're starting to get visibility into all of the AI adoption of all of these third-party tools and services across

(10:23):
the company, and we can start to show businesses: that's the shadow AI challenge, effectively, but the sanctioned versus unsanctioned as well. So you may know that you've approved certain coding assistants, but it turns out your engineering team is pumping your IP into a bunch of others. It may be that you've approved an enterprise version of Copilot, and yes, your employees

(10:44):
are adopting that, but they're also creating spreadsheets in, you know, CSV, for example, putting data into that, a free tool that is absolutely not covered by your enterprise agreements, with question marks around where the data is hosted and how it's being stored and secured. Or maybe it's Gamma, which is an awesome product, but if your

(11:05):
employees are firing your data in and generating presentations with it, you probably want to know that, get the enterprise plan, and standardize on these things. So those are the challenges that we solve very quickly. We give you that visibility, part one, but the visibility is kind of the easy bit, and we give you the risk as well: which ones are training on your data, which ones

(11:26):
are hosted in geographies you don't want your customer data going into. That's the governance, the visibility, and often step one. But then we come to the controls, because it's great that we can see some of this stuff, but what do we do about it? That's really our differentiator. So Harmonic, at the core... I mentioned my last company, Digital Shadows, was looking at sensitive data

(11:47):
that had already leaked out. With Harmonic, we're using knowledge of that to build small language models that understand sensitive data really well, much like a human would. That means that instead of just doing PII and PCI badly, which is effectively what the rest of the industry does today, one, we can do those things really well, but more importantly, we can also spot things like architectural diagrams, or legal correspondence,

(12:12):
or employee financial information, or corporate spreadsheets, insurance claims data, things like that you could never spot before, because the technology was just doing a regex or some sort of basic rule-based matching. It doesn't work. We know it doesn't work, but we do it because it's all we've had and the regulator asks us to. But

(12:33):
now there's a better way, because we've got these smart models with incredibly high accuracy rates, which takes the load off the security team. This is where the zero-touch data protection comes in: because we're accurate enough that we're not noisy and annoying for the employees, we can jump in with the employee and resolve issues straight away at the endpoint, with the end user. Then this load doesn't fall on

(12:55):
the security team. They have the data and the audit; they can go and see who's been doing what and what we've been able to stop getting out, and they configure it and manage it. But ultimately we're solving the data protection challenge without the load on the security team. So why would you go about rolling out classic DLP technology, or trying to label all the data, or worrying about the types of CASB-style

(13:19):
solutions that give you some sort of insight into what's going on but don't fix the problem? We can fix the problem, and we can do it in 30 minutes with this browser-based rollout.
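
To make that flow concrete, here is a minimal sketch, assuming a hypothetical classifier endpoint and a simple form-based chat UI, of how a browser-extension content script could hold a prompt in the browser until a small-model check clears it. This is illustrative only, not Harmonic's actual implementation.

```typescript
// Hypothetical content-script sketch: hold a GenAI prompt in the browser,
// classify it for sensitive data, and only release it if it is clean.
// The endpoint URL, labels, and form wiring are illustrative assumptions.

type Classification = {
  sensitive: boolean; // did any detector flag the text?
  labels: string[];   // e.g. ["insurance_claims", "legal_correspondence"]
};

// Ask a backend classifier (e.g. a small-language-model service)
// to label the prompt text before it leaves the browser.
async function classifyPrompt(text: string): Promise<Classification> {
  const res = await fetch("https://dlp.example.internal/classify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  return res.json();
}

// Intercept the chat form's submit flow.
function guardPromptForm(form: HTMLFormElement, input: HTMLTextAreaElement) {
  form.addEventListener("submit", async (event) => {
    event.preventDefault(); // nothing has gone to the AI app yet
    const verdict = await classifyPrompt(input.value);
    if (!verdict.sensitive) {
      form.submit(); // clean: release the prompt (does not re-fire "submit")
      return;
    }
    // Sensitive: intervene in-page instead of sending.
    const proceed = window.confirm(
      `This looks like sensitive data (${verdict.labels.join(", ")}). Send anyway?`
    );
    if (proceed) form.submit(); // the user chose to ignore the warning
  });
}
```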

S1 (13:28):
Interesting. And this data could actually be used to power
a labeling project because you'd be seeing the real stuff.

S2 (13:35):
That's right. You actually get real insight into what's going on. We released a report earlier this week that you can download, looking at what is leaking out, because we've obviously got great visibility now into this type of data across our client base. And the interesting thing is that 8.5% of the prompt data

(13:56):
we see has some sort of sensitive information in it, and of that sensitive information, about half is customer data related. But there's a significant minority, I think about 15%, that is things like legal and financial information, and then you have IP and employee data and all kinds of things like that. It's not really been possible to have that visibility before. We

(14:17):
sort of kidded ourselves that by putting in some sort of DLP tuned to spot PII we had data protection covered, or with the labeling, the sort of myth of labeling all the data, when we know the reality is that most of that is inaccurate and you don't find all the data anyway. So we've been able to show what's really happening and then put controls around it.

S1 (14:41):
So is there any way you can pull up the interface? Do you want to show any part of it, or do you want to just talk through it?

S2 (14:47):
Yeah, we absolutely can. Actually, let me just check if I can log in. So I've just got an example here where we're running Gemini, and you've got an employee coming along. We're imagining in this instance that we're an insurance company, because we're working with a few of these, so these are things that we've seen before. I'm putting in some

(15:09):
data here that, in this instance, the company doesn't want getting into something like Gemini: it's actual customer claims data that's going in. We have seen this; I think it's probably pretty tempting, if you work in claims right now, to be getting something like ChatGPT or Gemini to start automating aspects of your job. So we've got an employee saying, hey, can

(15:30):
you review the following claim and propose next steps to determine if it's legitimate or not. And they've punched

S1 (15:36):
all this data in. This is worth calling out: you are on the actual Gemini site, actually connected directly to Google, and they're just doing their normal thing with whatever endpoint. And as long as they're using the approved enterprise browser, because you're an extension, you can see it.

S2 (15:53):
Exactly. And we work with all the browser types; we can install in very secure ways such that we appear everywhere and see everything that's going on, no matter what browser is being used, even if employees are installing their own. So that's the start point for us: you've got an employee doing something like this. How

(16:14):
would you spot this with existing technology? You probably wouldn't. You could argue you can match on the policy number, so I'll even take that out. But from context, you and I can see that this is still customer insurance claims data. Historically, we'd never be able to stop this. If I try to submit this and I'm running Harmonic, we do know that this is

(16:35):
sensitive data, so we've come up: we can see this is insurance claims data here. And the reason is that this has bounced off our language models, where we have a small language model that's been trained just to look at insurance claims data and understands it very well. It doesn't matter to us if it's data from Chubb or Beazley or Hiscox or whoever, because the model

(16:56):
understands generically what insurance claim data is, just like you and I do. We could recognize claims from all those companies because we know what one is. So the old world of exact matching and regexing on things goes away, and instead we can have these much higher-order approaches to stopping our sensitive data leaking out. So this is completely...
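
That contrast is the crux, so here is a small illustrative sketch, with hypothetical names and a hypothetical endpoint, of why an exact-match rule misses redacted claims data while a context-based classifier can still catch it. It is a sketch of the idea, not Harmonic's code.

```typescript
// Old world: a regex only fires on a literal pattern, e.g. a policy number.
const POLICY_NUMBER = /\bPOL-\d{8}\b/;

function regexDlp(prompt: string): boolean {
  // Misses the leak entirely once the policy number is removed,
  // even though the text is obviously still a customer claim.
  return POLICY_NUMBER.test(prompt);
}

// Model-based: a classifier trained on what claims data *looks like*
// judges the whole text from context, not from one literal token.
async function modelDlp(prompt: string): Promise<boolean> {
  const res = await fetch("https://dlp.example.internal/classify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: prompt, detector: "insurance_claims" }),
  });
  const { sensitive } = await res.json();
  return sensitive;
}

const prompt =
  "Review the following claim and propose next steps: rear-end collision " +
  "on I-80, claimant Jane Doe, estimated damages $14,200, prior claims: 2.";

// regexDlp(prompt) === false  -> the redacted claim sails through
// await modelDlp(prompt)      -> true under the assumed classifier
```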

S1 (17:16):
This is the real experience here. This is not like a demo: you actually pressed enter and then this popped up.

S2 (17:22):
Yeah. And right now this has not gone to Google; this data is still with us in the browser. But in our example here we have the option to ignore Harmonic. I'm going to ignore, and now the data is gone. We've let it go out of the door and it's sat with Google; Gemini is going to do its thing and start

(17:43):
to give us a response here. But if I come to the Harmonic portal now, and we'll walk through a little bit more of this in a minute, we have logged and audited this. So you can see that just a minute ago this was me going into Gemini, which we consider high risk, the public edition, and that I ignored the intervention, and you can go and see

(18:04):
the actual prompt data that was put in, in this case. There's a lot more we can dig into, but that is the core flow. As I mentioned, though, the starting point for most organizations, even before they get to that, is really just trying to understand what is going on in the business today, because most of them don't have that visibility. And so we start with that, and

(18:27):
so you've got usage and adoption here, where we can start to show the number of apps and the categories they exist in.
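
For a sense of what such an audit trail might contain, here is a hypothetical record shape for the event just demonstrated. Every field name here is an assumption for illustration, not Harmonic's schema.

```typescript
// Hypothetical shape of the audit record behind the portal view: who sent
// what, where, what was detected, and how the user responded.

type InterventionEvent = {
  timestamp: string;              // when the prompt was submitted
  user: string;                   // identity from the IdP
  app: string;                    // destination application
  appRisk: "low" | "medium" | "high";
  detections: string[];           // which detectors fired
  action: "blocked" | "warned_ignored" | "warned_heeded";
  prompt: string;                 // retained for audit and review
};

// The demo moment, expressed as one such record.
const event: InterventionEvent = {
  timestamp: "2025-01-30T18:04:12Z",
  user: "alastair@example.com",
  app: "Gemini (public edition)",
  appRisk: "high",
  detections: ["insurance_claims"],
  action: "warned_ignored",       // the user clicked through the warning
  prompt: "Review the following claim and propose next steps...",
};
```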

S1 (18:36):
And the apps would be things like Gemini, ChatGPT, Anthropic, or what? How do you differentiate an app?

S2 (18:43):
Yeah. So we started out at Harmonic just looking at GenAI-specific apps, like ChatGPT and Gemini. I think very quickly that distinction goes away, because pretty much all SaaS is wrapping GenAI features into itself at the moment.

S1 (18:57):
Yes.

S2 (18:57):
So we're essentially expanding, and have expanded, Harmonic to look across the whole stack of your enterprise apps. All of these on the right-hand side are AI-enabled apps that we see being active here. And then you've got newly discovered apps; we can start to show outliers, and break it out by the category of application

(19:17):
that we're seeing. So this is broader than just the core apps.

S1 (19:22):
That makes more sense, right? Like you said, AI is just an instance. It's like saying, hey, are you a database company? And it's like, what do you mean? We use a database; that doesn't mean we are a database company. AI just blends into everything. So at that point

(19:43):
it's not protection from AI apps specifically; really it's all the same stuff. You're pasting something that shouldn't be going into a form.

S2 (19:53):
Yes, exactly right. And so for us, ultimately, the difference between this data going into a Dropbox public folder or going into Gemini is: what's the difference, really? I think there is a little bit of a difference, because there's something a little insidious about how some of them are collecting and training on the data that's going in, and there's obviously this explosion in AI apps. No one

(20:14):
was campaigning to get Workday installed as an employee, but they are campaigning to jump into tools that automate bits of their job. So I think there is a distinction in one aspect, but for the most part it's all the same, right? It's data leaking out of the business, and whether it's going into AI or elsewhere, Harmonic is going to help. But our focus for use case one has been all

(20:36):
about AI adoption, giving that visibility and then allowing you to put the right controls in place. And just to show you one more cool thing: we have this whole detection catalog that we're continuing to expand, but you can see the types of things we can spot. M&A data is obviously absolutely critical; how would you spot that leaking out historically? Really difficult. But we've got models

(20:59):
that understand what that looks like. So instead of it being a rules-based approach, we have these human-readable data definitions that explain to the model what that is: what is M&A data? Why is it important? So we can interact with the end user and coach them and nudge them appropriately.
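
As a sketch of what a human-readable data definition could look like in practice, assuming hypothetical field names, something like this, where the same prose both steers the model and supplies the coaching message:

```typescript
// Sketch of a "human-readable data definition": prose that tells a small
// language model what a data class is and why it matters, instead of a
// brittle rule. All fields are illustrative assumptions.

type DataDefinition = {
  id: string;
  name: string;
  definition: string;   // what the model should look for, in plain English
  rationale: string;    // why it matters, reused to coach the end user
  severity: "low" | "medium" | "high" | "critical";
};

const mnaData: DataDefinition = {
  id: "mna_data",
  name: "M&A data",
  definition:
    "Documents or excerpts discussing a potential or in-progress merger, " +
    "acquisition, or divestiture: target names, deal terms, valuations, " +
    "due-diligence findings, or board deliberations.",
  rationale:
    "Leaked deal information is market-moving and can breach NDAs and " +
    "securities regulations.",
  severity: "critical",
};

// At classification time, the definition is given to the model as context,
// so new phrasings are caught without writing a new rule.
function buildClassifierInput(def: DataDefinition, prompt: string): string {
  return `Data class: ${def.name}\nDefinition: ${def.definition}\n` +
         `Does the following text contain this class of data?\n---\n${prompt}`;
}
```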

S1 (21:19):
That makes sense. Prompts rule the world. I mean ultimately
those are prompts. And prompts are the intelligence.

S2 (21:25):
That's right. But then, to back it up, you need a model that's got the right set of training data and is fast enough to sit inline. That's the core of our tech: having built that data set, trained these specific models, and made them really fast. So that's what Harmonic is doing, essentially. And the goal with this, of course, is that you can much

(21:46):
more safely adopt generative AI. You've got some nice reporting that you can show your AI committee about who's doing what with the data, and the fact that we're able to protect and intervene and coach the employees. But then, of course, the beauty is that we don't need to load up the security team with a bunch more work here, because we're handling this automatically with the end users.

S1 (22:06):
Yeah, it's being outsourced to the user directly.

S2 (22:10):
Yeah, but at low enough volume and friction that they don't feel it. We only get in the way when they're going to expose the company to real risk; it's not just pinging on a regex match that's innocuous because they're doing something that's fine, right?

S1 (22:25):
This is really wonderful. I wonder, and you're probably already thinking of this, but in the insights tab, do you have anything around known providers that have just clearly said that they train on the data? So it's an even higher risk.

S2 (22:39):
Absolutely, yeah. We can do the breakdown here between the public editions, the free editions, and the ones you have an enterprise license for. Just because you have an enterprise license for ChatGPT doesn't mean that people are using that versus their home edition, logging in with their personal account. So we consider the free edition higher risk precisely

(22:59):
because of the training declaration that it has. And we can start...

S1 (23:04):
There you go. Training declaration: high.

S2 (23:06):
Yeah. So if we want to drill into that, here's where we've picked it up from. OpenAI states that they may use your data to train and improve their models in that free edition. So that's the visibility we're giving the enterprise, and then you can put appropriate controls around it.
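
A rough sketch of the per-app risk metadata implied here, with illustrative records rather than Harmonic's actual catalog, might look like this: the same vendor carries different risk depending on edition and its published training declaration.

```typescript
// Sketch of per-app risk metadata. Records and fields are assumptions.

type AppProfile = {
  app: string;
  edition: "free" | "enterprise";
  trainsOnUserData: boolean;      // from the vendor's published terms
  dataRegion?: string;            // where prompts are hosted, if known
  risk: "low" | "medium" | "high";
};

const catalog: AppProfile[] = [
  {
    app: "ChatGPT",
    edition: "free",
    trainsOnUserData: true,       // free tier may train on submitted data
    risk: "high",
  },
  {
    app: "ChatGPT",
    edition: "enterprise",
    trainsOnUserData: false,      // enterprise agreements exclude training
    risk: "medium",
  },
];

// A simple cut for the insights view: personal logins to training-enabled
// editions are the ones to surface first.
const surfaceFirst = catalog.filter((a) => a.trainsOnUserData);
```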

S1 (23:25):
Yeah, that's wonderful. And then you define a policy somewhere, I imagine, and that's what gets implemented.

S2 (23:31):
Yeah, exactly. There's a ton of things you can do to configure this, and we have a whole config designer. Instead of just having these horrendously complex screens with a thousand controls, we've taken a more visual approach to building the config out, inspired by some of the great work that companies like Tines and others have done recently to make

(23:53):
things much more user-friendly in how you set this stuff up. So there's a range of ways you can set it up and configure it. Beyond that, you can also do a massive amount of customization around the intervention itself. Maybe you want your own logo and color scheme, you want to put your own security policy in here; if you've got an AI policy, you can link

(24:14):
to it in here, and apply different controls to redirect employees to secure options and things like that. Depending on how you want to set it up, some companies are much more draconian than others, of course, because they're dealing with very sensitive data or they're highly regulated. Others are a little more, hey,

(24:35):
we want to just trust the employees, but we are auditing and logging this stuff, so if they're really starting to push our data places it shouldn't go, we can see that.
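
As a sketch of what such a policy might reduce to underneath a visual designer, with assumed field names and options rather than Harmonic's actual config format: per-detection rules mapping to an intervention, spanning the draconian end and the trust-but-audit end.

```typescript
// Illustrative policy rules. Names, options, and app labels are assumptions.

type Intervention = "block" | "warn" | "redirect" | "log_only";

type PolicyRule = {
  detection: string;              // which detector this rule covers
  apps: string[] | "any";         // where it applies
  action: Intervention;
  redirectTo?: string;            // sanctioned alternative, if redirecting
  message?: string;               // branded coaching text shown to the user
};

const policy: PolicyRule[] = [
  {
    detection: "insurance_claims",
    apps: "any",
    action: "warn",
    message: "This looks like customer claims data. See our AI policy.",
  },
  {
    detection: "mna_data",
    apps: "any",
    action: "block",              // the draconian end of the spectrum
  },
  {
    detection: "source_code",
    apps: ["ChatGPT (free)"],
    action: "redirect",
    redirectTo: "Copilot (enterprise)", // push users to the sanctioned tool
  },
];
```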

S1 (24:45):
Sure. And it allows for both of those situations, more strict or more open.

S2 (24:50):
Yeah that's right. Yeah.

S1 (24:51):
Yeah, this is really great. What are you working on next that you can talk about? What are you excited about? New threats or new situations you're trying to address?

S2 (25:02):
Yeah, we're continuing to build out our coverage. As I mentioned, beyond the core GenAI sites and GenAI-enabled SaaS, we're going to build Harmonic out over time to cover essentially everything going through the browser. You can see the movement of the enterprise towards browser-based

(25:23):
access to most of their services and applications, and we want to be essentially the data protection layer for everything that goes through there. So that's directionally where we're headed. We're also busy building out a bunch of integrations at the moment; next week we're rolling out Okta integrations, and we have Entra ID as standard, so we're continuing to

(25:44):
add things like that. Building out more insights is kind of next, so that you can start to see, well, what are the types of prompts getting used by different teams in the company? What are our use cases as a business, and where are the risks within that? But the goal

(26:05):
is that this is really more of an enablement tool for GenAI than just a data protection tool, because companies obviously want to be adopting this technology, and we can let them do it safely instead of just blocking everything, which is where a lot of them sit today.

S1 (26:17):
Yeah, interesting. You mentioned the use cases; that could be a really cool business insight, like you said. It's like, everyone's trying to get help with these architecture diagrams or whatever; okay, well, let's go solve that.

S2 (26:30):
Exactly.

S1 (26:31):
That's pressure that needs to be relieved.

S2 (26:33):
What we found in a couple of instances is that the CISO is part of the AI committee, and they get given the tools responsibility to implement some controls around this. We give them the ability to come back to the business and say, hey, did you know? Here's the set of things we're using today; what do we think about this? It's a good starting point for the conversation:

(26:55):
we've obviously got teams that want to use these types of tools in these use cases. Are we going to standardize on some of them? Are we going to block? What's our policy? And I think where security teams have often gone wrong historically is that we get seen as the Department of No, kind of a blocker. This is an opportunity to say, well, hey, look, we understand you need to do A, B and

(27:17):
C, because we can see it. Here's a secure way to do that. And you go and talk to your colleagues, talk to the departments, and enable them to be successful. I think those conversations go pretty well.

S1 (27:28):
Yeah, absolutely love it. It's the best implementation I've seen. I feel like you're really in touch with the problem, and I think your history with the previous company gives you a lot of advantage there.

S2 (27:46):
Yeah. I learned a lot of lessons on that journey, and this time around, being based in the Bay Area always helps when you get started as well. It's just such a great place.

S1 (27:55):
Ground zero when everything is blowing up, right?

S2 (27:57):
Yeah, it does feel like kind of a new industrial revolution, and this is the heart of it. So it's great to be here for it.

S1 (28:05):
Well, awesome. How can people find out more about the company?

S2 (28:09):
Yeah, harmonic.security is the starting point on the web.

S1 (28:13):
Great domain.

S2 (28:14):
The domain? Yeah, it's pretty good for that. But yeah, I would say we love jumping into demos straight away. We can roll out, as I said, in 30 minutes, so a POC is really easy for us to spin up, and we start to give you those insights in the POC, which then helps inform what happens next. So if

(28:35):
you've got companies that are starting to think about that as their step one, inventorying what's happening with GenAI in the business, with thoughts about the EU AI Act and things like that coming down the line, we're a great first step for that. But then we also have the controls that come next. So if anyone's interested in that, we'd love to speak to them. Happy to jump on a call personally with anyone that's excited about what we're doing.

S1 (28:55):
Very cool. Well, thanks for the conversation; I really enjoyed it. And I'm sure you're going to get some interest from this.

S2 (29:01):
Yeah. Thanks, Daniel. Been a pleasure, as always.

S1 (29:04):
All right. Take care. Thank you. Unsupervised Learning is produced on Hindenburg Pro using an SM7B microphone. A video version of the podcast is available on the Unsupervised Learning YouTube channel, and the text version with full links and notes is available in the Daniel Miessler newsletter. We'll see

(29:25):
you next time.