Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
S1 (00:00):
Unsupervised Learning is a podcast about trends and ideas in cybersecurity,
national security, AI, technology and society, and how best to
upgrade ourselves to be ready for what's coming.
S2 (00:17):
All right, welcome to Unsupervised Learning. This is Daniel Miessler,
and I'm happy to have Matt Muller here from Tines.
S3 (00:24):
It's a pleasure to be here, Daniel.
S2 (00:26):
Awesome. Yeah, looking forward to this conversation. I've heard
so much about the company, and I'm happy to hear
more about what it's actually about and what problem you're
trying to solve. I really like to start there, with
the problem. What do you see as being the problem around,
I would say, security in general? But also, how are
(00:47):
security problems being magnified by AI stuff?
S3 (00:52):
Yeah. I'll maybe start with the original problem
that our founders were trying to solve. They were
security operations professionals who had done security work
at companies like DocuSign and eBay and some other
large places. And they were extraordinarily frustrated by the
(01:12):
sheer amount of manual labor involved in actually responding
to security incidents. If you think about what
a traditional security operations center, or SOC, does: they receive
an alert, they have to go research that alert in
a whole bunch of different places, and they take a number
of manual steps just to decide if
(01:33):
it's a true positive. They have to do work
just to decide if they have to do work,
which is an enormously frustrating place to be. So
they were looking out on the market for some kind
of automation tool that could help them ease that
burden, and not seeing one they wanted, they ended
up building Tines, starting out first and foremost as a tool to solve a
(01:55):
lot of the SOC problems around inefficient
alert management, burnout, and all
those sorts of things. What we've discovered over time,
which I think has been really cool, is that the
SOC is not the only team in cybersecurity that needs automation.
It turns out there are inefficiencies everywhere. Another really fascinating thing
(02:17):
we've learned over the years is that
automation is actually a lot easier than a
lot of people give it credit for. A lot
of traditional automation tools required learning Python, learning coding languages,
having very deep systems knowledge to
get automation done. With the
(02:40):
rise of AI, what we're seeing is people
that have automation ideas. I think if you
ask almost anyone, they have ideas about how they can
make their job easier; they just haven't necessarily
been able to express that in the past. Now, with
tools like no-code workflow builders, these folks are able to build
those automations like they haven't before. So
(03:01):
I would say the pain point Tines has
been trying to solve is: everybody who has
ever had to push a file from point
A to point B manually has a pain that Tines
is trying to solve.
S2 (03:14):
Yeah, interesting. One way I've been thinking about this
is: what would you do with five times more staff?
S3 (03:22):
Right.
S2 (03:23):
Right. So it's like we know what we want to do.
We're constrained by how many eyes, hands, and brains we
actually have. All these
things could be done manually. The question is,
do you have the people to do it? Do you
have the people to create the automations to do it?
(03:44):
I'm a big fan
of the Theory of Constraints, and the constraint is usually people
and the time they have to focus on these problems.
S3 (03:55):
Absolutely. And one of the things
we're also learning, and this is where
AI has been such a fascinating new addition,
is that there's a constraint around knowledge as well. Even
if you have all the people that you want,
security teams have such a fractured ecosystem that they're
responsible for protecting that it's virtually impossible to become an
(04:17):
expert in every single system you're responsible for.
So what we see is
these people that are experts free up all
this time. Great. Maybe I know exactly what I want
to do in AWS, but we just acquired a company
that has a GCP environment, and now I have
to learn an entirely different cloud provider, and
(04:37):
I'm just not going to be as good in that.
That's where I think AI tooling has been really
helpful, to make that context switching less of
a cognitive load for people.
S2 (04:48):
Mhm. Yeah. One thing I'm worried about is this addition
of staff to attacker teams.
S3 (04:56):
Mhm.
S2 (04:56):
Right. So if you have an entire team, let's say
it's 100 people, and five of the people are really,
really good and really dangerous, what happens when AI tooling
or automation or agents or whatever it is turns those
five into 30 or 50 of the best people, and
turns the other 80 into something like 800?
(05:19):
This is what I think the agents are actually going
to do for both us as defenders but, more importantly, attackers.
So the time it would have taken them to
find our mistake goes from days or hours to
maybe minutes. And so we have to be doing
something on the defense side to counter that.
S3 (05:42):
Absolutely. And for me, it's been interesting watching how
attackers have been thinking about AI. Sophos actually just
published a report with some analysis around how attackers
are using AI. If you look back a couple
of years, there were a lot of headlines that I think
were valid at the time; we
(06:02):
just didn't know what generative AI was truly capable of.
So there was a lot of concern that AI
was going to invent all these brand new kinds of attacks.
And that really hasn't happened. Instead, what we're seeing
is attackers using AI much the same way defenders are: hey,
make my email sound a little better;
(06:24):
generate 14 different varieties of phishing
landing page for me. Where
we're seeing attackers start to use AI a lot more,
to your point, is increasing their velocity.
To my mind, as defenders,
there's only so much time and attention you
can afford to put into problems. Let's
(06:47):
not worry about types of attacks that haven't
been invented yet; let's worry about the ones that
are occurring today and how those are evolving.
Our security teams can't really build defenses against
attacks that don't exist yet, but we can
see how those attacks are becoming faster, how
(07:07):
the translations are becoming better and better,
so the bar for fooling our employees is
getting lower. It means we need to be able
to react faster, and we need to be
able to adapt a little better. We
can't just apply rigid playbooks to every single security scenario.
(07:30):
So to me, it's a little bit
of an AI arms race. For us as defenders, in my view, applying
AI in the places where attackers are also applying it
seems to make the most sense,
at least today, with today's models. That
answer could change in three days, when
some new foundation model gets released that changes
(07:53):
the game. But at least with today's models, I think
that's where I see things evolving right now.
S2 (07:58):
Yeah, I really love this point you're making, because
the reality is, and it actually goes to pre-AI
as well: are you going to hire
this super hacker person who's into all these
new techniques and new ideas and advanced attacks? The
day-to-day job of a CISO or
(08:19):
the day-to-day job of a defender is so
nuts and bolts. It's like: okay, is a log
even being generated? That's step one.
S3 (08:30):
Right?
S2 (08:31):
If you take a bunch
of minor attacks: can we even know if this attack
is being waged against us? Do we have any sort
of detection capability? That's one question. Then the next
question is: is that log going anywhere where someone
could potentially see it, a system or a person or
(08:51):
anything like that? Okay, that's a nice second level.
Is anyone actually looking at it? Okay, that's three. And
three doesn't even guarantee what you need, which is: are
they going to do something about it? These
basic workflows are everything. If you look at
a CISO's job, it's managing the budget; it's managing
(09:13):
this basic workflow of logs and processing and
workflows coming through security operations; and it's politics
and stuff like that. It's not hacker movies.
It's really just these fundamentals that we have to do
more consistently and at scale. So I really like
(09:34):
that point. What do you see as the
biggest challenges for CISOs right now?
S3 (09:40):
Yeah, for CISOs right now, I think there
are two halves to the challenge. The first
is securing AI for their enterprises, and the second
is how to apply AI for security.
If you look at the first problem, it's been
really interesting to see the CISO role evolve from
a self-acknowledged "team of no" to
(10:04):
really trying to enable the business. When ChatGPT first launched,
for example, there was some "team of no" mentality there.
And what happened? Every single employee just worked around the
constraints that the security team tried to throw up.
So now,
(10:25):
the CISOs I
see that are least stressed about AI are the ones
that have started to adopt frameworks around usage: not just
setting barriers for their organization,
but working with them to understand, hey, what are
the risks we're taking on? And honestly,
even just showing where AI doesn't necessarily succeed today.
(10:49):
It's not saying, no, you can't use it. It's actually
working to demonstrate: okay, this use case seems interesting, but
it may not actually produce the most helpful output.
If we have a chatbot, can it be
fooled into giving away free airline tickets, for example?
And will that be held up in court? The answer is yes. So
when it comes to securing AI
(11:10):
for the business, it's been
almost a little refreshing to see CISOs adapting more quickly,
I think, than they have to other technology stacks,
and saying, hey, this
is our next big frontier chance to serve as a
trusted advisor to the business. We can reset
a little bit on some of the other
(11:31):
technology shifts and say: right, we're
now helping the business understand
what and when and where it wants to take on risk.
When it comes to applying AI
within the security organization, that's where I think there's going
to be a very interesting balance. One of the things
we're starting to hear now is that boards are
(11:53):
mandating that teams within an organization figure out how to
use AI. And when it comes to cybersecurity defense,
AI is useful for a lot of different scenarios, but
not necessarily every scenario. You have to balance the
fact that if
you're building the plane while flying it, you
(12:14):
have to make sure a wing doesn't fall off when
you're adding a different technology. So I
think for CISOs who are adopting AI for
cybersecurity defense, that's going to be a really
interesting balance to strike: yes, we
need to go experiment with AI, we need to
figure out where it's useful for us, but also recognize
that the consequences if AI fails for defense
(12:38):
may be a little higher stakes, because now
we're talking about actual data protection
and potential data breach issues. So I do
think that CISOs have their work cut out for them
in that regard, when they
have to be the ones applying and
using AI versus setting guidelines for other teams.
S2 (12:58):
Sure, absolutely. So why do you think boards are
pushing companies to adopt AI?
S3 (13:06):
In a lot of ways, boards have always pushed companies to
be more efficient, to make sure that they're
maximizing the effectiveness of their team. No board
of directors is going to say: it looks
like you have a pretty bloated workforce that really
isn't doing much; that's fine. And
(13:30):
now there's a new tool at hand which has
a lot of promise to displace
some of the grunt work that teams have to do.
Again, I don't think most people at this
point are seriously contemplating replacing the bulk of their staff
with AI. It's a
(13:51):
fear that certainly has been heard and talked about a lot,
but I think the reality is the focus now is: hey,
give your team access to these tools. See
what they can do. See if they can increase their
own efficiency. To me, what
(14:11):
this translates to is: we will ultimately be more effective
as a business, not necessarily by replacing our staff, but
by making it so that their individual productivity
extends a little further.
Another lens on this is: we know that the business
of doing business inherently involves a lot of toil. What
would our designers, our security professionals, our
HR people be doing if they weren't just
(14:34):
doing daily document-processing tasks?
What creativity could we unlock for the organization?
So I think it's fair, honestly, for boards
to push companies on these things.
I think the only failure mode is if they say
you must adopt AI, no exceptions, even if you
find a use case that today's models aren't necessarily
(14:56):
ready for yet. That could be the
only danger there.
S2 (14:59):
Yeah, that's what I was going to say: there's
probably some push as well that says just get it
into the product so we can market it, because everyone else
is talking about it. But I think it's probably,
I don't know, 75/25 in the direction of
find efficiencies, like you said.
S3 (15:15):
Yeah, absolutely. The early days
of AI adoption were a lot more of
this model of "and now we have AI!"
Well, for what? You just added a wrapper
around a chatbot. For us at Tines,
as we were thinking about adding AI into
our product (we started out as a
(15:36):
no-code workflow builder), we took a look
at some of the early AI pushes, and we ended
up with something like 50 failed experiments to integrate
AI into our platform, because we didn't just want it
to be yet another chatbot that looks cool as demoware
but doesn't actually add any value for anybody. It
(15:57):
requires some thoughtfulness. You
can't just slap AI on something and say, oh great,
this is now a better product. It actually
requires deep thought and integration to make AI useful.
S2 (16:07):
Yeah, and that's a great transition. We've set a pretty good
baseline here for what's happening in the industry. So for the
problems we talked about: the difficulty of automation, and
constraints on the work that could be done
by security teams because of the size of the
(16:27):
team and the stuff they're working on. How is
Tines specifically addressing these?
S3 (16:33):
Yeah, we're addressing it through a couple of different layers. The
first is recognizing that just about every single security
team has a different adoption maturity level when it
comes to AI, and also different constraints. So
our number one design principle was: don't be
prescriptive in how people use AI within the platform.
(16:55):
One of the very first
integrations we built was what we called an automatic transform.
A lot of what people use Tines workflows for is to
take data from one system, transform or manipulate it in
some way, and then move it into another system automatically. And
(17:17):
if you don't want to learn
all the ins and outs of some arcane
JSON schema, it turns out AI is actually really good
at understanding that. Given a JSON input, you tell the AI:
I would like to extract these four fields, transform the
data in this way, and then output it in this format. So
one of the
(17:37):
first integrations we built was this automatic transform,
where AI would actually generate Python.
Your workflow still looks like
the same deterministic workflow that you built by
hand before, but now you've got one additional piece
that you didn't have to build by hand. And
(17:58):
Python is deterministic: if you
give the same input to a function, it'll produce the
same output. So we balanced
what we saw as the best of
both worlds there. Yes, you don't have to write
that Python; you don't even have to know what
Python does. You just need to validate that when you
put the input in, you get the
output that you expect. And so, for teams
(18:20):
that are nervous about integrating AI, or
maybe really early in their maturity journey, we wanted
to give them a first step: something with
a lot of guardrails on it that was safe to
play around with. We also have the ability to
integrate a straight-up AI prompt into
your workflows as well, where, if
(18:41):
you're more comfortable with prompt engineering and
those sorts of things, you can
give it almost any input you want and
get any output you want, structured, unstructured, and so
on and so forth. And then more recently we launched a
tool called Workbench. Workbench is basically our way
(19:03):
of addressing the fact that most chat tools are at their
best when they have access to your data.
There's this inherent tension between sending your most sensitive
business contextual information to a third-party vendor
and making sure that AI is useful for you.
So when we launched Workbench, we made sure that
(19:25):
the AI models we were using were actually completely private
to the tenant that was running them.
There was no logging of the data; there was no
sending the data across the internet. It enabled
teams that had previously had constraints around sending data to
third parties to say: oh, now I
can actually connect my AI models to the
(19:47):
tools that I use, in a way that's safe,
in a way that complies with my policies. And
now we can finally take advantage of that
ideal combination: large language models
that have the context, that have the business data,
and feel good about it.
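[Editor's note: the automatic transform Matt describes, AI-generated but deterministic Python sitting inside an otherwise hand-built workflow, might look something like the minimal sketch below. The alert schema and field names are invented for illustration; this is not Tines' actual output.]

```python
import json

def transform(event: str) -> str:
    """Extract four fields from a JSON alert and re-emit them in a new
    shape. Deterministic: the same input always yields the same output."""
    alert = json.loads(event)
    out = {
        "user": alert["user"]["email"],          # hypothetical field names
        "source_ip": alert["network"]["src_ip"],
        "severity": alert["severity"].upper(),
        "timestamp": alert["detected_at"],
    }
    return json.dumps(out)

# A sample alert, invented for illustration.
sample = json.dumps({
    "user": {"email": "matt@example.com"},
    "network": {"src_ip": "203.0.113.7"},
    "severity": "high",
    "detected_at": "2024-05-01T12:00:00Z",
})
print(transform(sample))
```

The point of the pattern is the validation step: you don't need to read the generated Python, only confirm that the sample input produces the output you expect.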
S2 (20:04):
Nice. And what were the types of operations they
were doing with those prompts and LLMs? Was that
data transforms? Was that analysis, benign or malicious, that
type of stuff?
S3 (20:16):
Yeah, I think one of the big use cases people
often start out with is phishing analysis. The
phishing ecosystem, if you will, has evolved a lot
over the years. You used to be able to
check for things like a suspicious link, a suspicious IP
address, or a malicious attachment in an email, and
(20:36):
that told you whether or not it was phishing. Nowadays,
business email compromise is one of the biggest attack vectors
we see, and often it's simply somebody
impersonating your CEO or somebody at your organization.
I get these all the time
from the Tines CEO, allegedly: "Hey Matt,
(20:56):
it is me, your CEO. Please provide your phone
number to me." They're looking for
additional context there. And it turns out that's a
really hard problem for traditional tools to solve.
A human can look at that message and
instinctively understand that it's phishing. Your tier-one
(21:17):
SOC analyst can look at that and be like: oh,
that's obviously not the CEO. But how do you
explain that to code? How do you explain that
to a very strict workflow tool? Because asking for a
phone number is something people do all the time.
It's the context of the conversation that makes it
malicious versus benign. And large language models, of course,
(21:39):
are pretty darn good at understanding intent and
the nuances of some of those things. So
for security operations teams that would get reports
of phishing that they would have to analyze,
there was a certain bulk of them that
had to be done by humans, just because the
rules that they set just couldn't catch them, particularly in
(22:01):
the BEC case. So now what we're seeing is
these AI analysis tools can give
you a verdict, or
you can say: hey, I actually want you
to extract the intent of this message, and I'll combine
that with some other signals that I have. Mix
and match AI with, say, a threat intelligence database
that I have, to say: oh,
(22:23):
this sender is actually in our database as
being potentially risky. Combine that with
the intent of this message being a request for contact information,
and now you instantly have not just a "yes, this
is malicious" verdict; you also have insight into the intent
of the threat actor. And with Workbench in particular,
(22:44):
this is where analysts can now iterate on that.
Going back to what you would do
with the additional time you can save:
now they're able to pivot into additional investigation
and say: okay, if I know this was the attacker's intent,
what else can I learn about the attacker? What else
would this attack look like? Maybe
we received this one report. What would
(23:06):
it look like if it had succeeded with a different user?
Can I go investigate that now? So: much
less time spent on triage, more time actually spent asking
questions like, who is attacking me? What can
I do about it? How do we know that we're safe?
And moving beyond to the things that actually require some
human thought and creativity.
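[Editor's note: the mix-and-match idea above, combining an LLM's extracted intent with a local threat-intel lookup, can be sketched roughly as below. `classify_intent` is a keyword stub standing in for a real LLM call, and the sender database and labels are invented for illustration.]

```python
def classify_intent(message: str) -> str:
    """Stand-in for an LLM prompt like 'extract the intent of this message'.
    A real system would call a language model here."""
    if "phone number" in message.lower():
        return "requests_contact_info"
    return "benign"

# Hypothetical local threat-intel database: sender -> reputation.
THREAT_INTEL = {"ceo-alias@freemail.example": "known_risky"}

def verdict(sender: str, message: str) -> str:
    """Combine the intent signal with sender reputation into a verdict."""
    intent = classify_intent(message)
    reputation = THREAT_INTEL.get(sender, "unknown")
    if intent == "requests_contact_info" and reputation == "known_risky":
        return "malicious"
    if intent == "requests_contact_info":
        return "suspicious"
    return "benign"
```

The verdict carries more than yes/no: the extracted intent survives as a signal an analyst can pivot on.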
S2 (23:26):
Yeah, that makes sense. Something you said earlier was
really interesting: you were talking about data transforms, and I
spent a lot of time at different companies dealing with this.
It seems like it's not just augmentation of
stuff that humans were doing that Tines could help with,
(23:47):
but also things like data pipelines.
And I was also thinking about, and you probably
aren't, because you've got to focus when you're a product company,
but quality checks and security checks
have the same sort of vibe. You
have things coming in and you're moving through a set
(24:09):
of steps for checking, for validation, for quality, for whatever.
And if you add AI to that,
you could have judgment in there at any of
those steps. So you're
very much focused on security, but are you seeing
people use it for broader use cases,
(24:33):
like quality and stuff like that? Because this
is everywhere in IT. It's everywhere in business. Business
in general needs these sorts of workflows at scale.
S3 (24:44):
Yeah, absolutely. At the end
of the day, it feels like almost every problem boils
down to either case management or data management. And
especially in the data management world, you're
exactly right. This is where we
can use Tines, and we see customers using Tines,
to remove some of the toil and burden of
(25:06):
just managing those pipelines. Everything from: hey,
what do we expect this log source to be
producing? Like, AWS
loves to subtly change the shape of CloudTrail logs.
S2 (25:24):
Totally.
S3 (25:25):
Right. And there's no big announcement.
It's just, one of these days I've noticed that
fewer of my logs are getting classified correctly.
Why is that? That's where
Tines, and the AI implementation within Tines, can
serve as that sanity check. We also see customers
using Tines to integrate with some
(25:46):
of their other data management platforms, particularly
around hot and cold data stacks.
In the context of doing,
for example, a security investigation, you may
only have 30 days' worth of logs that are
hot and actively accessible, and
you have all the rest in Amazon S3. And
(26:09):
you need to be able to pull them.
This is something where Tines can help make that retrieval
and rehydration process a lot simpler.
So yeah, we're absolutely seeing people use Tines as
sort of the meta-orchestration and monitoring layer
on top of these data pipelines they're building. Because,
at the end of the day, the
(26:31):
number of teams that have data pipelines but don't have
a data engineering team is a lot larger than
the number that do, unfortunately.
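[Editor's note: the CloudTrail-drift problem above boils down to a schema sanity check on each record. A minimal sketch, with a hypothetical expected shape; real CloudTrail records carry many more fields than the four shown here.]

```python
# Hypothetical expected top-level keys for a log source.
EXPECTED_KEYS = {"eventTime", "eventSource", "eventName", "userIdentity"}

def check_shape(record: dict) -> dict:
    """Report fields that went missing or appeared unannounced,
    so a pipeline can flag silent schema drift."""
    keys = set(record)
    return {
        "missing": sorted(EXPECTED_KEYS - keys),
        "unexpected": sorted(keys - EXPECTED_KEYS),
    }

drifted = {"eventTime": "2024-05-01T12:00:00Z", "eventSource": "s3.amazonaws.com",
           "eventName": "GetObject", "userIdentity": {}, "tlsDetails": {}}
print(check_shape(drifted))  # flags 'tlsDetails' as unexpected
```

A workflow could run this per record and alert when either list is non-empty, which is the "sanity check" role described above.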
S2 (26:39):
Yeah, that makes sense. And as far as AI
and security go, what are the main use cases
you're seeing?
S3 (26:46):
Yeah, what we're seeing AI
used for a lot is, again,
extracting context. We see threat intelligence
teams using AI in a couple of really
interesting ways around reporting. The first is: when
you consume threat intelligence reporting that has been produced
(27:09):
by another organization, being able to extract indicators and all
that sort of stuff. But then, when you are
actually producing reporting, there are a lot of different consumers
of it, some of whom are human and want a
PDF, and some of whom are computers and can't read
a PDF. Being able to use
these capabilities to produce multiple different kinds of
(27:32):
intelligence distribution was, to me,
an unexpected but really fascinating use
case to see. It's not
just about reading data; it's about producing data,
really a translation of data
for the right audience.
S2 (27:49):
You have a little piece of useful intelligence.
I did a lot of work
on this at Apple, actually, with the threat intel team.
They'd have this little nugget of intelligence, and their customers
are, whatever, 19 different customers, including global security,
which is physical security. And then you have all these
(28:10):
different product teams and software teams, and they all care
about something different.
S3 (28:14):
Right?
S2 (28:15):
And that's a workflow, combined with AI, that could
just produce those 19 different artifacts.
S3 (28:20):
Right, exactly. The CISO just wants to know, basically:
are we vulnerable? Have we been hit? And that's
about it. The SOC may want to
know a little more technical detail, and so on
and so forth. So yeah, that to me
has been a really fascinating use case.
It's avoiding toil, but not in the way that
everyone thinks. You still, as the human, are
(28:41):
putting your creativity into the report and developing the nuance
and understanding, and then AI is helping you translate that
into a different context.
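[Editor's note: the "one nugget, many audiences" pattern discussed here can be sketched as a loop over audience profiles. In a real system, `render` would hand each profile to an LLM; here it is a labeled stub, and the audience specs are invented for illustration.]

```python
# Hypothetical audience profiles: who consumes the intelligence, and how.
AUDIENCES = {
    "ciso": "two-sentence executive summary: are we vulnerable, were we hit",
    "soc": "technical detail: indicators, detection guidance",
    "siem": "machine-readable list of indicators only",
}

def render(nugget: str, instructions: str) -> str:
    """Stand-in for an LLM translation step; a real call would prompt
    a model with the nugget plus per-audience instructions."""
    return f"[{instructions}] {nugget}"

def distribute(nugget: str) -> dict:
    """Produce one artifact per audience from a single piece of intel."""
    return {name: render(nugget, spec) for name, spec in AUDIENCES.items()}

artifacts = distribute("Credential-phishing campaign targeting finance staff")
```

The human supplies the nugget and the nuance; the translation layer only reshapes it per consumer.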
S2 (28:49):
Yeah, that makes sense. So is Workbench the main
thing you guys are working on and talking
about right now? Tell us more about that.
S3 (28:59):
Yeah. Workbench is definitely something that has really taken off
in our customer base. And again, I think a
lot of that is the fact that
a chat tool is a chat tool; there are a
lot of those out there in the world. What
Tines provides is that private and secure access, and
the context and integration with all of your other
(29:21):
data and, most importantly, your other Tines workflows. And
the way we're seeing people now start to use Workbench is
almost like dipping in and out
of deterministic automation and a chat interface
for their analysts. So in an incident,
an analyst may go into Workbench and say:
(29:43):
I received this alert, please analyze it and recommend some next
steps for me. One of those next steps
might be: this account looks like it's been compromised;
you should probably lock this account. And you say, okay, great.
Workbench allows you to trigger other workflows that have been
built within Tines. So I don't have to worry
about AI maybe hallucinating the endpoint of our identity provider. Or, mistaking,
(30:09):
you know, if there are a bunch of
other people in Tines named Matt, I don't have to
worry that it's going to grab the wrong Matt.
I can actually delegate that task to a deterministic workflow,
and then it comes back to Workbench.
S2 (30:22):
That's great.
S3 (30:22):
That's great. And workbench says, uh, you know, great. We've
done that part. Here's, you know, would you like me
to write up an incident summary? Right. And you can
close out this case. Um, and so really giving people,
you know, a much more explicit way of working through
common tasks, delegating to automation where necessary. Um, but, you know,
this is still very much, you know, a sort of
(30:44):
a co-pilot sort of use case. Um, one of the
things we're really excited about right now is, uh, you know, some of the agentic AI
capabilities. Um, you know, we're starting to see some, uh,
people using tines for sort of basic agentic AI stuff. Um, and,
you know, in the same way that we didn't necessarily
(31:05):
want to rush and be the first people to integrate
a chat interface just to say we had AI. Agentic AI,
I think, has had, uh, maybe an evolution, uh, in
terms of our understanding of what an AI agent actually is, right?
And what constitutes agentic AI and so on and so forth. Um,
and now that these have more stable definitions, uh, we're
(31:28):
investing in figuring out what agentic AI looks like when
it comes to tines as well. So that's something that, uh, I,
you know, we're, we're starting to get some internal sneak
peeks on, uh, and it's, uh, it's pretty exciting.
S2 (31:40):
All right. Could you possibly, uh, show us a demo
of workbench?
S3 (31:44):
Yeah, absolutely. I'd be delighted to, um. Let's see. Hopefully
I have the correct screen pulled up here. Um, but
this is the workbench interface, and you can see here, uh,
you know, it's a fairly classic chat interface. Um, and
if you treat it just like any other chat bot, uh,
(32:06):
you'll get fairly generic LLM answers. Um, so in my
in my previous role, I worked in security operations, uh,
at Coinbase. Um, and we dealt with phishing all the time.
We dealt with incidents, um, you know, and, uh, you know,
got attacked all the time. So, um, let's imagine
here that I, you know, have received reports that the
(32:27):
domain coinbase-so.com is phishing, and I'm looking to learn more. Um,
can you tell me if coinbase-so.com is phishing? And
it'll think for a second. But you'll notice here that
because we are only talking to the LLM, it gives
(32:50):
us a fairly generic answer, right? I don't have any
specific information. Um, and so because.
S2 (32:55):
This has got a training cutoff date, right? This
is like it only knows so much. It's not an
expert on domains.
S3 (33:04):
Exactly, exactly. And you know, this is good generic advice, right?
Like you should always check to see if it's an
official Coinbase domain. Um, but as I start connecting tools, uh,
things can get a little bit more interesting. So now
if I ask it the same question, whether coinbase-so.com
is phishing, it's going to look a little bit different, right?
(33:31):
It knows that one of the tools available to it
is URL scan. Um, and so now, rather than just
giving me a generic answer, it's actually going to use, uh,
URL scan. Um, and because it's searched for URL scan
and found a result, it actually knows to then retrieve
that result as well. It's kicking these things off automatically
(33:52):
because these are read only actions, right? Like no system
is going to change because, uh, you know, because it
called these tools. Um, we also have the ability, you know,
if there's a, you know, a write action or a,
you know, potentially destructive action, it'll ask you to confirm, uh,
before it takes that action. But these are read-only, right?
(34:12):
And so now that we've pulled back, uh, some URL
scan results, we can officially confirm that this is a
phishing site, right? With the most up to date data. Um, and,
you know, if you've been on URL, scan, com, you
know that there's a lot of data that they pull back. Um,
and this sort of provides this very quick, uh, insight
into URL scan. Um, and so, you know, this is
(34:35):
this is something where, you know, security teams, uh, in
the past, if they wanted to use all of these
different tools, uh, they had kind of one of two choices.
They could either have a lot of different panes of
glass open, uh, or they could, you know, sort of
do like a one time enrichment into, you know, a
case that came in. Right? Um, I may not know
(34:57):
ahead of time what tool I want to use in
order to investigate something. Uh, and so what workbench lets
us do is say, okay, now, you know, we can
pick some of the tools that are at our disposal, get the latest data. Um, and, uh,
you know, ultimately, uh, you know, get that sort of
much more dynamic, uh, you know, uh, interface without
(35:19):
ever having to leave workbench.
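The gating rule Matt describes, where read-only enrichment tools run automatically but write or destructive actions pause for human confirmation, can be sketched as a thin policy layer. A hypothetical sketch: the tool names, the side-effect flag, and the `confirm` callback are assumptions for illustration, not Tines' actual API.

```python
# Hypothetical sketch of tool gating: read-only tools execute automatically,
# side-effecting tools are held until the analyst explicitly approves them.

TOOLS = {
    # name: (has_side_effects, implementation)
    "urlscan_search": (False, lambda url: {"verdict": "malicious", "url": url}),
    "block_domain":   (True,  lambda url: f"blocked {url}"),
}

def call_tool(name, arg, confirm=lambda prompt: False):
    """Run a tool; ask for confirmation first if it changes any system."""
    has_side_effects, impl = TOOLS[name]
    if has_side_effects and not confirm(f"Run {name}({arg!r})?"):
        return "cancelled: awaiting user confirmation"
    return impl(arg)

# Read-only enrichment runs without a prompt...
print(call_tool("urlscan_search", "coinbase-so.com"))
# ...but a destructive action is cancelled unless the analyst approves it.
print(call_tool("block_domain", "coinbase-so.com"))
print(call_tool("block_domain", "coinbase-so.com", confirm=lambda p: True))
```

This is why the assistant can chain the search and the result retrieval on its own: both are tagged side-effect-free, so no system state can change no matter what the model decides to call.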
S2 (35:21):
Yeah. They just ask the question the normal way. They don't
think about the tools that would be required to answer
it correctly.
S3 (35:28):
Exactly. Um, and, you know, let's imagine that, you know,
we're not using URL scan. We're using a different, uh,
service provider to analyze phishing. I don't have to
know anything about that service provider. Right? Like, all I
have to know is what information the AI has extracted
in order to be able to make that determination. Um,
(35:48):
and so for us, this is, you know, again, sort
of feels like one of the best of both worlds
when it comes to chat assistants, right? Like, you can
get the, uh, validated data from the latest, uh, trusted
data sources. Um, you don't have to know any of
the technical details behind it. You remain in full control
over what sources are connected, when actions are taken, and
so on and so forth. Um, but ultimately, you know,
(36:10):
it just sort of removes that cognitive burden of having to,
you know, context-switch between a bunch of different systems.
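The provider-independence point, where the analyst consumes the same extracted fields whether the backend is urlscan or some other scanner, amounts to normalizing each provider's raw response into a single schema. A rough sketch with invented provider names and field layouts:

```python
# Hypothetical sketch: normalize different phishing-scanner responses into
# one schema, so neither the analyst nor the LLM ever handles
# provider-specific JSON directly.

def normalize_urlscan(raw: dict) -> dict:
    return {"domain": raw["page"]["domain"],
            "malicious": raw["verdicts"]["overall"]["malicious"]}

def normalize_otherscan(raw: dict) -> dict:
    # A made-up second provider that returns a 0-100 risk score instead.
    return {"domain": raw["target"],
            "malicious": raw["score"] >= 80}

NORMALIZERS = {"urlscan": normalize_urlscan, "otherscan": normalize_otherscan}

def enrich(provider: str, raw: dict) -> dict:
    """Same output shape regardless of which scanner produced `raw`."""
    return NORMALIZERS[provider](raw)

a = enrich("urlscan", {"page": {"domain": "coinbase-so.com"},
                       "verdicts": {"overall": {"malicious": True}}})
b = enrich("otherscan", {"target": "coinbase-so.com", "score": 93})
assert a == b == {"domain": "coinbase-so.com", "malicious": True}
```

Swapping providers then only means writing one new normalizer; everything downstream of `enrich` is untouched, which is the "don't have to know anything about that service provider" property in code form.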
S2 (36:16):
Yeah, that makes sense. I really liked what you were
saying before when you were talking about building the workflows. Um,
where you're seamlessly pivoting between when you need intelligence and
when you need the consistency of like a legacy or
what did you call it? Um. Deterministic system.
S3 (36:36):
Yeah, exactly.
S2 (36:37):
Yeah, that's really, really important because when you talk
about scale, you talk about processing terabytes of data. That's
not an LLM thing, right? You've got to
pivot to traditional tech there. Uh, I thought
that was really interesting. Well, this is, um, this is awesome.
What else is, uh, coming out? What else should people
know about that you, um, either have out now or are
(36:58):
releasing soon.
S3 (37:00):
Yeah, some of the stuff coming out soon, we're, you know,
we're adding, uh, a lot more into workbench. Uh, and,
you know, right now, a lot of this, uh, interface is, uh,
text based. Um, we've heard a lot of requests from
our customers that they'd like to be able to do
more with documents and images. Um, so that is, that
is coming very, very soon here. Um, and then, you know,
(37:22):
the other thing, uh, and, you know, if I look
here at this, uh, workbench builder or, excuse me,
this story builder, uh, tab, um, right now we have
an AI action that includes just a prompt, right, where
I can maybe generate some output. Um, as we
start looking, uh, you know, over the next few months
(37:42):
at how agentic AI, uh, you know, interfaces with, uh,
both traditional workflows as well as, you know, uh, copilot
chat workflows. Um, this is
going to evolve, and, uh, I'll leave
the tease, uh, at, you know,
saying that there's going to be more than just workbench, uh,
(38:04):
for sure, uh, over the next few months when it
comes to AI.
S2 (38:07):
Fantastic. And where can people find out about you? Uh,
the website and everything.
S3 (38:13):
Yeah. Folks can go to tines.com, that's T-I-N-E-S dot com. Uh,
they can find us there. Um, and if they want
to get a taste of all of this, uh,
what we've shown today, what we've talked about, we have
a free community edition. It's free for life. Uh, you know,
comes with a whole bunch of different features. Um, and,
you know, I have my own community edition tenant that
(38:33):
I actually use for personal stuff outside of work as well. Um,
we've had people use their, uh, tenants to actually, uh,
build a fantasy football draft, which I thought
was interesting. Um, and, uh, yeah, we want people to
be able to experience, uh, you know,
all we're building.
S2 (38:53):
Very cool. Well, Matt, I enjoyed the conversation. I think
this is, uh, super interesting. And, uh, look forward to
talking to you in the future.
S3 (39:02):
Likewise. Thanks so much, Daniel. I really appreciate the conversation.
S2 (39:04):
All right. Take care.
S3 (39:05):
Cheers.
S1 (39:08):
Unsupervised Learning is produced on Hindenburg Pro using an SM7 microphone.
A video version of the podcast is available on the
Unsupervised Learning YouTube channel, and the text version with full
links and notes is available at danielmiessler.com/newsletter. We'll
see you next time.