
August 5, 2025 • 55 mins

Join the Tool Use Discord: https://discord.gg/PnEGyXpjaX


Unlock the secrets of secure AI development in this episode of Tool Use! We're joined by Craig McLuckie, the co-creator of Kubernetes and the co-founder and CEO of Stacklok, to dive deep into the world of the Model Context Protocol (MCP) and the future of AI infrastructure. As AI agents and MCP servers become more widespread, ensuring the security and reliability of these systems is paramount. Craig discusses the critical need for hardened and secure MCP servers to prevent risks like data exfiltration and the creation of backdoors.


Discover how ToolHive, developed by Stacklok, is solving these challenges by providing a registry of trusted, curated MCP servers. We explore how ToolHive helps developers by offering a standardized toolkit of reliable tools, simplifying integration, and managing security concerns like secrets protection and network isolation. Craig shares his insights on best practices for developing secure MCP servers, the importance of community-driven development, and the future of authentication and authorization in agentic systems. Learn about the shift from a platform engineering mindset to a solutions engineering mindset, and why you need to embrace an experimental approach to thrive in the new AI age.


Stacklok: https://stacklok.com/

ToolHive: https://toolhive.dev/

Stacklok Discord: https://discord.gg/Uhz3VshErv

Craig's LinkedIn: https://www.linkedin.com/in/craigmcluckie/


Connect with us

https://x.com/ToolUseAI

https://x.com/MikeBirdTech

https://x.com/cmcluck


00:00:00 - Intro

00:03:35 - What is ToolHive?

00:06:21 - How ToolHive Helps Developers in Production

00:13:13 - How the Community Guides the ToolHive Roadmap

00:22:03 - Gaps in Current AI Infrastructure

00:31:26 - Will API Keys Survive the AI Age?

00:47:18 - A New Mindset for Thriving in AI


Subscribe for more insights on AI tools, productivity, and security.


Tool Use is a weekly conversation with the top AI experts, brought to you by Stacklok


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
I'm starting to use MCP for pretty much everything.
Start with a set of base servers, and then, well, that is a
building block so that you can start putting together workflows
that use those servers to actually solve real problems for
you. The question is now, if you're
connecting a client to a server and that server is now
connecting to resources that require further authentication
and authorization, how do you handle
that? Right. But you're also nervous because you've been

(00:20):
around long enough that you know that writing secrets into JSON
files creates risk for you as a human being.
You would be borderline crazy having an API
key to a production system accessible to a stochastic
agent. There's a set of
things that need to be done to just not shoot yourself in the
foot. Is MCP secure? As MCP

(00:42):
proliferates across the AI ecosystem and even starts making
its way into production, we want to make sure it's done in a safe
and reliable manner. We've seen the demos: a
malicious MCP server can exfiltrate data and create
backdoors. And as more people are vibe
coding these servers, we want to make sure that we look at our
tools and make sure that they're hardened and secure.
So on episode 51 of Tool Use, brought to you by Stacklok,

(01:05):
we're joined by Craig McLuckie, the co-creator of Kubernetes,
the former CEO of Heptio, and the current leader of Stacklok, which has
developed ToolHive. We're going to be discussing why
security matters with MCP, how to make your MCP servers more
secure, best practices, things to look out for, and we're going
to get into the developer experience and community-driven
business. It's a great conversation and

(01:26):
I'm thrilled to introduce Craig McLuckie.
Yeah, I think MCP is a really interesting technology, and the
kind of hint here is in the P, the protocol.
And you know, for me, as I was thinking about the
world at large, I became fascinated by
transformers, right? These stochastic systems
that are able to do things that we've never been able to do in

(01:48):
technology before, right? Like, there's the ability
to start dealing with semantic reasoning and the ability to
start processing data and actually driving outcomes.
It's just a fascinating class of system.
And then you look at the outside world and it's kind of hard,
cruel and uncaring, right? Like, the traditional
IT systems are very unforgiving.

(02:10):
And I think when I think about what MCP is, it's really a
bridge between these two worlds. It's a way for a large
language model, which is optimized to converse in natural
language, and just happens to be very good
at producing well-structured JSON, to start interfacing with
these adjacent systems in a way that's natural
to that environment. So being able to start looking

(02:31):
at an outside world instead of aset of of APIs that have to be
called sequentially, which creates opportunities for
hallucination to start, you know, overwhelming systems,
which creates a system where unnecessary context is being
egress from these systems. That it's also affect the self
attention mechanisms of, of, of of Transformers.
Having that very Natural Bridge where you can start to describe

(02:52):
in a very precise way the specific set of resources,
tools, prompts, completions, whatever you have that are
necessary to bring that context,you know, from the outside world
into the model and create an elegant bridge where those
models can start actually interacting with the outside
world to to control things through a much more natural

(03:12):
framework that's more optimized for the way that models
work, is very compelling.
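Craig's point about describing tools "in a very precise way" is concrete in the protocol: a server hands the model structured JSON it can reason over. As a minimal sketch (the `fetch_page` tool and its schema are invented for illustration; the envelope follows the protocol's `tools/list` response shape):

```python
import json

# A rough sketch of the "precise description" idea: an MCP server
# advertises each tool as a JSON Schema the model can reason over.
# The tool itself ("fetch_page") and its schema are invented for
# illustration; the envelope follows MCP's tools/list response shape.
tool = {
    "name": "fetch_page",
    "description": "Fetch a URL and return its text content.",
    "inputSchema": {
        "type": "object",
        "properties": {"url": {"type": "string"}},
        "required": ["url"],
    },
}

# MCP rides on JSON-RPC 2.0, so a tools/list reply looks roughly like:
response = {"jsonrpc": "2.0", "id": 1, "result": {"tools": [tool]}}

print(json.dumps(response, indent=2))
```

Because the description is a schema rather than free text, the model can emit well-formed arguments instead of guessing at an API's calling convention.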
Yeah, I had someone kind of reframe my mental model:
where with an API you're getting, like, a singular
piece of data, with MCP you kind of go for workflows, try to get
a general goal accomplished, or something that augments more
than just mapping a one-to-one MCP function with an API

(03:34):
endpoint. So, bringing ToolHive into this:
what is the problem that you're solving?
Where does it fit into this ecosystem?
If you think about MCP as basically a protocol that
enables you to describe tools and all the rest of it in a
way that's more natural for language models, having a set of
curated base servers that you can start to use is really
useful, right? So right now it's a very chaotic

(03:56):
world out there. You know, everyone is sort of
energized; MCP demos really well. So, you know, developers are one hot
coding afternoon away from producing a server that does
something really novel. They get it out there, it gets a
fair bit of traction. People look at it and they're
like, hey, wow, this is great.
They put GitHub stars on it. And it's very difficult to
discern signal from noise in the ecosystem.
It's really hard for a developer to know, hey, this was, you

(04:20):
know, something that people have talked about
and read about a fair bit, but it was a hot coding weekend
for a developer who's not necessarily committed to the
sort of maintenance of this thing, other than it's a cool
demo; versus, you know, there's teams out there that are
spending a lot of time reasoning about how to map the APIs into a
set of tools, or skills, that can be consumed by

(04:42):
an organization. And so, as a starting point, just
taking a lot of the guesswork out of, hey, is this a good
server? Like, if I have to pick between
the sort of 50 servers that provide me access to basic fetch
capabilities, what's actually going to run well for me?
What's going to have a decent kind of profile from a memory
management perspective? What's going to be super flaky
when I actually go from, hey, I'm using this, to, I'm npx

(05:06):
installing this and it seems to work well for me while I'm
coding, to, hey, I actually want to build a system that uses this
over time to access information that my organization is
really sensitive about. And so, yeah, start with a set
of base servers, then enable that as a building block so that

(05:27):
you can start, you know, putting together workflows that
use those servers to actually solve real
problems for you. Solve real problems for you when you're
trying to code inside the developer inner loop, and solve
real problems as you're starting to build the agentic systems
that address that outer loop, where you want to be able
to sustainably create access to those classes of systems that

(05:48):
your organization is living with every day, and
that are potentially under regulatory scrutiny or have a
lot of other kind of hardening requirements for access.
Yeah. And I've admittedly mostly viewed MCP as a tool for my
development. It allows me to have a common
interface in the tools that I've been using for a while, Cursor, Claude
Desktop, and just be able to input more context into it.

(06:11):
But the idea of having it be a part of a production system is
almost beyond the scope of what I've explored with MCP, just
because I build the final app. Like, I'm OK doing all
these different aspects. But how does the idea of a ToolHive
Docker container being deployed into a production system help
developers? Yeah, so there's
two sides to it, right? And this is the

(06:33):
thing that's relatively complex, because we're using MCPs to help
us work, and then, you know, increasingly we've
started to use MCP in the things that we produce, that are the
work product, right? So there's this whole thing where,
like, man is a tool-using creature, you know;
we shape our tools, our tools shape
us. There's a lot of kind of great quotes
that go into this space. And so that journey from, you

(06:57):
know, OK, I want to use MCP right now because I'm a
knowledge worker. Like, developers are knowledge
workers, and there's a lot of other classes of knowledge workers
out there. And so I want to be able to
introduce this as a technology that enables me to do my work on
a day-to-day basis. And the reality is, like, in
an organization, there are teams that are dedicated to supporting
their knowledge workers. There's teams that are dedicated

(07:17):
to making developers productive. That's the same team that
probably went and procured your Cursor license in the first place
so you can do your work. That team is motivated by your
success, and that team wants you to have access to the best tools
out there. But
that team also knows that maybe, if you're in a
bank, npx installing something on your desktop is not the
best idea. So the starting point for us
with ToolHive is embracing that situation.

(07:40):
You're a developer, you want to use this thing.
You maybe can't, because your organization won't let you. Or
you're a developer, you want to use this thing, but you're also
nervous, because you've been around long enough that you know
that writing secrets into JSON files creates risk for you as a
human being. Like, there's very real
risk for you as a human being. So the starting point of just,

(08:02):
let's have a system that, when we look at it as people that have been
around for a little while, gives you that experience, but also
gives you the assurances that you're not going to get hurt
while you're using it, is a great starting point.
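One concrete version of the "secrets in JSON files" risk is a plaintext API key written straight into a client config. A safer pattern can be sketched under assumptions (the `envFrom` field, the `github` server name, and `GITHUB_TOKEN` are all invented for illustration, not any real client's schema): store only the name of an environment variable and resolve the value at launch.

```python
import os

# Risky pattern (don't do this): a plaintext key written straight
# into a client's JSON config, where any process or agent that can
# read the file can exfiltrate it.
#
# Safer sketch: the config names an environment variable; the real
# value lives in the environment (or, better, a secrets manager).
config = {
    "mcpServers": {
        "github": {
            "command": "npx",
            "args": ["-y", "example-github-mcp-server"],
            # Maps the env var the server expects -> where to find it.
            "envFrom": {"GITHUB_TOKEN": "GITHUB_TOKEN"},
        }
    }
}

def resolve_env(server: dict, environ=os.environ) -> dict:
    """Expand envFrom references into real values at launch time."""
    return {
        key: environ.get(var, "")
        for key, var in server.get("envFrom", {}).items()
    }

# Resolve against a supplied mapping here so no real secret is needed.
resolved = resolve_env(config["mcpServers"]["github"], {"GITHUB_TOKEN": "dummy"})
```

This is the kind of plumbing that secrets managers and wrappers like ToolHive aim to automate: the secret never sits in the config file itself.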
And then over time that transitions from, OK, I'm
using these servers locally, but my organization wants me to be

(08:23):
able to access documentation that's in a specific repo.
The organization wants me to be able to access knowledge
that's not necessarily, you know, something that I want
empowered agents to just be able to have a direct line of connection
to. So this move, from servers being
a local concept, which is just, I'm running them, you know, as
part of my day-to-day workflow, to servers being the bridge

(08:47):
between the information that an enterprise has that's
necessary to enable me to do my job, you know, in that sort of
developer inner loop, and then ultimately supporting
other knowledge workers with
access, becomes really important. Most of the world's monetizable
information exists behind a firewall of some kind, right?
Like, most of the world's monetizable

(09:09):
information has authentication and authorization requirements.
So being able to bring that information into your workflow
requires a certain amount of respect for the sort of
access to that information. And it should involve a
certain amount of controls associated with it: being
able to scope this to access only that, being able to make

(09:30):
sure that the authorization flow works reasonably
well, being able to observe what information is actually being
egressed from those systems into your environment.
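Observing egress can start as simply as wrapping every tool call in an audit record. A hypothetical sketch (the wrapper, the record's field names, and the stand-in `fetch` tool are all invented for illustration):

```python
import json
import time

def audited(tool_fn, tool_name, log):
    """Wrap a tool function so each call leaves an audit record of
    what was asked for and how much data came back."""
    def wrapper(**kwargs):
        result = tool_fn(**kwargs)
        log(json.dumps({
            "ts": time.time(),
            "tool": tool_name,
            "args": kwargs,
            "bytes_returned": len(str(result)),
        }))
        return result
    return wrapper

# Stand-in tool: pretend this fetches a page.
records = []
fetch = audited(lambda url: "page body", "fetch", log=records.append)
fetch(url="https://example.com")
```

In a real deployment the log line would go to a structured logging or telemetry pipeline rather than an in-memory list, but the shape of the breadcrumb is the same.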
And so I think it's a very natural world where, yes, MCP
servers started as that developer tool for just
developer productivity. They're going to become a tool
that supports knowledge workers over time, because those
knowledge workers need access to their own information.

(09:51):
It becomes a way for an organization to control their
destiny and say, these are the set of systems I want to have
accessed by these people, or these agents that are doing work just
like developers. And over time, it's going to
become a natural part of the way that SaaS providers, who've been
delivering these vertically integrated experiences, start to
deliver value into an AI-enabled ecosystem.

(10:11):
If you've been watching the show for a bit, you know that one of
the big benefits to AI agents is the productivity gains.
Now, the demos are cool, they're a lot of fun.
I've made a few myself. But to get real world benefits,
you need to give it access to real data via MCP, and that's a
little bit scary. So that's why I've been using
ToolHive. ToolHive makes it simple and
secure to use MCP. It includes a registry of trusted
MCP servers. It lets me containerize any

(10:33):
server with a single command. I can install it in a client in
seconds, and secrets protection
and network isolation are built in. So you can try ToolHive as
well. Highly recommend you check it
out. It's free, it's open source, you
can install it today, and you can learn more at
toolhive.dev. Now back to the conversation
with Craig. I think part of it is just
making sure you don't stumble over,
you know, poor quality tools.

(10:53):
I think, you know, one of the things that I've certainly
observed is there's a very rich ecosystem of tools out there, and
some of them are more or less stable.
And so part of your work as a developer is, hey, I want to be
able to use a tool to access something off the local file
system, or a tool that accesses something off the
Internet, or I want to integrate with some other popular
technology, GitHub or what have you.
And there's a sort of abundance of choices available

(11:17):
out there. You know, you can go and find
a bunch of random servers.
You can npx install them, and it's up to you to figure out
how well they work. It's up to you to discover,
hey, once I've actually gone down the process of getting this
tool working for my particular workflow, that maybe memory
management isn't a priority for the tool developer and this
thing is a bit of a memory hog. Or, you know, hey,

(11:42):
you know, we encountered occasional
flakiness for the tool in certain situations.
So I think there's a lot of value to just having a
relatively standardized toolkit of commonly used tools that are
hardened, that actually work reasonably,
consistently well, that are thoughtfully put
together. And so we are working to take away a

(12:02):
lot of the challenges associated with
hitting those kinds of pre-production issues that you might
have with tool calling. So, just giving you a
standardized set of tools: we know they work well,
and we provide a simple way to actually get those tools
integrated into the environments that you're using,
if you're using Copilot or Cursor or whatever tools

(12:24):
that are supporting the tool calling ecosystem.
And I think the advantage is just not having to think about
it. Like, hey, I can discover them.
These are tools that have been appropriately vetted.
I'm not going to catch a disease by using this tool, because the
team has actually taken the time to review them, scan them, make
sure that they're packaged well, that
there are no vulnerabilities in the base images, etcetera.

(12:45):
So I do think there are advantages to developers as
well. It's not just about enterprise
use. In my day-to-day development
workflow, I'm getting better at using MCPs,
but I had some hesitation at first, because it exposes my
personal data, especially with access to the file system and other use
cases like that. But there's a bit of friction
when I set it up. I'm trying to, like, go through
Cursor or even Claude Desktop. If you want to add a new one,

(13:08):
you have to update the config file.
So this ability of discoverability, I think, is
a massive win. Do you focus on the
developer experience in how you set the product roadmap?
Or what's guiding the development flow of ToolHive?
The community. I mean, to be honest, our principal
focus right now is working with the community.

(13:29):
So, you know, as the community
is looking at real world
use cases, we look
to within the community to understand what problems they're
facing right now, in terms of being able to, you know,
identify and access these tools, tune the tools for
specific use cases. Make sure that, you know, based

(13:50):
on their own needs, things like, you know, authentication and
authorization... that's a real problem when you're
trying to, you know, actually use these tools, not just
for local experimentation, but potentially for agentic, you
know, use cases. And so I think the intent
here is
to make sure that it's very, you know, kind of
community led. The way I tend to describe this

(14:11):
is, I think there's a lot of folks out there that are, you
know, describing the Emerald City from the Wizard of Oz, and
developers are just trying to walk along the yellow brick road
right now. And so most of the motivation
for what we do, and most of the roadmap, is coming from real
developers out there that are looking to consume tools for
their kind of local development experience, but also

(14:31):
increasingly looking to use tools as part of the systems
that they're building. And as you go from that
use of tools for,
hey, I'm using Cursor and I want to be able to access things.
That's great. But when you're starting to get
to a point where, hey, I'm building an agentic system and
I want to be able to use tools that enable me to use the
Anthropic model with some other data that I have in a SQL

(14:55):
database and this other thing, there's a real need for just
having a very simple experience to be able to acquire
those tools for that developer inner loop. So that, you know, when
you're actually working to build that experience locally, you
can identify the right tool, you can get it integrated into
your developer environment, you can start to kick the tires
and get things going. But then when it comes time for

(15:17):
you to actually deploy that thing that you've built, that
agentic system, we're right there with you to
make sure that those tools are now available in something like
a Kubernetes environment that you might be deploying your
application into. So when you're starting to think
about that outer loop of development, you have a similar
experience in terms of being able to get that system
integrated, and also deal with the real world implications of

(15:37):
things like authentication, authorization, secrets
management, and all of the other things that are necessary to
actually make that tool usable in a production context.
And that's a great point for developer velocity.
If you can just take an existing tool and plug it into your
system, especially through Docker communication in
Kubernetes, it just expedites the process.
Plus, you get that reassurance that you don't have

(15:58):
to worry about all of the nitty gritty that your team has
already put together. Just in terms of the community,
how can people get involved? Where do they go?
If someone has an MCP server that they've been using, that
they feel is integral to their product, but they're just not
confident it can be deployed to production,
how do they bring that to Stacklok and work with the team
to make it into something more robust?
So the process for kind of adding to our registry is

(16:21):
pretty well defined. So what we've done
is basically described a set of criteria for things that we
would consider appropriate for introduction into the general
ToolHive registry. So if you're, you know, let's say
you're out there and you're
working on something, and you've built a really cool server, and
you think that this is something that unlocks real value in
the world, the way you would kind of get

(16:43):
that added is to just submit a PR requesting us to add it to the
registry that we've produced. And, you know, we'll then take
that PR, we'll run that tool through our set of
internal processes to make sure that it is meeting
the standards, so that it sort of is
achieving and demonstrating the sort of things that we think

(17:04):
that our community needs. And if it doesn't, well,
we're happy to go back and forth with you and work with you to
get it to the point where it actually does.
And so the intent is to make the sort of public
registry something that is kind of community owned, and the
community has high levels of ownership around it.
So if you have, like, a cool MCP server, you know, talk to us

(17:24):
about it, and we'll work with you to
make sure that the base image that we're packaging
is appropriate. We'll actually package it for
you, and we'll work to make sure that all of the
security considerations are in place so that it can be
published to that registry.
But it's also important to recognize that it's not just
about our registry, right? Like, you know, we have a
certain opinion around which MCP servers are great, but we also

(17:46):
see a lot of organizations or
developers or other people
wanting to start to create their own registries.
And so it's entirely possible to say, hey, these are the set of
servers that I want to use for myself;
I have a GCR registry here, I've populated it,
let's just use those servers. And you could take a subset of
the servers that we've published and publish them into your own
registry, or you could just, you know, make your own, you know,

(18:08):
make your own criteria up, and decide how you want to
control that. So we're trying to create that
sort of flexibility, starting with a really well curated set
of servers that you can just rely on and trust,
but also making options available, like
being able to run a server directly in a container,
so you don't actually have to go through the whole packaging

(18:28):
thing. You just get to benefit from the
use of a container as an air-gapped
environment to run an npx-installed server in. Or, you
know, for a lot of the real world use, the more enterprise
use, people typically want to have their own registry.
They want to be able to apply their own criteria
for what makes sense for their organization.

(18:49):
And so they can take a lot of what we've done and then
populate those into their own registry, or they can introduce
the servers that they built for themselves, for their
organization, into that environment.
And do you have any general advice for people to get up
to that standard? So maybe there's a little less
back and forth; almost like a checklist of things that they
can audit on their own servers to get a bit more robust.
And even if they don't go the Stacklok route, just by being

(19:11):
able to be a little more
confident that what they're making is secure. Yeah, I think there's a few
things, right. Like,
the starting point, I think,
is kind of community viability.
And, you know, what we obviously
look for when we want to recommend a server is,
you know, is there an organization behind it, or at
least a set of individuals, right?
So there's always that sort of single individual risk, you

(19:33):
know. Like, hey, here's a cool server, but it's only got one
human being supporting it, only one human being maintaining it.
We've never seen any external kind of participation or
engagement. We don't necessarily have a
strong sense of this existing a year from now.
For instance, if you're looking to use a
server, that doesn't really matter as much for your

(19:53):
own kind of, like, you know, hey, I'm using servers through the
lens of Cursor and I just want to access things so I can
produce better code. You know, you can take a chance
and use what you want. But if the answer is, hey, I'm
using the server to access relational data in an agentic
system that I'm going to be supporting for the next three
years, you probably don't want to trade
out that server somewhere down the line.

(20:13):
And so that's the starting point, around just, you know, community
viability. GitHub stars is a heuristic that
a lot of folks have used, but we tend to prefer looking at
participation. We value looking at
things like, how are issues being dealt with?
You know, what's the rate at which the community is burning
through the issues? What level of
engagement do they have with their own community? That's the

(20:34):
type of criteria that we're looking for.
And then obviously there's a lot of the mechanics
of just using the tools that exist right now to start, you
know, scanning for malicious code, making sure that, you know,
kind of coding standards are being upheld, that the
server itself is not, you know, pulling in additional
dependencies that are potentially problematic, that

(20:55):
don't have, you know, kind of cohesive security
around them. So it's kind of
trying to look at that holistic picture.
And yeah, so, you know, my sort of advice would be to just
treat it the same way you would treat any other package,
and ask the question, like, would I
include this package in an application I'm building?

(21:17):
Would I include this package in an application that I'm
going to live with for, you know, a while, and start
to apply the same criteria? And, you know, we're looking to
just bring a lot of our perspective that
came from the world of supply chain security to the MCP
ecosystem. But it's a lot broader
than that, right? Like, it's
not just about, is the server good or bad,

(21:38):
you know, is the server safe or not safe.
It's also about, you know, can I make the server work
well in my environment? You know, can I actually use the
server with a halfway sensible authentication and authorization
system? That's the
type of consideration that we think a lot about.
And you just mentioned something about gauging the, I don't

(21:59):
say long term viability, but, you know, having some staying power
with things changing so quickly. And with your experience
with Kubernetes, being able to understand the importance of good
infrastructure for expanding out:
do you see current gaps in the AI ecosystem for infrastructure?
Is MCP, being just a protocol, going to be sufficient to kind
of help elevate agents to become a bigger part of day-to-day
life? Or do you think there's other

(22:20):
avenues that we as a community still haven't explored enough
yet, so we get this robust infrastructure to really start
accelerating deployment of agents?
You know, I think there's these kind of
different horizons that this exists on, right?
Like, you know, in my mind at least,
there's, how are we using these tools in our
day-to-day work just to produce code, and the work product that's

(22:41):
being produced is the code. And then, you know, how are we
using these tools to produce agents, where the work product
isn't just code, it's code plus a model plus a context retrieval
subsystem, plus all of those other pieces.
And the problems, you know, exist in a variety of different
ways. Like there's an obvious
challenge associated with authentication and authorization

(23:02):
that we're dealing with right now, right?
Like, so, you know, as we're working in the
MCP ecosystem, I think there's a lot of
progress that's been made towards authentication between a
client and a server. And if you think about
the OAuth workflow to do simple client to server
connectivity, it's relatively solidified, right?

(23:24):
Like, hey, we know how to run our auth flows
between a client and server. The question is now, if you're
connecting a client to a server and that server is now
connecting to resources that require further authentication
and authorization, how do you handle that, right?
Like, do you want to do a chained auth workflow?
Do you want to use a JWT token exchange?
You know, what's the right sort of system to support

(23:44):
that, especially when you think
about the incredibly complex
world of authentication and authorization on the back end?
So I think there's obviously a lot more work to be
done around authentication and authorization.
Something has to exist that sits between, you know, a classic
OpenID principal, a human being, and a relatively traditional

(24:05):
service account, which enables you to take a set of claims that
a person might have, propagate just a subset of those
claims, and then present them to an
agentic system so that it can operate on behalf of an
individual for the duration of the task, and then have that
access rescinded over time, right?
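The claim-subsetting flow Craig sketches is close in spirit to OAuth 2.0 Token Exchange (RFC 8693): trade a user's token for a narrower, short-lived token the agent holds for the duration of the task. A rough sketch of such a request body (the scope and audience values are placeholders, not any real service):

```python
from urllib.parse import urlencode

# OAuth 2.0 Token Exchange (RFC 8693) request form, sketched.
# The agent presents the user's token and asks for a token that is
# down-scoped (a subset of the user's claims) and aimed at a
# specific audience. Scope and audience values are placeholders.
form = {
    "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
    "subject_token": "USER_ACCESS_TOKEN",  # placeholder
    "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
    "scope": "repo:read",                  # only what the task needs
    "audience": "mcp-server.example.com",  # who may accept the token
}

# This is what would be POSTed (form-encoded) to a token endpoint;
# the issued token should be short-lived so access rescinds itself.
body = urlencode(form)
```

The short lifetime plus narrow scope is what approximates "operate on behalf of an individual for the duration of the task, then rescind."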
So that authentication and authorization flow is something
that is really interesting. Lots of great work being done in

(24:26):
the MCP community. You know, we see a lot of great
proposals that are flowing by, and we're
certainly spending a lot of time, you know, talking to folks
and thinking about what those options are for real world
use. The other
problem associated with that is just the shape
of agents particularly, and the

(24:47):
problems that are associated with, if you have an agent that
you have provided access to a relatively large set of data.
So, you know, hey, I have an agent that's gone off and indexed
my Google Drive; I want to create a Glean-like
experience. How do I preserve authorization
through that flow? How do I avoid a confused deputy
situation, so that that agent is not inadvertently leaking

(25:08):
privileged information in my organization that it
shouldn't be dealing
with. So it's not just about
authentication; there are some really gnarly and complex
authorization tasks that need to happen.
The other thing that I see a lot of kind of interest in, around
progressive thinking, forward thinking people, is
establishing and understanding the provenance of data, right?

(25:28):
Like, if you're in a real-world situation and you have
some kind of monetizable database. Hey, these are my
positions as a hedge fund; if these leaked, it would be
really problematic.
You may want to be able to run agentic systems that

(25:49):
can start to reason about some of those positions
and look for unique insights beyond what a human analyst
could perform.
But you need to be able to establish provenance.
You need to know exactly which systems are accessing
which data, and when. You need to be able to track the flow
of data into and through these agentic systems and
understand where it's being propagated to. So I'd say that the

(26:10):
ability to understand and reason about the flow of
information through agentic systems, and to establish that
breadcrumb trail of what's using what, is very progressive
and interesting.
And then in terms of the nearer-term stuff, there's a lot
of prosaic things

(26:30):
that just need to be wired in. You know, hey, I'm building
this thing. How do I wire it into OpenTelemetry?
What does an appropriate dashboard look like?
How do I deal with throttling, so I don't get an insane
Anthropic bill at the end of the week because the model is
seeing these tools and just dumping enormous amounts of
context into the context window and blowing up my
inferencing costs?
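The throttling concern can be made concrete with a minimal per-window token budget. This is an illustrative sketch, not ToolHive's implementation; a real deployment would meter the actual usage reported by the inference API rather than a rough estimate.

```python
import time


class TokenBudget:
    """Cap the number of tokens an agent may spend per time window,
    so runaway context dumping gets cut off instead of billed."""

    def __init__(self, max_tokens: int, window_seconds: float):
        self.max_tokens = max_tokens
        self.window = window_seconds
        self.used = 0
        self.window_start = time.monotonic()

    def allow(self, tokens: int) -> bool:
        now = time.monotonic()
        # Reset the counter once the window has elapsed.
        if now - self.window_start >= self.window:
            self.used = 0
            self.window_start = now
        if self.used + tokens > self.max_tokens:
            return False  # throttle: caller should trim context or wait
        self.used += tokens
        return True
```

A caller would check `allow(estimated_tokens)` before each model request and back off when it returns `False`.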
So those relatively prosaic ilities are

(26:53):
another thing that needs to exist for sustainable use of
agentic systems that are accessing real-world subsystems
and being trusted to run in a relatively autonomous way.
I'd like to double-click on the trust aspect.
When you mention the ability to observe the data flow,
tracing comes to mind: being able to actually see how

(27:15):
agents perform tasks along the way, so you can debug faster,
but it also helps produce visibility.
A lot of people view AI as a black box.
One thing that Stacklok, sorry, ToolHive, does is secret
protection and network isolation.
How do those, which tend to be removed from the end use of
these projects, help with trust? Is it just going to be that
the developer can claim they're creating products that only

(27:36):
allow access to certain data? Or is there any way we can
percolate up the issue of a trustful system, a trustworthy
system, that tools like ToolHive will actually help
contribute to?
There's the sort of prosaic and the sublime components of
this, right? The prosaic components of this are:
you would be borderline crazy having an API

(27:59):
key to a production system accessible to a stochastic
agent, right? Let's face it: these agents
don't have a conscience.
They don't have intrinsic awareness, right?
The constraints are precisely the constraints

(28:19):
that you place upon them. And so there's a set of
things that need to be done to just not shoot yourself
in the foot. And one of the things
I would suggest you should probably not do, to avoid
shooting yourself in the foot, is write secrets into JSON
files that are readable by these agentic systems that are

(28:41):
egressing that content to the cloud.
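To make the anti-pattern concrete, here is a small sketch in Python. The names are illustrative; a manager like ToolHive would inject secrets at runtime from a real secret store rather than a bare environment variable, but the contrast is the same.

```python
import json
import os
import tempfile


# Anti-pattern: writing an API key into a JSON config file that an
# agent (or its code indexer) can read and egress to a model.
def write_config_insecure(path: str, api_key: str) -> None:
    with open(path, "w") as f:
        json.dump({"api_key": api_key}, f)  # secret now sits in plaintext


# Safer sketch: keep the secret out of any file the agent can index,
# and resolve it from the environment at the moment it is needed
# (a stand-in here for a real secret store).
def resolve_secret(name: str) -> str:
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret {name!r} not provisioned")
    return value
```

The difference that matters: in the first case the key is data the agent can see; in the second it is material the runtime holds on the agent's behalf.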
And in some cases, you don't know what you're dealing with.
Like, hey, I'm using OpenRouter and I want to try this new
model. I don't know who's behind that model,
and I don't know what they're doing with that data.
I sure don't want my API keys being egressed to that model
accidentally as part of a

(29:01):
code-indexing activity. Who knows if they're going
to get retrained on it? Who knows who's behind that model,
right? There are a lot of these systems that are suddenly,
immediately made available to you.
So I think there's the mundane, which is: let's just make
sure that we are taking care of a set of things
that will avoid us getting hurt over

(29:25):
time and in the future. And then when we talk about the
sublime, where we can get to over
time, there's a lot of techniques that we've
built over the years to start to assert trust in certain
environments, right? So, the ability to say: I only
want to run servers that my organization signed, and my
organization is only going to sign these servers using

(29:47):
something like a Sigstore key, if they meet a certain set of
operating criteria. And you can start to
establish tiers of trust.
So, these are the sets of servers that are
hardened enough for me to use in this environment.
And when I say hardened, it's not just that the image is
hardened. It's also about hardening the constraints.

(30:08):
So basically, being able to package up a server and say:
the server is signed, and here's the explicit profile
associated with the server that my organization is willing
to adopt. So, hey, the server has access to the file
system, but when I'm running the server, it can only mount
this subset of the file system locally so that
it can access it. Hey, the system is accessing

(30:29):
resources on my intranet; it can only access
these resources on this subnet, or in
this domain, or at this HTTP endpoint. And being able to
bundle up a server along with
the constraints, and then create a system that can
deterministically enforce that the server is running
with those constraints, and that only servers running with those

(30:52):
constraints can be connected to these clients.
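A constraint profile of the kind described here might look something like the sketch below. The field names and checks are hypothetical, not ToolHive's actual schema; the point is that the constraints ship alongside the server and can be enforced deterministically at runtime.

```python
# Hypothetical constraint profile bundled with a signed MCP server.
PROFILE = {
    "server": "example-fs-server",
    "filesystem": {"allowed_mounts": ["/home/user/projects/docs"]},
    "network": {"allowed_hosts": ["intranet.example.com"]},
}


def mount_allowed(profile: dict, path: str) -> bool:
    """Only paths under an explicitly allowed mount root may be read."""
    return any(path.startswith(root)
               for root in profile["filesystem"]["allowed_mounts"])


def host_allowed(profile: dict, host: str) -> bool:
    """Only explicitly listed hosts may be reached from the server."""
    return host in profile["network"]["allowed_hosts"]
```

An enforcing runtime would evaluate checks like these on every file and network request the server makes, and a client would refuse to connect to any server not running under such a profile.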
So there's this long journey: from let's not be silly and
get hurt by having secrets egressed to models because we're
writing API keys into JSON files, to hey, let's actually
look at what it takes to create much more sophisticated
real-world use, so that we can get the

(31:15):
constraints in place that enable
us to trust these agents enough to actually start reasoning
about our data. Very much on board with the idea of
whitelisting, scoping, and putting guardrails on the
capabilities of these things. One idea, and I'm curious about
your thoughts on the future here:
do you think API keys are going to stick around?
It feels like a massive vulnerability, and I'm
unsure if we can use something like public-private key pairs

(31:36):
with the mass public, just because of the complexity of
setting them up. But do you think API keys are
going to hang around, or do you think we're going to need
better systems, now that we're feeding so much data to
systems that a single string of characters can cause such
catastrophic damage? I've been in IT for a long time,
and unfortunately these things are fantastically, unbelievably

(31:57):
durable. And this is the sad truth:
why does the mainframe still exist?
If you think about it from that perspective, the
sad truth is, yeah, I think for progressive
new systems we'll get to the point where you
have agentic identity as a first-class thing, right?

(32:18):
We'll move away from a world where you are
dealing with directed access through an API
key, to liaising with an identity provider, an IdP,
that's able to start carving out identities.
You'll start to have systems that look a lot more like
what people describe as perimeterless

(32:39):
security, or zero-trust security, right?
But there's a reason why zero trust hasn't dominated the
world, right? It's an
obviously correct way to reason about systems:
being able to construct a service identity, where every
action is driven

(33:00):
through the lens of a specific service identity.
And realistically, the only organizations that have been
able to make the transition to a zero-trust operating
profile are organizations that have a fantastically
homogeneous internal workload, like Google, right?
Google's been on the zero-trust train.
In fact, they pioneered the idea.

(33:21):
The reality is, real-world enterprises are messy, right?
An organization has a budget, and that budget is being
deployed to address their problems.
Some of their budget is deployed to maintenance,
some of it to updating their technology
fleet to be more modern, and some of it is being allocated
to

(33:42):
forward-looking systems. The reality is that the budget is
never entirely allocated to just deprecating all the old
systems and moving things forward. So
there's always a server under someone's desk that's running
something it probably shouldn't. And given
the conflicting priorities of real-world organizations,
that's going to continue to be the case.

(34:04):
So, a lot of people talk about legacy systems;
I tend to think of them as heritage systems.
There's this reality that those systems actually represent
your organization in a very real way. An organization is
people, process, and technology, and those systems are,
in many ways, the technology they've written
over the last 20 years.
And unfortunately, the technology that was written over

(34:24):
the last 20 years has this crack-cocaine addiction to
API keys. It's just not built
around those security primitives. It's not built around the
use of a centralized IdP. It's not built
around the idea of identity. So the way I describe
it is: yeah, it's going to get better.
I think the world's going to get

(34:46):
better. Certainly, as these types
of systems challenge conventional design, and as we
start to see richness evolving, there will be classes of
systems that do get better,
and then we'll no longer have the API key.
But until that last database is rolled out, and until
that last WinForms application is deprecated, until

(35:10):
that last mainframe is no longer
sitting there crunching a lot of the ERP problems that your
organization has, I think we're going to be living
in this very "yes, and" world.
Yes, we'll have API keys; yes, we'll have IdPs; yes, we'll
have some sort of zero trust; yes, we'll have a whole
bunch of other things. But if anything is what I've

(35:31):
learned in the last 25 years of working in
this space, it's that enterprises are complex and
heterogeneous, and there's
just no one solution for them. In terms of the broader
ecosystem, do you think the slowness that comes
with being a big company is going to be a detriment and
allow startups to start taking over more and more market

(35:52):
share? Or do you view it more that the
large budgets that come with these big corporations,
which let them run more experiments and try out
different ways of leveraging AI, are going to keep them
ahead? How do you see the market share of the economy
changing with this? Yeah.
I mean, these are the moments that,
if you look back on humanity, have defined our

(36:15):
epochs; they're defined by the dominating technology: the
Stone Age, the Iron Age, the Bronze Age, the Industrial
Age, right? And there's no question that we're now moving
into the AI age, and it represents a tremendously
disruptive moment that's going to change everything, right?

(36:35):
It's going to change the unit economics associated
with work. It's going to change the
types and classes of goods that we can produce
and market, right? So I don't think there's any
question that this represents a moment of profound and
fundamental disruption to almost every facet of industry.
And it's breaking faster than any of the other
technologies I think we've ever encountered, right?

(36:57):
So this is the moment when there will be a massive
redistribution of IT spend, of IT budget,
across the industry, right? It does represent a
moment where organizations that are relatively unencumbered
will have opportunities to create new classes of systems,
new classes

(37:17):
of service, and new classes of product that will enable
them to outperform their competitors. In terms of
reading the crystal ball and saying how that's going to
play out: are we going to see new banks displace JPMC from
the pedestal as the preeminent financial services
institution?

(37:38):
Probably not, right? Because there are a
lot of real-world implications. And I think some of
these large organizations are going to move very
authoritatively in this direction.
But I do think it does represent a moment of
substantial disruption, and organizations that are able to
navigate the complexities of their regulatory or otherwise

(38:05):
complex operating environments, and start
accessing these tools to fundamentally change the
cost profile of their ability to work, are going to
outperform. And I think we will see a new
generation of breakout companies on the back of this:
the Amazons, the Googles that came out of the

(38:27):
.com era, right? The ones that became
these megaliths of technology.
We will see a new wave of companies like that start
to emerge.
But I don't think it's obvious that
it's a complete reset of the corporate landscape.

(38:47):
I think there are opportunities for really great big
companies to start to emerge. I think some of the great big
companies we have will go the path of GE,
where they're unable to translate
what they've been doing into this new world.
And I think the cost economics across a lot of
different areas are going to

(39:08):
change, and folks that have been able to dominate will be
challenged. Red Hat, for example, has dominated the world
of open-source delivery for the last number of years
because they have a very nuanced way of approaching
open-source communities, operationalizing them, etcetera.
Guess what? The cost economics of
operationalizing open source just changed completely.
And Red Hat will either need to adjust to

(39:29):
those realities, and probably will, or someone
else will start to show up and be
able to chip away at their dominance in
certain areas, whether it's in the OS or in any of the
other peripheral areas they're operating in.
So it is a period of opportunity.
It's a period of tremendous change.
And it certainly is an exciting time to be a startup,

(39:52):
because when you're in a disruptive environment
where the cost economics are changing, not being tied to an
existing sales motion, and not being tied to an existing
support motion, gives you just tremendous
opportunity. Yeah.
And on top of being the most exciting, it's the most
accessible. People can fire up a proof of
concept at home relatively easily.
You have access to all of these intelligence tools that can help

(40:12):
you get up to speed and try to identify blind spots.
You mentioned open source, and that's my go-to,
so I have to double-click on that.
You have Stacklok and ToolHive running as open
source. What was the business decision
that drove that? For me,
there are a couple of points here.
When you're building a
business, I'll play it through two angles, right?

(40:33):
People say product-market fit,
and for me that really means a simple, repeatable sale.
And as we started to look at this ecosystem, there's
a lot of folks moving really quickly, and open source is
probably one of the best enablers for an
organization to move very quickly.

(40:53):
You don't have to have a conversation with someone to use
their product. They don't have to have any
commitment to you to use your product.
They don't have to give you their e-mail address
to use your product. So it is by far the cleanest
and simplest way to capture an organization's attention, and for
you to start to create

(41:16):
value for someone that you don't even have a relationship
with, right? So,
for me, that sort of simple, repeatable sale:
you have it when people are just using your
technology. And you're giving up something, right?
You're giving up control; you're giving up a competitive
advantage in some ways. Because one of the greatest
compliments of open source is having another organization,

(41:38):
behind the scenes, take what you've built, package it up,
and sell it. There's nothing stopping them doing it, right?
So your R&D budget is now something
that they can tap into.
And so for me, the calculus is relatively simple. One is,
I want to see what works, and open source is a great way to do

(41:59):
that. Two is, I've built technologies like Kubernetes in
the past, and I've recognized the power of open source not
just as a way to unlock usage, but also as a way
to start driving collaboration, right?
There's a lot of people out there looking to solve
these problems, and approaching
this with an open-source mindset, enabling an open contributor

(42:20):
model, means I don't have to own all of this.
I would be perfectly happy with someone from Red Hat
showing up, or someone from Microsoft or Google
or anyone showing up, and starting to take
ownership of the project. So it's that sort of: if you
want to go fast, go alone, and I'm happy to go
alone in single-vendor open source for a while.
But if you want to go far, go together, in community-

(42:41):
owned open source. If we want this to eventually emerge as
a platform, you kind of need to
let the reins go a little bit, so that
other people feel ownership of what you're doing.
And then it's up to you to figure out how to create
enough value beyond the community-centric open source to
generate a healthy and sustainable business. And the good
news is, there are so many obvious ways to do that

(43:04):
in this ecosystem. Every time I
talk to an enterprise organization, there are four or
five things they're very upfront about; they'll say,
these are important to us, and yes, we'd pay money for
them. And so you can start to use that
open source as a way to create enterprise-specific
features that you can charge money for, whether those are
open source themselves or whether the organization just
needs someone on the line to support the use of it.

(43:26):
Yeah, I fully agree. And it increases trust in the
product. Like I said, you'll get product-market
fit quicker because you have real people using it.
One aspect that's become more popular, too, is the idea of
solutions engineering, where you actually have employees,
staff, who are able to integrate systems better.
Do you see that proliferating to more industries, where this idea

(43:46):
of having multiple systems talk to each other is going to
become more important? Or do you think that will be an
area where AI thrives, and we don't even need to start
exploring it in the human domain?
Yeah, I think it's interesting, and it
comes back to that earlier comment around the
cost economics of integration, right?
When you think about where value creation is
happening in the ecosystem right now,

(44:07):
I think of two areas where value creation
happens: value creation happens in solution engineering,
and value creation happens in platform engineering, right?
So it's like, I deliver a platform, and it creates a
certain amount of value because everyone can use the
platform. And then there's the solutions engineering space,
where I create a solution using that platform.
And as I'm using the

(44:31):
platform, I fit it to a specific use case, a specific
area. I think the thing that surprised
me about the AI space is where most of the value creation
is happening. And this has been completely contrary to my
own instincts. I'm a platform
engineer, so I've always kind of

(44:52):
just taken a platform-first approach
to things. And I think what we're starting to see in
the AI ecosystem is that most
of the value creation is actually happening in the
solution engineering space. If you approach a
problem with a platform engineer's mindset and try to a
priori identify the set of things that need to exist for
the system to function, like I need a vector database, I'm

(45:13):
going to use this inferencing subsystem,
I need this kind of connector, any of these
things; if you start from that perspective
and then try to figure out how to eventually
build a solution on top of it, you will fail, right?
It won't deliver the goods. Things are moving so
quickly under the covers. If you've made
a bet on one model, three months down the line

(45:34):
you're going to have a different model available.
And so I think there is an inversion here: for
an organization to succeed, they have to be able to embrace
a solutions engineering mindset. They need to be able to
move very quickly to feel out what works. And once they've
figured out what works, they need to figure out what
platform enables them to start to unlock that value.

(45:56):
And I do think we will see
full platforms emerging; we will see this kind of agentic
middleware start to emerge over time. But it's going to be
companies like Google and Microsoft and
potentially Amazon and OpenAI, and those types of
organizations, that actually have the wherewithal to create
that kind of full, soup-to-nuts middleware.

(46:19):
And so I think it's premature for an organization
to embrace a full platform-build mindset and
start to think about what the Kubernetes for agentic
workflows looks like.
So I do think embracing tools that enable you to safely
solution-engineer is probably the most pragmatic thing an

(46:41):
organization can do right now. And once you start to
identify what works, not just once but twice,
three, four, five times, and once you've started to get a
handle on your organization's ability to consume these
types of systems, that's the moment at which you start to
ask questions around what platform is necessary to bring
this into a production context. How do I deal with the

(47:03):
ilities: the security, the reliability, the manageability,
the observability associated with the system?
I don't know if that answers the question
the way you were seeing it, but that's something I've
been thinking a lot about recently.
Along the same vein as the switch from a platform
engineering to a solution engineering mindset, do you have
any other advice for people on what mindset they should try to

(47:23):
be cognizant of, if not fully adopt, in order to thrive in
this new AI age? Yeah, I'll be honest:
I've eaten a lot of humble pie over the last
six months, right?
The way that I've personally operated is,
you sort of build this relatively sophisticated model
of how the world works,

(47:44):
and then you kind of will something into existence
through your mental model of how the world works, right?
That's how we built Google Compute Engine and Kubernetes
and a variety of other things that I've worked on.
And I've been wrong more often than I've been right when
I'm interacting with these systems, because my instincts
are wrong, right? You think you know;
don't fall into the trap of

(48:08):
thinking that you do. You spend a bunch of time,
you've observed, you've watched the Karpathy series,
you've got this model in your mind
of what the various constituent pieces are,
and now you feel qualified to
assert what's going to work.
You're probably going to be wrong. At least I've been wrong
consistently. There's a lot of things that
have happened that I just didn't see coming.

(48:29):
I didn't understand that 20 years of experience building
enterprise-class distributed systems just doesn't
translate well into this world. And so I think the reason
we're seeing organizations like Cursor outperform in the
space is because they come in completely unfettered.
These are individuals that are incredibly bright.
They're incredibly close to the

(48:51):
tools, and they're not making assumptions.
They're not limiting their thinking based on their
understanding of how the world of distributed systems
tends to work, right. And so the thing I would
say, and I've heard this from a lot of people
I've talked to recently, is like this:
there's a person I had breakfast with the other
day who's an incredibly smart guy.

(49:11):
He's one of the smartest people I've ever worked
with, one of the most organized people I've ever worked
with. And he described a situation where they set up three
teams to go and tackle a problem, because they were
struggling to address it. And the team that nailed it was
the youngest, most freewheeling YOLO team they had.
They couldn't use the solution that team produced,
because it

(49:32):
wasn't necessarily production-ready, but
that was the team that created the most inspiration for
the system they eventually did build.
And so I think you have to be very deliberate about not
prejudging. That sort of "hey, I can see it working,
that's fine" attitude? No: I have to see it
working, actually read its results, play with it, and
optimize, optimize, optimize.

(49:56):
Just not necessarily in the traditional
systems engineering way. Try five different prompt
profiles. Like, hey, what else
can we do? Once we start
seeing this thing work, can we
actually feed the results back in and see if it can
self-optimize? You have to hold on to that
experimental, playful approach to technology a lot

(50:16):
longer than you think you would, because it's so
easy to start to converge on a local optimum for the class
of system that's being built. That's
phenomenal advice. Last question for me: what are
some real-world use cases for MCP that you've seen that
have a lot of potential moving forward?

(50:36):
What do you use in your day-to-day life?
Yeah, so, I've started to use MCP for
pretty much everything. I'll give you a couple
of examples of things that we've done recently.
We built a system that indexes all of our internal
documentation. It indexes all of our

(50:56):
Discord traffic, and it indexes our GitHub repos.
And it has basically built a semantic engine that now
enables me, during my day-to-day tasks, to just ask
questions. I can pick whichever model I
want. And so we call it the Stacklok knowledge circle,
right? It's basically just a robust

(51:18):
knowledge base that gives me access to everything.
And it gets integrated into the tools that
I'm using all the time. We also built a lightweight
chat interface on top of it, because it's convenient:
I'll just go to a web page and ask
questions about everything. When we started
looking at telemetry, we built MCP servers that enable me
to start asking questions like, what are
we actually seeing out there? Which regions are we seeing usage

(51:40):
around? Being able to process time-series
telemetry data from things like the update service
that we offer, which is completely anonymous;
we're not getting any information about what a person's
doing, we just see where things are that are requesting
updates to the underlying subsystem. And so,
being able to start unlocking this world where I'm

(52:03):
changing how I work. I used to work by context-shifting
between a whole bunch of different tools.
Now I'm able to go to a single tool.
And rather than going to a
different SaaS tool to ask a question, I just go to one
place, pick which tools I want to use, ask the question in
natural language, and get synthesized results.

(52:24):
So the promise of this is a fundamental change to how we
work, right? It used to be, gosh, I don't even know how
many SaaS applications I touched on a
day-to-day basis. If I'm looking at a candidate, I go to a
bunch of SaaS applications. If I'm trying to reason about
what's happening in the community, I go to a bunch of
different SaaS applications,

(52:46):
between GitHub and Discord and Google Drive, etcetera.
So there's this shift of attention happening in
my day-to-day life towards this kind of centralized AI
workbench, where the data is brought to me, not me going to
the data, right? The SaaS systems become the
systems of record on the back end, and the MCP servers become
(53:07):
the gateway for me to access data that can then be
joined with other relevant data, so that I can ask
questions like, what community issues are we seeing,
and have those issues affected our ability to...
let me give you a better example.
What campaigns have we run from a marketing perspective, and

(53:28):
how are those campaigns now actually affecting our
observable usage, or access to our web properties, or
all these other things? So my team is
going through the exercise of taking this AI-maximalist
mindset, looking at all the things we use, and then
asking the question: how do I make those accessible to this

(53:49):
kind of AI workbench? How do the developers get
that in Cursor, as part of their daily life?
How do I get that in whatever kind of
environment makes sense for me, since I don't write a lot
of code every day? And that's
changing how I work. And I think it's going to change
how everyone works over time. Love it.
Excellent, Craig, I really appreciate you coming on.
This was an awesome chat. I really think more people should

(54:11):
try ToolHive, put more thought into security, and see how they can leverage MCP to
change the way they work. Before I let you go, where can people catch up on what
Stacklok is doing? How can they get involved or test out ToolHive? Where should
people go?
You can go to toolhive.dev. We love to see people in the Discord community. So if you
go to toolhive.dev, you can find the GitHub profile.

(54:34):
You can download the tool and try it yourself on your local machine, or if you want
to just jump on the Discord server, say hi and we'd love to chat with you.
I'm in the Discord and it's a friendly group, so I encourage everyone to go there.
Craig, thanks again. Take care.
Thank you for joining me for my conversation with Craig McLuckie. I always have a
great time talking to Craig, and I really wanted you to get an understanding of the
importance of security in this day and age, with AI agents running around

(54:57):
and people vibe-coding apps; security is really taking a back seat, and that's kind
of dangerous. So using a tool like ToolHive, which I really recommend you check out,
is a great step in the right direction. It's free and it's open source. Check it out
on GitHub, join the Stacklok Discord. I'm there; you can come say hi. But let's make
sure that this next phase of development, as we continue to accelerate, includes a
focus on security. So thank you for joining, and

(55:19):
I'll see you next week.