
May 27, 2024 70 mins

In this episode of the DiscoPosse Podcast, Jack Naglieri, the founder of Panther Labs, shares the journey of his company from a single platform to a leading cloud-based SIEM solution. He discusses the strategies businesses can employ in the changing landscape of cybersecurity and data collection. The episode delivers invaluable insights for anyone interested in cybersecurity, data strategies, and the journey of building a tech startup.

The episode also explores how businesses can protect their most crucial assets amidst the sea of data. It covers key facets of data protection, including data capture, intent establishment, and deploying layered defenses. Furthermore, the discussion elucidates Panther's unique detection-as-code approach that caters to business-specific needs and highlights the importance of a robust risk reduction strategy.

In addition to discussing Jack’s experiences, lessons around customer retention, creating customer-centric teams, and developing intertwined cycles of feedback are also discussed. The episode explores how Panther makes choices about technology integration points and balances co-engineering and co-marketing in partner ecosystems. It highlights the emerging importance of data lakes and big data.

The conversation shifts to discuss the intersection of data security and Artificial Intelligence, emphasizing the rising intricacies of SIEM and how businesses harness data to make informed security decisions. The discussion also addresses potential risks brought about by these advancements, especially concerning data retention. Finally, the conversation underscores the optimistic outlook brought about by the democratization and advancements in open source models.

Jack shares his community-focused efforts through his podcast "Detection at Scale." Lastly, Jack reveals how he uses his personal and professional learning practices to contribute to his work in technology. This insightful episode is truly inspiring and offers valuable takeaways for those passionate about software development, data security, and customer-centric design.

This podcast is made possible thanks to GTM Delta


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Hey, this is Jack Naglieri. I'm the founder of Panther, a cybersecurity company
based in San Francisco, California, and you're listening to the Disco Posse podcast.
Music.

(00:31):
So, Jack, thank you very much for joining.
I've been doing a lot of work in some of the stuff that I've been doing for
my clients and just always keeping my eyes and ears on the industry.
And more and more, Panther keeps coming up. And I finally said, okay, this is it.

(00:52):
Why have I not gotten in front of these folks more?
And then I realized, yeah, this is definitely it.
Thankfully, reached out, your team connected me with you, and I really appreciate you jumping on.
For folks that are brand new to you, Jack, if you want to do a quick intro,
and then we'll talk about kind of the Panther story, which is really interesting

(01:14):
unto itself in the way that you're solving problems.
And also, this is a really interesting area of study for a lot of people.
And I would love to help people understand like actually what can we do about this stuff?
We're in a weird world where everybody has, you know, top three things to keep yourself secure.

(01:38):
And it's be careful, trust less and capture logs. You're like,
okay, that's actually not helpful.
So with that over to you, Jack. Yeah, Eric, thank you so much for having me on.
Really excited to talk with you today, share a little bit about the story with
the company and the team we're working on.
So I've been working in security my whole career. I started as an intern in college.

(02:02):
I went to George Mason University back in the DC metro area,
for those who are familiar with that.
And I ended up moving to California in 2012 to work for a company called Yahoo.
And people listening might know that company. They had a couple billion users
and they had some interesting news stories too around that time that I was there.

(02:24):
And really cut my teeth doing security analyst work, forensics, wearing a lot of the different types of hats
on security teams, and doing a lot with SIEM. I've always done work in SIEM.
Even from my first internship in college, I was just given a screen of alerts
and said, like, okay, let's go triage this and let's go reimage machines.

(02:45):
And let's just kind of learn about what compromise really means.
And I did that for many years, led me to Airbnb.
And I worked on the security team there for about two and a half years.
And then I founded a company called Panther. Panther Labs is sometimes how we're known.
That's like our official company name, but just more commonly we're referred to as Panther.

(03:07):
And what Panther is, is a lot of the extensions of what I worked on in my career.
So it's a cloud-based SIEM.
And our really big focus is around like super high scalability,
detection-as-code, and more developer-oriented security workflows.
We can talk all about the evolution and things like that.
I founded the company in August of 2018.

(03:32):
So it's been about six years, which is pretty wild. It's the most time I've spent at any company.
It's amazing how time turns, you know, weeks turn into months and months turn into years.
Well, and they've been a tumultuous six years in world economics and a lot

(03:52):
of conditions and technology shifts. So, wow, you've seen a lot, even in what
some would think is a short time, but it's a lifetime in startup world.
It's very true. Yeah, the pandemic. I mean, going into 2020, we were a
really small team, and I actually raised my Series A during the pandemic
and my Series B, I guess, kind of post-vaccines

(04:15):
and getting back to the real-life world again. But yeah, that was a
really intense experience, and then so many things with the market going, you
know, up and down, all the post-pandemic challenges,
the SVB collapse. It's just like every year there was some big thing, and I think
in most startups you have to figure out the

(04:39):
challenge of that year, every year. So it's been a really remarkable experience,
and I've had the privilege of working around some really talented,
smart individuals, and that's been really inspiring.
Well, and looking at your origins too, it's always what I think is the best
success pathway for a startup is somebody who really, truly lived the experience of the problem.

(05:05):
And it's not that that has to occur, but I find that, especially in a very technical
platform as you've got, they're technical users, technical implementation.
There's less marketing, more like real results that you have to show.
There's marketing around it, obviously, but the core of it, it's a very different type of approach.

(05:26):
And because you've lived those CLI days, poring through log files, learning,
you know, as I do, that I don't know enough about grep but I relearn it every
time I have to use it, you know,
when you look at those things then falling in love with the problem allows you
to then really look at the solution in a way of I know what it felt like to

(05:51):
experience this pain and now I
understand the journey, which is really, really hard to lay out.
A lot of startup founders really struggle, like, our product's amazing.
Why isn't everybody just buying it? You're like, well, because we actually have
to genuinely improve somebody's life.
It sounds kind of crazy, but that's, you know, what can we do?

(06:11):
Then we talk about delighting customers.
But then you also came from very large scale companies that had like really
diverse front end offerings. So you got to see the business side.
I'm curious when you were on the technology, like as a practitioner,
how much did you kind of look out
into business and overall operations to understand where that impact sat?

(06:36):
It's a good question. I think at the time, being a practitioner so early in
my career, I was very focused on solving the security problems and less
so thinking about building a business around it.
It was really just about, you know, I was just kind of a technologist,
a security nerd, trying to, you know, get into coding, get into DevOps and solve

(06:58):
problems at a really high scale.
And then, you know, the business opportunity kind of presented itself after
that moment. So it wasn't very proactive.
I always wanted to do my own thing.
I've always been a builder and enjoyed creating things just in general.
It doesn't have to be in tech.
Just I've enjoyed creating. Like I love to cook as an example.
Like I love creating a dish, or baking a lot of bread and things like that.

(07:22):
I don't know, I just get a lot of joy and satisfaction out of creating
something that people enjoy.
And that's always been something that's been very core to me.
And security always felt like it was just a genuine interest and something
that I spent a lot of time in. And we had really hard problems to solve at Yahoo and Airbnb.
And Airbnb, especially being the hyper growth, hyper scale cloud startup.

(07:47):
And then, you know, being a multi-billion dollar public company after.
So it was very inspiring to watch that.
And I think there were a lot of lessons that we all picked up along the way,
just being in the company at that time and hearing Brian Chesky talk through
the journey that he was also going through as CEO. So I think I certainly picked
up some chops along the way there.

(08:10):
And then, yeah, the opportunity kind of presented itself to get into startups
and kind of see the other side of Silicon Valley through that.
So that was really interesting.
It's been a wild journey.
Did you choose where you began sort of based on company profile or were you
more looking for like functionally this is where you really enjoyed working?

(08:34):
What do you mean by that? Were you saying my goal was to get into a high growth
startup or was it more just like that happened to be like,
I want to do this thing and it's more likely to occur at these companies? That's a good question.
I think I just wanted to change from Yahoo, to be honest, Yahoo is a really big company.

(08:56):
And I learned a ton of amazing things from those people.
They're some of the smartest people I've ever worked with in my career,
technically. Like, they do incredible work.
And I was just looking for something new. And I like the idea of going to a
smaller company, having a little bit more ownership around the problems that we could solve.
Kind of getting thrown into a whole new environment was really enticing too,

(09:18):
especially going from a big company that was very data center-centric
into a company that's fully cloud-centric. So a lot to learn there.
And like most engineers, we just want to learn and solve problems.
That's kind of what motivates us to do our jobs. And the Airbnb opportunity presented itself.
And, you know, it was really interesting. And I took it and that kind of led
me to the next space of like startup and cloud native business.

(09:43):
When you started using SIEM as a sort of platform and practice,
even in your early work, did you find you were sort of getting direction
from what was probably new at the time, CISOs, right? Like CISOs were fairly fresh.
Well, actually, not that fresh. I'm an old fella, so I remember

(10:04):
the first time hearing of a CISO, and people were like,
what is that? I don't even understand what that means. It was like, oh, okay,
so somebody who, you know, has fiduciary responsibility over that.
So I'm asking about the security practice as well. Did you find it was
security focused and centric, or was it more business centric?
security focused and centric, or was it more like business centric?

(10:27):
And they were saying, we need as a business to be secure. How can you do it?
I often find that people are, Hey, we're a security team.
And we're kind of like, that's our focus. They're like SEAL
Team Six: the missions are given, but
they're not centered on the mission. It's kind of
hard to explain it that way. But I think everything's business

(10:47):
motivated at the end of the day. And in security in particular,
there's a balance of how much do you want to
invest and how much do you want to lock something down, right? It's very much
a give and take between the technical organization and, you know, the desires
of the security team to reduce risk, or to have the data to be able to react properly when things go wrong.

(11:10):
And that's predominantly the field of security I always worked in,
which is around detection and response.
And detection and response is really just a big insurance policy in a lot of
ways for companies that don't get attacked very often.
But for big companies that are getting attacked every day, it's a necessity.
And it really is to protect the business and to protect our data,

(11:31):
protect our systems, our employees, our executives, and because they're constantly
getting attacked and targeted.
And a lot of things are financially motivated. I mean, you can read through...
You know, the Verizon DBIR report is quite good at breaking down the categories
of attacks and things like that.
So it is very business motivated to reduce risk in the business,

(11:51):
to protect and to put the right controls in place that we don't regret later on.
That's an interesting phrasing. I love,
because this is one thing that we often forget too, in, in anything we do with
automation or, or anything in, in it operations and cloud operations is we have
to revisit the things that we're doing today.

(12:13):
Like I often say that today's best practices are tomorrow's
"what the hell were we thinking?"
You know, just because things change and we've probably seen a lot of like technology
change, which one would think like, okay, perfect.
We've got these amazing accelerations with AI and technology that we can do,
but that doesn't solve the problem.

(12:33):
In fact, it probably amplifies the problem.
The tools are better and the platforms on which you can build a Panther and
other platforms are better.
However, nothing is solved. We've just maybe changed the veneer of the core problem.
Yeah. I mean, SIEM has changed a lot in that time, and we've had to adapt based

(13:00):
on the environment around us.
So everyone's always talking about data volumes continuously increasing and
data is the new gold, especially with training AI models or just being smarter
and less deterministic about the types of logic we want to look for.
So it's been an interesting journey to watch that happen and to watch a lot

(13:21):
of the legacy tooling kind of fail out by default because it just falls over based on the scale.
I mean, this, you know, the scale that we hear about is, you know,
there's some organizations doing petabytes of data per day of just log data,
which is kind of mind blowing to think about how much that is.
But, you know, some of the biggest tech companies in the
world are doing things like that.

(13:42):
So you really need to take a
very thoughtful approach to what data you
even need to collect. Because I think
at that massive scale, you have to ask those questions, and you
have to be very intentional about what are the things I actually care to understand
about my threat model and what are the things my organization wants to prioritize.

(14:03):
Because otherwise you could spend infinite dollars and infinite time. And the
type of security work that we do is really intentional and important;
for that you have to just be very specific about what you care about.
This is the interesting challenge as you look at, like, data
that you're pulling and gathering, and things that you're

(14:27):
instrumenting in order to then have, you know, analytics and decisions. Yeah, that's
really what we're aiming for: ideally decision automation, or even better, actionable
outcomes that we get from that.
And how do we know what's the right data to capture,
and not just get caught out where there's a volumetric amount of data that we can pull in,

(14:52):
and maybe it's meaningful, not meaningful now, it may be meaningful later,
but it's also terrifyingly expensive for people to just collect and hold a lot
of this data over time. Yeah, I mean, going back to intention, it all starts
with: what do you care about protecting?
What's important for your business to protect?
If you take a very easy example, like customer data.

(15:13):
Customer data lives here in this database or this set of systems or whatever else.
You're like, okay, that's good to know. We'll label that. That's like our most important thing.
What are all the ways that you can go in and out of those systems?
Maybe you have a web application that touches a database. Maybe you have certain
folks in the organization and your engineering team. can access that database.

(15:36):
And then you basically want to get eyes around multiple layers.
So defense in depth is always about getting multiple angles of the same data at different layers.
So a very trivial example is, if you're looking at application level logs,
which is very high level, you can be looking at host level logs,
which is people logging into the systems, you can be looking at network traffic

(15:57):
logs that tell a different story.
And they all sort of are helpful for different parts.
Some are more helpful for being proactive because they're very detailed,
like application logs and higher-level logs, like Layer 7.
And then the lower level stuff is for answering very tactical questions around,
okay, there's a bunch of data exfiltrated from that database, how much was let go?

(16:18):
Right, things like that. And I think what it comes down to is, most of the time,
compliance requirements: what do you have to be logging?
A lot of these things are just sort of forced by default. And then there's the
next layer of that, which is what do you want to be logging as an additional
layer, to either improve your detection and response capability or just to be able
to be more exhaustive in your answers. And then you sort of balance all this

(16:41):
with, okay, what's the volume of signals that we're getting? Are we getting too many alerts?
Is that data actually usable or not? Is the ROI of that high?
What's the cost of storing it?
You know, there's a lot of these follow-on downstream questions.
It all comes down to intention and your core threat model.
And I think the best teams that I've seen start with that.

(17:04):
And a lot of them are actually really good about being essentialists,
both in the alerting, the detections, and the data. It's all really important to put together.
What are sort of the golden signals that are ones that you in particular are
seeking out that are going to derive the best value, you know,

(17:25):
as you said, once we have intention and understanding of what we're after,
then how do you, as a platform,
sort of identify what are the better signals to pull and focus on, so that you don't
end up with just this plethora of data that's just taking up a data lake?
That's a good question.

(17:47):
We look at statistics across all of our customers, because we do threat research
on our own data sets based on supported integrations that we have.
So basically, the way that our product works is that we are effectively like
the heart and soul of a detection team: we are pulling the data in,
we're normalizing it, we're running rules on it, and then we're sending
alerts out to wherever your team does their workflow.
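That pull, normalize, detect, and alert loop can be sketched in a few lines. This is an illustrative toy under assumed field names, not Panther's actual schema or API:

```python
from dataclasses import dataclass

# Illustrative sketch of a pull -> normalize -> detect -> alert loop.
# Field names and the rule itself are hypothetical, not Panther's schema.

@dataclass
class Event:
    source: str   # which log integration the record came from
    actor: str    # the user/principal behind the action
    action: str   # what happened

def normalize(raw: dict) -> Event:
    # Map a raw log record into one common shape so a single rule
    # can run across many different log sources.
    return Event(
        source=raw.get("log_type", "unknown"),
        actor=raw.get("user") or raw.get("userName", "unknown"),
        action=raw.get("eventName", "unknown"),
    )

def rule(event: Event) -> bool:
    # A deliberately trivial detection: flag any root-user activity.
    return event.actor == "root"

def run_pipeline(raw_logs: list[dict]) -> list[Event]:
    # A real system would route matches to an alert workflow tool;
    # here we just collect them.
    return [e for e in (normalize(r) for r in raw_logs) if rule(e)]

alerts = run_pipeline([
    {"log_type": "aws.cloudtrail", "user": "root", "eventName": "ConsoleLogin"},
    {"log_type": "okta.system", "userName": "alice", "eventName": "user.session.start"},
])
print(len(alerts))  # the root console login is the only match
```

The normalization step is what lets one rule apply across CloudTrail, Okta, or any other source, which is the point Jack makes next about looking at metadata across all of it.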

(18:09):
And we look at the high level metadata of all that all the time.
And we can really only look at volume, because fidelity is a very subjective
thing to measure, meaning like, was the alert valid or not?
It's such a gray area because you can be getting an alert.
It can be an alert that you actually wanted to get, but it doesn't necessarily
mean that an attack or something nefarious happened.

(18:30):
It just means that something relevant happened that you need to know about and
you need to maybe take some follow-up action.
So we predominantly look at
volumes, and you can basically guess, you
know, using fairly subjective ranges
of volume, to be like, okay, yeah, that one's actually
working out pretty well. If there are ones that are repetitive and

(18:51):
consistently repetitive, those are probably not great and we want to turn those off.
So we look at the high-level stats to determine that, and then we're also looking
predominantly at where in the kill chain we are focusing our efforts. This
is the thing that I was just talking with someone on my podcast about yesterday,
and he had a really great point.
It's this security engineer from Brex, and he was talking about how they just

(19:13):
focus on like a really narrow part of the kill chain because everything else is noise.
So we try to take that same approach to where like the built-in rules are more
centered on like, you know, more important parts of the kill chain and try to
like shut other things off, or just kind of label it as a low-priority thing
that doesn't really go anywhere,
but is labeled in case you need it for IR incident response.
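That volume-based triage can be sketched as a small script: average alerts per active day per rule, with a subjective threshold deciding which rules look "consistently repetitive" and are candidates to tune down. The rule IDs and threshold here are made up for illustration:

```python
from collections import Counter

# Hypothetical alert stream as (rule_id, day) pairs; rule IDs are made up.
alerts = [
    ("Suspicious.Login", 1), ("Suspicious.Login", 2),
    ("Noisy.PortScan", 1), ("Noisy.PortScan", 1), ("Noisy.PortScan", 2),
    ("Noisy.PortScan", 2), ("Noisy.PortScan", 3), ("Noisy.PortScan", 3),
]

def flag_repetitive(alerts, per_day_threshold=1.5):
    """Flag rules whose average alerts per active day exceed a
    (subjective) threshold, as candidates to tune down or disable."""
    counts = Counter(rule for rule, _ in alerts)
    active_days = {rule: len({d for r, d in alerts if r == rule}) for rule in counts}
    return sorted(r for r in counts if counts[r] / active_days[r] > per_day_threshold)

print(flag_repetitive(alerts))  # ['Noisy.PortScan']
```

As Jack notes, the threshold is inherently subjective; the value here is just that the decision to disable a rule is made from data rather than gut feel.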

(19:34):
Yeah. The interesting thing then is, where do we add, like, anecdotal knowledge?
And I think this is one of the gaps we had for the longest time,
it was always filled by literally just sort of tribal knowledge in the team
of, hey, this application behaves this way, so this is stuff to look for.
Now, looking at how Panther approaches this at a technological layer,

(19:59):
how much do you have that's just core data and how much is stuff that's additional
inference that's human generated?
Yeah, that's a good question. I mean, this is where a lot of detection-as-code comes into play.
And the reality is
that every organization is different. It's different in their business logic,
it's different in their threat model, it's different in their technology stack,

(20:22):
everything. And there are objectively common behaviors that every org is probably
looking at, but it's the layering on top of that that makes
a system more effective or less effective. So we take a detection-as-code approach
where we have the baseline rules, all of them are open source and users can

(20:44):
effectively add additional logic on top of the baseline that we've created.
They can check it into a repo. It's all version controlled.
It's all automated and it's collaborative with their team as well.
So that's really the mode of maintaining that tribal knowledge and codifying it.
It's just so important. And then that way, as your team evolves, you get more people.

(21:06):
It's easy to scale that up. So we've seen a lot of success in that.
And that's a pattern that's continued growing in security in general.
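A rough sketch of that layering, assuming the common detection-as-code pattern where a detection is a Python function that takes a normalized event dict and returns True to alert. The event fields and the allowlist are illustrative, not Panther's exact event schema:

```python
# Baseline rule (the shared/open-source layer) plus an org-specific layer
# checked into the same version-controlled repo. Field names are illustrative.

ADMIN_USERS = {"root", "breakglass"}  # hypothetical org-specific allowlist

def baseline_rule(event: dict) -> bool:
    # Baseline logic: alert on failed console logins.
    return (
        event.get("eventName") == "ConsoleLogin"
        and event.get("responseElements", {}).get("ConsoleLogin") == "Failure"
    )

def org_rule(event: dict) -> bool:
    # Org-specific layer: also alert on ANY console login by a
    # privileged account, success or failure. This is the "tribal
    # knowledge" codified on top of the baseline.
    return baseline_rule(event) or (
        event.get("eventName") == "ConsoleLogin"
        and event.get("userIdentity", {}).get("userName") in ADMIN_USERS
    )

event = {
    "eventName": "ConsoleLogin",
    "responseElements": {"ConsoleLogin": "Success"},
    "userIdentity": {"userName": "root"},
}
print(baseline_rule(event), org_rule(event))  # prints: False True
```

Because both functions live in a repo, the org-specific knowledge survives team turnover and is reviewed like any other code change, which is the scaling point Jack is making.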
Yeah. And I wish that all of IT operations would learn from this bloody
lesson. Because one of the things that we often have is the belief
that everything is core to the product, when it's actually the rule engine.
And that is where the business logic comes in, and there's usually this big gap

(21:30):
and so much is left to the human side of the team to just, all right,
we'll give you as much as we can,
but you know, good luck and may your God go with you. You deal with the data.
And I think that's why security is interesting that they've,
they understood very early people in security that just the volume cannot be,

(21:52):
it's not tenable at any human scale, period.
We're already against the wall.
Everything we do is a fight. And it's an interesting position to come from versus
IT operations is just like, whew, all right, not my problem.
Like we very much, I call it MTTNMP, mean time to not my problem.
Most of it is around moving the, hey, it's a database issue.

(22:14):
It's a cloud issue. It's a network issue, but in security, there's a very strong sense of ownership.
That then also changes the way you interact with systems, I think.
So security platforms are used differently than a lot of IT ops platforms.
Sure. And it's a very different context, obviously, as well.

(22:36):
People in security are trying to reduce risk. And sometimes we end up buying
too many tools that we don't need.
Happens to the best of us. It's all good. Yeah, it's like tool bingo.
Yeah, well, and to
the point that there's really no way to
eliminate it all, this is the other, the

(22:58):
single pane of glass myth that happened, especially in ITOM, which was like, hey,
I'm going to give you one product that's going to eliminate your need for all
these other things. And that never came to fruition. At best, you just added to
the portfolio. And I find that security buyers and security users,
they don't care about getting rid of the four other tools.

(23:20):
They care about getting the right outcome, which is such a different buying and value pattern.
But at the same time, who do you sell to in security?
Because when you talk about security as code and this idea of compliance as
code and all of these ideas, the primary champion, your user is a developer

(23:43):
who probably wants as little as possible to do with security,
because that's not their primary focus.
Their focus is features, business logic.
But meanwhile, we require them to participate.
In the process of security, which is new, I'll say in, in their world.

(24:04):
So how do you approach, I'll say, the selling story, when you've got,
obviously, a CISO as your likely economic buyer, but your practitioners
may not be the people that are in the console of the platform you're selling them?
Well, so the practitioners are in security by the way. So there's a whole title

(24:28):
called security engineering, detection engineering.
And I think security engineers are typically the folks who work on the application itself.
Like if, for example, at Airbnb, security engineers would sit with the actual
product teams and make sure that what they were developing was secure and sane
and all these things, right?

(24:48):
Detection engineering is the other side of that, which is around building logging
pipelines and rules, really configuring the SIEM, configuring the response
capabilities, and working on optimizing that whole thing.
And detection engineers, at least in my experience, start as security analysts,
they don't start as software developers. That was my exact journey, right?

(25:09):
Like, I was an intern security analyst, forensic analyst, and then a security engineer, right?
So that persona is directly within the realm of the CISO, and usually under
a manager, like a detection engineering manager or a CSIRT manager.
It really just varies, a lot of teams are very different, but they are very much under that umbrella.

(25:29):
And because they don't come from that background of
pure software development,
it's a very interesting product to build for them, right? Because it's not 100%
dev workflow and you kind of have to meet in the middle of capability.
You don't want to expose too much of the metal because it's probably not the
right abstraction layer of technicality that is useful for them or productive for them.

(25:54):
So because of that, we sway a little bit more towards like some abstraction,
but not too much to where they can't get in and like change things and mold it for them.
But yeah, we sell predominantly to CISOs, managers, people like that.
Now, how do you help them, I guess, help their customer in that case,

(26:17):
which would be the actual, you know, the software engineers who are building
code that we want to build secure coding practices,
secure development practices, secure deployment practices.
So I guess it's interesting to think that you may not be selling to the developer
or showing the output to the developer, but you're guiding behaviors that are

(26:38):
going to occur in a development team.
That's true. What are sort of the tools and practices that you found effective
in your champions being able to tell that story and affect change inside the organization?
Yeah, that's a good question. A lot of it comes down to the why of why something someone did was bad.

(27:01):
So like someone actually just told me about this the other day,
someone who works at a company that uses our product and they're like,
oh yeah, we got some exposure to Panther because we got an alert that we were
doing something bad in production and Panther told us to stop.
A lot of it has to do with just exposing the SIEM to the greater organization
and allowing them to get in and see the details and then for someone in security,

(27:25):
or maybe the alert itself just explains,
hey, this is a bad practice because XYZ, like, next time do this.
So there's certainly an element of getting the organization exposed both on
the alerting side and potentially the data side as well.
Because one of the things I used to always see people struggle with,

(27:46):
we talked about how DevOps was a thing.
I suppose it still is as a general practice.
But then somebody said at one point it was like DevSecOps.
And I think that was probably a term that was coined by somebody, rightly, in the
security practice, like, hey, we need to get involved here.
And then a lot of people got really upset because they're like,

(28:07):
why do we have to so explicitly name it?
And it was like, well, because we didn't explicitly do it. It's like everybody
assumed that someone else was handling it or that it was at the end of the chain.
And that is absolutely where the pain was. I used to feel this in the early days, when we
would do a pen test right before it went to production.
And that's bizarre in today's practices. Like, we should be there on first code commit.

(28:32):
We should be inherently doing security as a practice inside everything.
But is it as much a part of daily practice in software building as we think now?
I mean, I don't know what the numbers are. You probably have way more understanding
of the data and the size of the problem. How much code is secure code

(28:54):
these days, and how many practices are secure practices?
I don't know if I have enough context to answer that question as I haven't been
inside an organization working on that for a while.
So not my area of expertise necessarily right now.
I like that. That's a good answer. The best answer is like,

(29:15):
I don't want to give you an answer that I can't back up.
But I guess on the other side, I'll ask this: you've got a ton of data, and you
talk about how you can derive a lot across your customers.
What surprising patterns have you seen emerge that maybe even your team learned about

(29:36):
and adjusted to as you saw more and more data coming in and being able to see
what was actually going on?
Oh man, I think my customers would kill me if I answered that question.
But I guess I'll say as a practice, as a foundation of how you built the platform

(29:59):
and the company, that is a distinct advantage over somebody trying to go it alone.
A lot of people will say, yeah, there's tons of open source solutions.
There's tons of things we can do.
But moving to a platform that has a broader view and a broader set of data to
derive knowledge from, I guess that's really where I want to go is like,
you have an advantage over anybody in a single silo implementation,

(30:22):
deploy your own standalone solution is great,
but they are only as knowledgeable as the things that touches versus with Panther
now that customer has everybody else's learning that's now potentially going
to derive value for them.

(30:43):
So how did you, as you built the platform and you looked at how do we bring
knowledge back into the core?
That's a complex challenge, to do that securely and safely, obviously,
but also to add the most value for every customer.
So what was your ethos as you built the platform wrapped around that idea?

(31:06):
Yeah, I mean, the thing that's most amazing about building a product and having
users is that they inspire you to solve the problem in new ways.
And it's the same way of hiring a great team.
They emerge with a different perspective based on their experience,
and they solve problems in better ways.

(31:27):
And putting all that knowledge together and, you know,
sort of spreading that benefit into the customer base
has been one way, I think, that we've taken
customer feedback and run with it. So we launched a new
version of our search product last year, and we
got a lot of great feedback. We incorporated that in, and we're continuously

(31:49):
working on products based on customer feedback and on
our own team's inspiration about how best to
solve that problem for them. Then we'll roll that
out, we'll get some quick feedback, we'll incorporate it, and, you know,
they'll inspire us to think about something new
that we didn't think about when we built it. And now you have a new capability
that's sort of rinsed and repeated across the entire customer base. So that's

(32:11):
been really inspiring and really remarkable to watch happen. One thing that definitely
stands out, even in the way you describe the problems
and development in the company, is you're very customer-centric and
customer-focused, or customer-obsessed, I guess, is the Bezos phrase, though I'm sure it wasn't just him that said it.

(32:31):
So it seems as though you knew right away that deep involvement with customers
and listening was going to be the most helpful thing for guiding platform growth.
So how do you, like, build a listening team as you're doing product development?

(32:53):
I think in the early cycles, it's just spending time with the customer at the really critical points.
So when you get on the first call and the second call, maybe you do a demo.
There's always a lot of follow-up questions folks have, and that's when you
get in the mind of what problems are they trying to solve?
And then you triangulate that with what are we trying to solve?

(33:17):
And then sort of course correct based on that. And then once they become customers,
you know, the first few months are really critical as well when they start to adopt.
And it's the same sort of thing: being very careful to watch all the feedback
and act on it very quickly,
right? Because you want people to have a good experience from,
you know, the first few months.
And honestly, that first year is really important because, you know,

(33:40):
one of my board members has always said, oh, the first year is a trial, right?
You have to win their business continuously every year.
And you do that by being receptive to their feedback, being there when they
need support, which is something that we've heard a lot of positive feedback
on. Our support team's excellent.
And just reacting and evolving, right? Like, because people want to watch evolution happen.

(34:02):
And in every startup, no product is complete in the beginning.
There's always things you need to fill in over time.
And as long as you show that progression with your customer base,
that's one of the things that motivates them to continue working with us.
So yeah, just the critical touch points, checking in, you can do customer advisory
boards, things like that, roadmap presentations.

(34:24):
But I've always found that just the, even in the sales cycle and the early customer
cycle, if you stick with them very consistently, you'll get a lot of the right
signals to incorporate back in.
Now that's fun, actually. I love that phrase that the first year is a trial, because it's funny.
We really do think like, oh yeah, once you're in, it's now a renewal problem.

(34:45):
And sometimes, like a lot of companies, you'll find they have a second
desk that handles renewals.
They move account reps. Like, that's a future-me problem,
you know, per the meme that we say.
But as you built that organization,
then that must have changed how you built the company because you had to have
a real end to end customer, you know, retention flow and customer listening

(35:11):
flow probably fairly early on.
So did you have that organization kind of built for scaling across the whole thing?
Where did you put your focus in those early hires?
The early, yeah, the early hires were just filling critical gaps across the board.
So hiring folks for sales and marketing, I mean, a lot of it was just engineering

(35:34):
hiring in the beginning.
And maybe one or two small like marketing, field marketing, product marketing types of things.
And then there was a phase where we just kind of, we had rapid customer growth.
So we're like, okay, we need to build a customer success org.
We need to build our sales org out, our marketing org out, and just kind of
like went into that crazy growth mode.
And since then, we've learned a lot of what works and what doesn't work and

(35:59):
have, you know, evolved the organization accordingly to that.
So, yeah, the early days, it was really just a huge, like, zero to one or one
to ten and figuring out, like, how are we going to support all these customers
in a way that we can continue to, you know, renew them year over year.
So that was it. Yeah. Yeah, well, and I wonder,

(36:20):
I'll say, how much has surprised you about every day that occurred.
As a founder myself, I'm like, every time you think you got it sorted out,
you get reminded about 14 minutes later that we're all on the same roller coaster.
It's just, I may have a better sense because I'm closer to the front of the car, so I can
see what's going on. Yeah.
You learn that you don't know much every year. You think you know, and then you don't.

(36:45):
You're like, oh, that's new. Okay. Someone on my board always says,
or he doesn't always say it, but he has a quote that stuck with me,
which is: we only want to make new mistakes.
And I love that. That's a good phrase.
I mean, the funny thing is, like, talk about timing.
Every series of black swans has occurred in the time that you've

(37:06):
been building this team, at least in the last, like, you know, ten years.
I often think, like, how much of this is repeatable?
I kind of worry that we're just in a bespoke-lesson world every
day these days. Yeah, very true.
You have to be prepared for anything. Now, the next thing is, because of

(37:31):
how many other places you gather from,
and potentially send data and knowledge to, the partner ecosystem
and partner integrations is a tough one, especially as an early-stage startup.
Spending your efforts in engineering, co-engineering, and co-marketing,
it's very easy to get lost.

(37:51):
I'm like, oh, let's come up with 10 partners because that'll open up account access.
But that's a tough balance. So how did you approach where your integration points
are at a technology layer as well as maybe even on the business side? Yeah.
Early on, it was just based on the architecture that we had built on top of.

(38:13):
So AWS and Snowflake were really key ones and still are to this day.
And showcasing the capabilities together. You know, I remember doing a podcast,
or not podcast, a webinar with Snowflake very early on.
And it was kind of mind-blowing because Snowflake was really innovating on the
database side and a little bit more than we had seen in other parts of the industry.

(38:36):
So we really spent a lot of time building with them and optimizing, and we still
do, you know, a lot together with Snowflake. And Amazon is similar.
We've been partnering with them from a go-to-market perspective.
And I think what it really comes down to is how do you want to distribute your
software and who is closest to your customer, right?

(38:58):
And then making sure that we're aligned on the same go-to-market motion.
Like your sellers are going after the same people we're going after.
And just building that relationship and that business relationship,
in addition to a technology relationship, is super important to make that very
fruitful in the long run. And I'd say focus is important here too.
Like you don't want too many.

(39:18):
I mean, you really only have maybe one distribution model that's core.
So really building that relationship up is important. And you can do it on many
layers. You can do it on like the technical layer.
You can do it on the leadership level. You can do it and go to market,
like all these different things.
You know, and that becomes the fun part of your distribution model and how you're
going to connect with that customer.

(39:40):
That becomes an interesting challenge because,
obviously, even some of the smoothest SaaS implementations often still have
a bonus services opportunity where we can get higher levels of adoption.
And so there's a potential partner ecosystem to light up some services partners.
But again, that's also a risky thing early on where, are you turning over knowledge?

(40:07):
Are you turning over account control to some of these other partners?
So, you know, how did you focus early on where the services
and post-sales implementation stuff would live?
We didn't do it for a while. I think because early on when you don't have product

(40:28):
repeatability yet, it's hard
to just give that to a services provider and be like, okay, go do this.
I think when you're in rapid iteration mode, it doesn't work well.
Once you hit a critical mass of stability, then I think it becomes more scalable to do that.
And we had a few MSSPs using us early on.

(40:49):
And we've been going deeper into that ecosystem now that we've been in the market for several years.
So again, it's just focusing on a few of the key relationships,
making sure that they work and are repeatable, and then moving on to others
that are very like-minded. And there's a handful that we've been partnering
quite closely with recently.
That's been great. We've really enjoyed it.

(41:09):
I guess, looking at your own personal experience and where you came from:
you worked at pretty high-scale organizations and high-scale technology teams.
You know, how important was scalability versus, like, functionality in what you
were trying to solve? A SIEM works at massive scale,

(41:33):
but that in itself is a monstrous problem.
And you can get caught out over-engineering a solution,
or you could come out the other side where you're like, I have a functional
problem that I'm solving, but this platform may not scale.
Well, yeah, that was always the problem in SIEM, by the way: the second
and, you know, the last thing you said.

(41:53):
so a lot of it was starting with the fundamental problem that we need to solve
is cost and scale and more control around how we want to analyze our data.
Because at that point in time, in the late 2010s, it was becoming all about
data lakes and big data and all of that because of the intelligence you can derive at that scale.

(42:17):
And you've seen this in BI business intelligence forever, right?
Massive data warehouses, harvesting information, and then using that to make
business decisions. We're doing the same thing in security now.
And a lot of those fundamentals have to make their way into security around
data pipelines, data engineering, scalability, cost considerations,

(42:38):
ETL, like all of these different things.
And now obviously training AI models and LLMs and Gen AI.
And so there's been quite a lot of advancement there. But everything that we
do at Panther was really centered around the concept of detectionist code and scalability,
because we knew that those were things that we needed to support modern security

(42:59):
operations teams that want longevity in their programs.
What would you describe then as sort of the fundamentals of SIEM?
And you should be rightly biased because obviously you're building around it,
but what are the core fundamentals in the same way we think of,
you know, we've got golden signals of Kubernetes and we've got core practices in this.

(43:22):
What is the purpose and the reason for a SIEM to need to exist?
Oh, I mean, there's a multitude of reasons. You won't be able to see anything
happening in your organization if you don't have one. That's the first thing.
You won't pass a lot of compliance standards as well if you don't have it,
which is probably the most important thing to the business:

(43:42):
PCI, HIPAA, ISO 27001, SOC 2, all those things.
So there's just a core need for it from pretty early days, even on the most basic level.
You just need to collect your data and you need to have an audit trail of the
sensitive things going on in the organization. That's just the core need.
And then when it comes down to the components of that, how you actually operationalize it.

(44:06):
Based on what we're seeing around scale, data pipeline is now a core requirement
of this in modern organizations that have a lot of data, which is everybody.
And after that, I'll be biased and say a detection-as-code system,
because that's really what we center on.
But even putting us aside, that's just a pattern that we see in general.

(44:27):
We see all types of organizations adopting it because it allows you to codify
the business logic into the rules.
It allows you to have a more collaborative practice with your team.
It allows you to have reliability, which is probably one of the most important things.
We don't want a SIEM that's running untested. Like any production software,

(44:47):
we want high testability and high coverage.
So a lot of those paradigms flow from software development, which is the key
component of detection as code.
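To make the detection-as-code idea concrete, here's a minimal sketch in a Python `rule(event)` style, paired with the kind of unit test that makes detections behave like production software. The log field names and values are hypothetical, not any specific product's schema:

```python
# Minimal detection-as-code sketch: the rule is plain Python, so it can
# be code-reviewed, version-controlled, and unit tested like any other
# production software. Field names below are hypothetical.

def rule(event: dict) -> bool:
    """Fire when a console login succeeds without MFA."""
    return (
        event.get("eventName") == "ConsoleLogin"
        and event.get("responseElements", {}).get("ConsoleLogin") == "Success"
        and event.get("additionalEventData", {}).get("MFAUsed") == "No"
    )

def test_rule():
    # Codified expectations double as regression tests.
    alert = {
        "eventName": "ConsoleLogin",
        "responseElements": {"ConsoleLogin": "Success"},
        "additionalEventData": {"MFAUsed": "No"},
    }
    benign = dict(alert, additionalEventData={"MFAUsed": "Yes"})
    assert rule(alert) is True
    assert rule(benign) is False

test_rule()
```

Because the rule is just a function over an event dictionary, the business logic, the tests, and the review history all live in version control together.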
And then the other side of that is how do you respond in a way that's very quick and scalable?
I think scalable is probably the keyword here, both from a team and cost perspective.

(45:07):
So a lot of things we talked about before around being intentional about what
things you're collecting, there's a huge push now for tiering your data and
filtering your data. And again, a lot of these are pipelining concepts.
These aren't new things. They've just now made their way into security.
And we just see those components: a pipeline, a detection-as-code system,

(45:27):
you know, something that's backed by data lakes or something that's very scalable,
something you can integrate with training AI models and things like that,
and then responding in a very repeatable, scalable way.
And we sort of see those core components often configured in many different types of ways.
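The tiering and filtering idea can be sketched as one small routing step in a log pipeline. The tier names and the set of high-value event types here are illustrative assumptions, not any particular product's behavior:

```python
# Illustrative data-tiering sketch for a security log pipeline: route
# high-value events to fast, searchable storage and everything else to
# cheap archival storage. Event names and tier labels are hypothetical.

HIGH_VALUE_EVENTS = {"ConsoleLogin", "DeleteTrail", "PutBucketPolicy"}

def route(event: dict) -> str:
    """Return the storage tier an event should land in."""
    if event.get("eventName") in HIGH_VALUE_EVENTS:
        return "hot"   # queryable data lake / SIEM tables
    return "cold"      # object storage, rehydrated on demand

batch = [{"eventName": "ConsoleLogin"}, {"eventName": "DescribeInstances"}]
print([route(e) for e in batch])  # ['hot', 'cold']
```

The point of a step like this is cost control: only the events you intend to query continuously pay for hot storage, while everything else stays cheap but recoverable.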
The data thing is definitely key. And of course, you know, AI being the new

(45:52):
buzzword, the new acai berry of technology here, it's the new superfood.
So everybody's AI-ing everything and gen AI.
But as much as there's technological advancement, there's, I'd say,
significantly more risk that's coming along with these advancements,
especially around, like, data retention.
Even, like, building internal systems that we've got,

(46:14):
like I'm having to build in right-to-be-forgotten capabilities in vector stores.
Like that's really gnarly. When you think about the cost and technical debt
of being able to retrain models,
if there's a data problem, like, there's stuff that we never thought we'd

(46:35):
have to experience, or even worse, that maybe a lot of people don't know about.
So when it comes to, you know, data pipelines and AI data and the sort of the
growth of people putting way too much data in chat GPT,
what do you see as this next wave of risk that people are about to discover?

(46:56):
Well, I mean, there's a lot of red teaming happening on models to break them,
right? Right. And I think we all know the consequences of that.
Like you just made a joke about people putting too much into ChatGPT and
then someone prompting the model to
get that information about some big organization that may have done that.
So, I mean, on LLMs and Gen AI, that's the risk is that they're used for nefarious purposes.

(47:21):
Like, people have done and published a lot of research that allows them to jailbreak
the model, basically. Yeah.
So preventing things like that is really important.
And then, in addition, there's the data privacy perspective you just mentioned.
But the beauty is that there's a lot of really amazing open source models that
are becoming super popular, like Claude, for example, and they compete quite

(47:43):
well with OpenAI models.
So I think the most fascinating part of this is thinking about the democratization
of those models, those foundation models, and sort
of watching where they pop up in all these other places.
Like, I know Facebook had also done some open source here too,
but the ecosystem is exciting. I think I'm

(48:04):
personally more excited than I am pessimistic or
afraid of it. I think the advancements we'll continue to see in the next years
are going to be mind-blowing, and there's just such a huge potential for significantly
improving our comprehension of the world around us with it, or creating new solutions
that couldn't have existed before because of it. So that's where I'm most excited.

(48:26):
And the open source side of it is really encouraging to watch happen.
So it's not just monopolized by Microsoft, you know?
Yeah, yeah. And we see even the battle that's going on in the licenses
that are applied to stuff, you know, people even going on about how OpenAI should
change their name because it is not open. But more like

(48:51):
Open source in a truly open fashion, because we've even
seen it in broad software adoption. You know,
Redis was one example, where, because of some of the difficulty they ran
into, they had an AWS challenge and AWS had a service. So along came the
AGPL, and there are MGPLs, there are lots of modified GPLs that give a little

(49:12):
bit more protection: the idea that it could be used, but not commercially.
And then, you know, literally on the day we're recording,
we saw there's Valkey, the new platform that's now a new,
you know, Redis alternative.
And I don't mean to single out Redis, it's just what came to mind.
But that's the case where we're gonna see this now in the open source models,

(49:33):
where even something that's gonna be really fantastic in GPT-5,
we're likely gonna see Claude 5 by then as well, that's gonna be doing pretty fantastic stuff.
Like the jump from Claude, you know, two to three, or whatever the
naming structure we've got for it, like that just came out last week.
They're making huge changes and huge advancements. Yeah. Yeah.

(49:58):
And it's becoming cheaper, too. Right. And I guess that's the interesting thing.
Now, for you, looking at what more you can do with AI, how much does AI seem
like an opportunity internally, in what you can do with existing data that you're
already capturing, maybe?

(50:19):
Oh, it's a huge opportunity. We've already done a lot of projects on it.
And we'll probably release something around the RSA conference that sort of
dips our toe into the water here as an early way.
So I'm excited. I think there's just so much potential to improve the life of

(50:43):
analysts and engineers.
And the thing I've always thought with AI is like it gets you 80% of the way
there, you sort of fill in the 20.
And then that allows you to free your mind up to do the more important things.
It's like, you know, I have a Tesla, and like most San Francisco-based founders,
it has Autopilot, and Autopilot does exactly that, right? I'm driving on the highway,

(51:06):
I turn it on, I shut my brain off. You know,
I don't have as much attention on the task anymore because
I've trusted another system to do it for me.
And I think that's kind of the promise of AI is similar.
It's like, okay, I want you to edit this for me. I want you to,
you know, take this mundane task that I was doing repetitively and automate it for me.

(51:28):
And a lot of times it does it better because it has access to so much reference data.
Right. So I think in security, it's just, there's a huge opportunity to use
it. I just want to make sure that we use it in the right ways.
AI and ML have been thrown around in infosec for so long,
and in a lot of ways people don't trust it, because abnormality is a common,

(51:50):
everyday practice in human behavior.
Humans are certainly creatures of habit, but humans can also deviate from those
habits, and that doesn't necessarily mean it's bad.
I think a lot of systems that we've built in security just auto-dictate, oh, that's bad.
And then what ends up happening is we look at it for one second and we're like, no, it's not. It's fine.

(52:13):
The model just doesn't know that we do this thing every Saturday, right? Right.
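That "thing we do every Saturday" context is exactly what a detection-as-code approach can encode directly, so a known job stops producing a weekly false positive. A minimal sketch, where the actor name, schedule, and field names are entirely hypothetical:

```python
# Sketch: encode the "we do this every Saturday" context directly into
# the detection so a known weekly job stops producing false positives.
# Actor names, schedules, and field names here are hypothetical.
from datetime import datetime

# actor -> weekday the activity is expected (Monday=0 ... Sunday=6)
EXPECTED_WEEKLY_JOBS = {"backup-service": 5}  # Saturday

def is_expected(event: dict) -> bool:
    ts = datetime.fromisoformat(event["timestamp"])
    return EXPECTED_WEEKLY_JOBS.get(event.get("actor")) == ts.weekday()

def rule(event: dict) -> bool:
    # Alert on bulk exports only when they fall outside the known schedule.
    return event.get("action") == "bulk_export" and not is_expected(event)

# The Saturday run is suppressed; the same action on a Monday still fires.
saturday = {"actor": "backup-service", "action": "bulk_export",
            "timestamp": "2024-05-25T02:00:00"}
monday = {"actor": "backup-service", "action": "bulk_export",
          "timestamp": "2024-05-27T02:00:00"}
assert rule(saturday) is False
assert rule(monday) is True
```

The allowlist is code, so when the Saturday job moves to Sunday, the change is a one-line diff with a test, not a retrained model.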
Yeah, and that becomes the interesting challenge of anything to do with detection:
there's random anomalous behavior, there's periodic behavior,
there's systemic behavior, there's seemingly random behavior.

(52:34):
That's always the hardest part: what looks like one thing,
when you actually connect enough of the backend systems together into a broader understanding,
we think we're looking at the problem, but we're actually looking at the results.
The problem happened a long time ago.
As a practitioner, you're a very optimistic person for a security person.

(52:55):
So looking at your industry, you know, and the events, how do you find the tone
of security communities now compared to when you came up as a practitioner?
That's a good question. I haven't been to the conferences this year.

(53:15):
So once that happens, I'll have a more informed answer.
But I think just mostly people are embracing the new capabilities of Gen AI and things like that.
And yeah, I'm not a skeptical security person as much anymore.
I think I haven't been a practitioner for five, six years.
But I think if you talk to me, then I probably would be.

(53:38):
But yeah, I've been paying a lot of attention to more of the AI community in
general and just watching things there. And of course, I pay attention to the
big security companies and what they're working on.
Yeah, I just see a lot of the same: embracing new tech,
embracing a more holistic, risk-based approach, and doing what we can to harness
the technology in really interesting ways to make our jobs easier and to protect our organizations.

(54:02):
So generally just focused on that versus complaining about things.
Yeah, well, it's very easy for us to get caught in that.
But again, I think as a community of practice, security people are...
people think they're strange, that they're oddly guarded.

(54:23):
I'm like, well, that's natural, because you're
looking at the world through a different lens of what you're caring about.
But at the same time, I find they're very collegial and very open to sharing
information amongst each other.
Even looking at, like, RSA: it's a big conference, but very seldom

(54:43):
do I really think about the fact that RSA is a brand, a company, versus
AWS re:Invent, where all you think about is, like, it's their money, it's their thing.
But RSA, I really see the human side of the conference is so much stronger than
a lot of other vendorized conferences.
And then of course you get into Black Hat and even down to B-Sides,

(55:06):
like the regional stuff.
They're very human centric. And this has always been the most fascinating thing.
They're the most tool loving folks, but they're also the most human understanding folks.
It's an interesting dichotomy, the way that is, versus ITOM and IT ops
folks, which is my angry bunch that I grew up in: the trolls under

(55:28):
the bridge that just inherently distrust other people.
But security folks, you distrust products and you trust people.
It's an interesting thing. Yeah.
The conference circuit's fun. It's fun to see what new things are out there.
It's fun to hear on the other side how practitioners are getting value from those things.
And I think conferences like B-Sides and DEF CON allow you to see the other side.

(55:53):
I mean, Black Hat and RSA are like 100% vendor conferences.
And as practitioners, you know, I always used to ignore those.
I used to just be like, I don't want to get sold to.
So it's funny, I always didn't want to go to those conferences.
And now as a founder, it's like, those are things that I look forward to every

(56:15):
year, just to see other founder friends and to connect and partner with new companies.
That's how we've made some of our partnerships: just being on the floor
and talking to people and making friends and working together. It's fun.
Yeah, finding birds of a feather. It's funny, I guess maybe that's me;
like, my lens was so different that I never really saw the brand attachment as

(56:36):
strongly, because I was outside of interfacing with those brands directly.
So that's interesting that you brought that up. Now,
what's your advice? Yeah. So Jack 2.0, 18, going to university,
ready to make decisions about a future life in, like, security.

(56:57):
What do you tell that person? What advice would you give to help them
towards their path in this industry?
In security? In, like, protection? Yeah.
I'd say learn to code, learn how to be a developer, learn how to like run software,

(57:18):
learn how to set up infrastructure, just kind of understand how the system works,
like know how the sausage is made to a degree.
And then as you do that and you get some practical experience,
like you'll, you'll figure out where to go next.
But I always found that in school, it's all just so conceptual.
And, you know, I'm not saying anything groundbreaking here.
But by learning how to actually build a system, or to set up a system,
or really just how something works to its core,
that gives you a different fundamental set of knowledge that you wouldn't
otherwise get. And you have to consistently do this year over year. So now, with

(57:59):
Gen AI, it's like, okay, learn how an LLM fundamentally works, right? Because
you need to know how it works to get the value from it, from a certain context perspective.
And prompt engineering is one of those things where, to take that example further,
if you don't know how to prompt the model, like you're not going to really be able to use it.
So as you learn how to prompt the model, how the model can get more contextual,

(58:21):
then you sort of apply that in the context of something like security, or specifically,
like, triaging an alert, or understanding what
this event is actually saying. And just repeat
that over your career as whatever new thing comes
up: a new cloud provider, a new technology that's
really helpful for solving some type of problem. Just kind of go into the weeds

(58:44):
of it, learn about it, and always be learning, always be open to growth. Like, just
read books every month. And yeah, I'm a very growth-oriented person in general,
so that would be my advice: just to be a sponge forever.
Just continuously learn. Don't make the same mistake twice. And yeah.
I like that. We only want to make new mistakes.

(59:06):
That's such a fantastic phrase.
And so, looking at... because you are growth-oriented, and you have a heavy responsibility
as a founder and a builder and a community representative, and you're furthering
the people of this industry and the community of practice.
Where do you feel that you wish you could spend more time? Hmm.

(59:31):
I think doing more community-focused work is an area that I'm planning to spend
more time in, but really, just replaying the stories of others has been really fun.
I mean, I have a podcast myself. It's called Detection at Scale.
I'm going to plug it just because I think it helps folks understand a little
bit the context of the work that I do and what we do as a company.

(59:54):
It's not at all an advertisement for Panther.
It's all about the people who are in the field working on the problems.
And we talk about solutioning and things like that, but
it's really just centered around what it's like to be in
that role, and some of the challenges, and some of
the really interesting solutions people have come up with there. So, yeah,
just being more outward, writing more, taking a

(01:00:15):
lot of the knowledge that, you know, I've had being a
founder and sort of spreading that through, and teaching others how to be
better at detection and response. So that's an area I'm going to spend some
more time in. Have you found really good sort of study methods, or even, you
know, communities of practice, that have been helpful for you on that journey?

(01:00:39):
Talking with customers is really helpful. But I've learned a lot through just
hearing people on sales calls talk about solving their problems.
So, like, you know, I've written a few blogs as well on Substack about the
various ways of, like, driving down false positives, for example.

(01:00:59):
And a lot of that just comes from my experience of being around a lot of these people.
Talking to them on the podcast is a huge source of that inspiration.
Reading other articles. I think, in general, security articles tend to be
so buzzwordy and surface-level.
So, like, one of my goals is to write it out and be exhaustive.
And I do that by using a combination of things, like LLMs, to ask basic questions.

(01:01:21):
And then I look at how vendors have talked about it in the past.
I look at other tools and just kind of triangulate and be like,
okay, how are we all collectively trying to solve this in a lot of different
ways, and talk about it.
It's hard to have the nuance. That nuance is really difficult,
especially if you're looking at other tools and trying to differentiate, like,

(01:01:43):
okay, this tool does the same type of thing slightly differently; what does that
actually mean for my capability,
and how does it compare to this other tool? So it can actually help people
reason about using the best tool to solve their problem. Which is another hope
that I have: that people just kind of get the context, and they understand
it, and then they can take advantage of the technology
around them. Yeah, and this is

(01:02:04):
the one thing that I find I used to get stuck on as a practitioner:
I would have a bunch of
tools that I'm using 30% of, and it
was really difficult to invest the focus in
getting the most out of each platform, and maybe even finding efficiencies that
I could get by, you know, either doing more with something or maybe even

(01:02:25):
eliminating a step that's in the chain. I'm with you on keep learning
and find a peer group, you know. And,
selfishly, I started the podcast so I could find amazing folks like you, Jack,
and I could learn these things. And it worked out that when I started
listening to other podcasts, I always gravitated towards ones that were

(01:02:47):
conversational, people in similar areas. I'm like, oh, okay.
I guess it's basically a podcast in the practice of, you know,
finding other founders, but yet technically focused.
So it's neat to be able to split the line between those things.
What's your, uh, book of choice this month?

(01:03:11):
It has kind of a funny title. I bought it in the airport when I was flying
back to San Francisco recently. It's How to Make a Few Billion Dollars.
Nice. It's a pretty good read. It's about this serial entrepreneur who started
all these different businesses throughout his life, and just the lessons that
he's learned through that. It's a fun read.

(01:03:32):
Also, let's see, what else am I reading right now? I'm reading a book called Happier,
No Matter What. That one's really good. It's kind of, I don't know if you've heard of Antifragile?
Yes, yeah, yeah. It's adjacent to that. Okay, cool. That book's been good so
far. I always typically have like two or three books open, and maybe I'm
listening to Audible as well.

(01:03:52):
So one of my favorite random things I learned a couple years ago was
that Audible and Kindle are synced. I had no idea.
I would open my Kindle and be like, this is exactly where
I left off in the audiobook.
And I finally realized, oh, wow, what a great way to pair them,
especially when there were PDFs and examples.

(01:04:12):
I'm like, oh, I can stop here, bust open the Kindle, and see it in tandem with where I am.
It's pretty sweet. Yeah. And especially with flying and driving,
anytime I can pop on something and
be learning, it feels good, you know. And
quickly, let's say, to merge two things: I just
wrote an article that's about to launch, right in line with what you talked about.

(01:04:36):
When I saw Panther, I said, number one, I love the platform and the approach.
Beautifully simple in the way you describe it, but I know the complexity of the
problem you're solving.
Your team is fantastic, and your way of telling the story is really, really good.
And I love people who are not just like, we're the greatest thing ever.

(01:04:59):
We're game-changing, industry-leading, all the hyphenated buzzwords.
Your humility in your approach means a lot. As somebody who
looks to other founders for lessons, I really like your approach.
But let's go to cooking, because I think about this for every piece of content that I generate.
You know, I called it, how do you create Michelin-star content?

(01:05:22):
And so I looked at Daniel Humm, who's the head chef and owner of Eleven Madison
Park in New York. And I've been lucky enough to go there.
My partner, she and I went there a couple of times, and he has
four fundamentals for a true dish.
And I think this tells your story, even of a startup or a platform or a thing.

(01:05:43):
Number one, the dish has to be delicious. It seems obvious, but you actually
have to think about it. Number two, the dish has to be beautiful.
And it is amazing that, you know, people say presentation doesn't matter.
If that were the case, there'd be sushi milkshakes.
It's not the case. The third is creativity:
every dish should add to the dialogue of food.

(01:06:06):
And I thought that was such a great way of extending the vocabulary of
cuisine with something we do. And then, something you mentioned so strongly
throughout: intention. It needs to make sense that this dish exists.
And from spending even a very short amount of time with you, Jack,
you have intention, and I'm very proud of all that you've done.

(01:06:30):
And thank you for sharing your time and your story and your lessons today.
So for folks that want to check it out, of course, go to panther.com and make
sure you check out the platform and the podcast.
Where do we find, what's the fastest way to get to the podcast,
and also, if folks want to reach out and connect with you, Jack? My podcast
is called Detection at Scale, and I also

(01:06:51):
have a Substack that's called that same thing. And then
on Twitter, now called X, jack underscore naglieri, just my first, underscore, last
name. I mean, yeah, like you said, the website, panther.com, reach out to us there.
We have a Slack channel too, for folks that use the platform and want to chat
with each other. So it's been really fun.

(01:07:12):
And yeah, thanks for having me. The more folks I see from your team,
the more I can tell you make fantastic hiring decisions.
So your people represent you very well. And it's a tough thing, knowing that
hiring is a challenge, running a business is a challenge.
I'm looking forward to seeing more and more updates. Good luck with all the conferences coming up.
And yeah, hopefully we catch up again in the future and hear more about how things are progressing.

(01:07:35):
Yeah, that'd be great. One thing I'm gonna add: you mentioned Eleven Madison Park.
Yeah. Well, Will Guidara has a book called Unreasonable Hospitality as well. Ooh.
You can check that out if you haven't. It's pretty good. That is awesome. Yeah, and it's funny.
Some of my best books, when people ask me, how did you get good at technology?
It's like, what's the book that you recommend about learning how to deal with

(01:07:58):
technology and telling the stories in tech evangelism?
And I said, my most profound learning was from the DSM-IV, which is,
you know, now the DSM-5. I learned about how people break so that I could learn about how people work.
And most of my study is behavioral psychology and engagement psychology.
And it just so happens that my dad was in tech, and so I grew up around it.

(01:08:20):
I was a database developer when I was 15, but then I went down a very different
path: I did retail, I was a cobbler, I did all sorts of strange things along the
way, and then came back to tech.
And so I love non-tech books that have a surprising impact on my technology work.
And it sounds like that's exactly the kind of book that I'm going to love.

(01:08:42):
There's two things. One, I'm the opposite. My dad is a psychologist, and I'm a technologist.
And the other thing is that if you want to look for really good startup advice
in a non-startup book, read The Design of Everyday Things.
Fantastic. That book's amazing.
I owe you so much just from all these book recos. This is fantastic.

(01:09:04):
So this is really, really cool. And I thank you.
I love your approach to all this stuff, and this has been super awesome. So yeah, thanks, Eric.
Very cool. Excellent. Well, there you go, folks. I've got a book list for you.
I've got a lot of reading to do ahead.
Well, with that, thanks very much, Jack. And yeah, definitely, folks,
follow the links that are in the show notes and do check out the platform, because

(01:09:25):
I've rarely seen somebody who does what they say and you actually see it show up in the platform.
So great work. Thank you, sir. Appreciate it.
Music.